00:00:00.001  Started by upstream project "autotest-spdk-master-vs-dpdk-v23.11" build number 1059
00:00:00.001  originally caused by:
00:00:00.001   Started by upstream project "nightly-trigger" build number 3721
00:00:00.001   originally caused by:
00:00:00.001    Started by timer
00:00:00.134  Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.135  The recommended git tool is: git
00:00:00.135  using credential 00000000-0000-0000-0000-000000000002
00:00:00.136   > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.166  Fetching changes from the remote Git repository
00:00:00.168   > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.193  Using shallow fetch with depth 1
00:00:00.193  Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.193   > git --version # timeout=10
00:00:00.232   > git --version # 'git version 2.39.2'
00:00:00.232  using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.258  Setting http proxy: proxy-dmz.intel.com:911
00:00:00.258   > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.205   > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.216   > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.226  Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:07.226   > git config core.sparsecheckout # timeout=10
00:00:07.238   > git read-tree -mu HEAD # timeout=10
00:00:07.254   > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:07.276  Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:07.276   > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:07.384  [Pipeline] Start of Pipeline
00:00:07.399  [Pipeline] library
00:00:07.401  Loading library shm_lib@master
00:00:07.401  Library shm_lib@master is cached. Copying from home.
00:00:07.417  [Pipeline] node
00:00:07.432  Running on VM-host-SM0 in /var/jenkins/workspace/nvmf-tcp-vg-autotest
00:00:07.433  [Pipeline] {
00:00:07.444  [Pipeline] catchError
00:00:07.446  [Pipeline] {
00:00:07.459  [Pipeline] wrap
00:00:07.468  [Pipeline] {
00:00:07.477  [Pipeline] stage
00:00:07.479  [Pipeline] { (Prologue)
00:00:07.500  [Pipeline] echo
00:00:07.501  Node: VM-host-SM0
00:00:07.509  [Pipeline] cleanWs
00:00:07.517  [WS-CLEANUP] Deleting project workspace...
00:00:07.517  [WS-CLEANUP] Deferred wipeout is used...
00:00:07.523  [WS-CLEANUP] done
00:00:07.712  [Pipeline] setCustomBuildProperty
00:00:07.786  [Pipeline] httpRequest
00:00:08.146  [Pipeline] echo
00:00:08.148  Sorcerer 10.211.164.20 is alive
00:00:08.154  [Pipeline] retry
00:00:08.160  [Pipeline] {
00:00:08.172  [Pipeline] httpRequest
00:00:08.176  HttpMethod: GET
00:00:08.177  URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.178  Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.191  Response Code: HTTP/1.1 200 OK
00:00:08.192  Success: Status code 200 is in the accepted range: 200,404
00:00:08.192  Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:13.579  [Pipeline] }
00:00:13.594  [Pipeline] // retry
00:00:13.601  [Pipeline] sh
00:00:13.878  + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:13.894  [Pipeline] httpRequest
00:00:14.247  [Pipeline] echo
00:00:14.249  Sorcerer 10.211.164.20 is alive
00:00:14.259  [Pipeline] retry
00:00:14.261  [Pipeline] {
00:00:14.275  [Pipeline] httpRequest
00:00:14.280  HttpMethod: GET
00:00:14.280  URL: http://10.211.164.20/packages/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz
00:00:14.281  Sending request to url: http://10.211.164.20/packages/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz
00:00:14.303  Response Code: HTTP/1.1 200 OK
00:00:14.303  Success: Status code 200 is in the accepted range: 200,404
00:00:14.304  Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz
00:01:45.288  [Pipeline] }
00:01:45.306  [Pipeline] // retry
00:01:45.314  [Pipeline] sh
00:01:45.594  + tar --no-same-owner -xf spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz
00:01:48.139  [Pipeline] sh
00:01:48.419  + git -C spdk log --oneline -n5
00:01:48.420  e01cb43b8 mk/spdk.common.mk sed the minor version
00:01:48.420  d58eef2a2 nvme/rdma: Fix reinserting qpair in connecting list after stale state
00:01:48.420  2104eacf0 test/check_so_deps: use VERSION to look for prior tags
00:01:48.420  66289a6db build: use VERSION file for storing version
00:01:48.420  626389917 nvme/rdma: Don't limit max_sge if UMR is used
00:01:48.438  [Pipeline] withCredentials
00:01:48.448   > git --version # timeout=10
00:01:48.461   > git --version # 'git version 2.39.2'
00:01:48.477  Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:01:48.479  [Pipeline] {
00:01:48.489  [Pipeline] retry
00:01:48.491  [Pipeline] {
00:01:48.505  [Pipeline] sh
00:01:48.786  + git ls-remote http://dpdk.org/git/dpdk-stable v23.11
00:01:54.081  [Pipeline] }
00:01:54.098  [Pipeline] // retry
00:01:54.103  [Pipeline] }
00:01:54.118  [Pipeline] // withCredentials
00:01:54.127  [Pipeline] httpRequest
00:01:55.136  [Pipeline] echo
00:01:55.138  Sorcerer 10.211.164.20 is alive
00:01:55.148  [Pipeline] retry
00:01:55.149  [Pipeline] {
00:01:55.163  [Pipeline] httpRequest
00:01:55.168  HttpMethod: GET
00:01:55.169  URL: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:55.169  Sending request to url: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:55.180  Response Code: HTTP/1.1 200 OK
00:01:55.181  Success: Status code 200 is in the accepted range: 200,404
00:01:55.181  Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:02:10.080  [Pipeline] }
00:02:10.100  [Pipeline] // retry
00:02:10.108  [Pipeline] sh
00:02:10.390  + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:02:11.780  [Pipeline] sh
00:02:12.061  + git -C dpdk log --oneline -n5
00:02:12.061  eeb0605f11 version: 23.11.0
00:02:12.061  238778122a doc: update release notes for 23.11
00:02:12.061  46aa6b3cfc doc: fix description of RSS features
00:02:12.061  dd88f51a57 devtools: forbid DPDK API in cnxk base driver
00:02:12.061  7e421ae345 devtools: support skipping forbid rule check
00:02:12.078  [Pipeline] writeFile
00:02:12.093  [Pipeline] sh
00:02:12.375  + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:02:12.387  [Pipeline] sh
00:02:12.675  + cat autorun-spdk.conf
00:02:12.675  SPDK_RUN_FUNCTIONAL_TEST=1
00:02:12.675  SPDK_TEST_NVMF=1
00:02:12.675  SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:12.675  SPDK_TEST_VFIOUSER=1
00:02:12.675  SPDK_TEST_USDT=1
00:02:12.675  SPDK_RUN_UBSAN=1
00:02:12.675  SPDK_TEST_NVMF_MDNS=1
00:02:12.675  NET_TYPE=virt
00:02:12.675  SPDK_JSONRPC_GO_CLIENT=1
00:02:12.675  SPDK_TEST_NATIVE_DPDK=v23.11
00:02:12.675  SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:02:12.675  SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:12.735  RUN_NIGHTLY=1
00:02:12.737  [Pipeline] }
00:02:12.750  [Pipeline] // stage
00:02:12.762  [Pipeline] stage
00:02:12.764  [Pipeline] { (Run VM)
00:02:12.774  [Pipeline] sh
00:02:13.051  + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:02:13.051  + echo 'Start stage prepare_nvme.sh'
00:02:13.051  Start stage prepare_nvme.sh
00:02:13.051  + [[ -n 1 ]]
00:02:13.051  + disk_prefix=ex1
00:02:13.051  + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest ]]
00:02:13.051  + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf ]]
00:02:13.051  + source /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf
00:02:13.051  ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:13.051  ++ SPDK_TEST_NVMF=1
00:02:13.051  ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:13.051  ++ SPDK_TEST_VFIOUSER=1
00:02:13.051  ++ SPDK_TEST_USDT=1
00:02:13.051  ++ SPDK_RUN_UBSAN=1
00:02:13.051  ++ SPDK_TEST_NVMF_MDNS=1
00:02:13.051  ++ NET_TYPE=virt
00:02:13.051  ++ SPDK_JSONRPC_GO_CLIENT=1
00:02:13.051  ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:02:13.051  ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:02:13.051  ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:13.051  ++ RUN_NIGHTLY=1
00:02:13.051  + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest
00:02:13.051  + nvme_files=()
00:02:13.051  + declare -A nvme_files
00:02:13.051  + backend_dir=/var/lib/libvirt/images/backends
00:02:13.051  + nvme_files['nvme.img']=5G
00:02:13.051  + nvme_files['nvme-cmb.img']=5G
00:02:13.051  + nvme_files['nvme-multi0.img']=4G
00:02:13.051  + nvme_files['nvme-multi1.img']=4G
00:02:13.051  + nvme_files['nvme-multi2.img']=4G
00:02:13.051  + nvme_files['nvme-openstack.img']=8G
00:02:13.051  + nvme_files['nvme-zns.img']=5G
00:02:13.051  + ((  SPDK_TEST_NVME_PMR == 1  ))
00:02:13.051  + ((  SPDK_TEST_FTL == 1  ))
00:02:13.051  + ((  SPDK_TEST_NVME_FDP == 1  ))
00:02:13.051  + [[ ! -d /var/lib/libvirt/images/backends ]]
00:02:13.051  + for nvme in "${!nvme_files[@]}"
00:02:13.051  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G
00:02:13.051  Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:02:13.051  + for nvme in "${!nvme_files[@]}"
00:02:13.051  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G
00:02:13.051  Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:02:13.051  + for nvme in "${!nvme_files[@]}"
00:02:13.051  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G
00:02:13.051  Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:02:13.051  + for nvme in "${!nvme_files[@]}"
00:02:13.051  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G
00:02:13.051  Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:02:13.051  + for nvme in "${!nvme_files[@]}"
00:02:13.051  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G
00:02:13.051  Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:02:13.051  + for nvme in "${!nvme_files[@]}"
00:02:13.051  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G
00:02:13.051  Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:02:13.051  + for nvme in "${!nvme_files[@]}"
00:02:13.051  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G
00:02:13.310  Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:02:13.310  ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu
00:02:13.310  + echo 'End stage prepare_nvme.sh'
00:02:13.310  End stage prepare_nvme.sh
00:02:13.321  [Pipeline] sh
00:02:13.604  + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:02:13.604  Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -H -a -v -f fedora39
00:02:13.604  
00:02:13.604  DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant
00:02:13.604  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk
00:02:13.604  VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest
00:02:13.604  HELP=0
00:02:13.604  DRY_RUN=0
00:02:13.604  NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,
00:02:13.604  NVME_DISKS_TYPE=nvme,nvme,
00:02:13.604  NVME_AUTO_CREATE=0
00:02:13.604  NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,
00:02:13.604  NVME_CMB=,,
00:02:13.604  NVME_PMR=,,
00:02:13.604  NVME_ZNS=,,
00:02:13.604  NVME_MS=,,
00:02:13.604  NVME_FDP=,,
00:02:13.604  SPDK_VAGRANT_DISTRO=fedora39
00:02:13.604  SPDK_VAGRANT_VMCPU=10
00:02:13.604  SPDK_VAGRANT_VMRAM=12288
00:02:13.604  SPDK_VAGRANT_PROVIDER=libvirt
00:02:13.604  SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:02:13.604  SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:02:13.604  SPDK_OPENSTACK_NETWORK=0
00:02:13.604  VAGRANT_PACKAGE_BOX=0
00:02:13.604  VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:02:13.604  FORCE_DISTRO=true
00:02:13.604  VAGRANT_BOX_VERSION=
00:02:13.604  EXTRA_VAGRANTFILES=
00:02:13.604  NIC_MODEL=e1000
00:02:13.604  
00:02:13.604  mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt'
00:02:13.604  /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest
00:02:16.135  Bringing machine 'default' up with 'libvirt' provider...
00:02:17.072  ==> default: Creating image (snapshot of base box volume).
00:02:17.072  ==> default: Creating domain with the following settings...
00:02:17.072  ==> default:  -- Name:              fedora39-39-1.5-1721788873-2326_default_1734115608_3400dfa04866363bb1a9
00:02:17.072  ==> default:  -- Domain type:       kvm
00:02:17.072  ==> default:  -- Cpus:              10
00:02:17.072  ==> default:  -- Feature:           acpi
00:02:17.072  ==> default:  -- Feature:           apic
00:02:17.072  ==> default:  -- Feature:           pae
00:02:17.072  ==> default:  -- Memory:            12288M
00:02:17.072  ==> default:  -- Memory Backing:    hugepages: 
00:02:17.072  ==> default:  -- Management MAC:    
00:02:17.072  ==> default:  -- Loader:            
00:02:17.072  ==> default:  -- Nvram:             
00:02:17.072  ==> default:  -- Base box:          spdk/fedora39
00:02:17.072  ==> default:  -- Storage pool:      default
00:02:17.072  ==> default:  -- Image:             /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1734115608_3400dfa04866363bb1a9.img (20G)
00:02:17.072  ==> default:  -- Volume Cache:      default
00:02:17.072  ==> default:  -- Kernel:            
00:02:17.072  ==> default:  -- Initrd:            
00:02:17.072  ==> default:  -- Graphics Type:     vnc
00:02:17.072  ==> default:  -- Graphics Port:     -1
00:02:17.072  ==> default:  -- Graphics IP:       127.0.0.1
00:02:17.072  ==> default:  -- Graphics Password: Not defined
00:02:17.072  ==> default:  -- Video Type:        cirrus
00:02:17.072  ==> default:  -- Video VRAM:        9216
00:02:17.072  ==> default:  -- Sound Type:	
00:02:17.072  ==> default:  -- Keymap:            en-us
00:02:17.072  ==> default:  -- TPM Path:          
00:02:17.072  ==> default:  -- INPUT:             type=mouse, bus=ps2
00:02:17.072  ==> default:  -- Command line args: 
00:02:17.072  ==> default:     -> value=-device, 
00:02:17.072  ==> default:     -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 
00:02:17.072  ==> default:     -> value=-drive, 
00:02:17.072  ==> default:     -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0, 
00:02:17.072  ==> default:     -> value=-device, 
00:02:17.072  ==> default:     -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 
00:02:17.072  ==> default:     -> value=-device, 
00:02:17.072  ==> default:     -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 
00:02:17.072  ==> default:     -> value=-drive, 
00:02:17.072  ==> default:     -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-1-drive0, 
00:02:17.072  ==> default:     -> value=-device, 
00:02:17.072  ==> default:     -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 
00:02:17.072  ==> default:     -> value=-drive, 
00:02:17.072  ==> default:     -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-1-drive1, 
00:02:17.072  ==> default:     -> value=-device, 
00:02:17.072  ==> default:     -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 
00:02:17.072  ==> default:     -> value=-drive, 
00:02:17.072  ==> default:     -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-1-drive2, 
00:02:17.072  ==> default:     -> value=-device, 
00:02:17.072  ==> default:     -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 
00:02:17.331  ==> default: Creating shared folders metadata...
00:02:17.331  ==> default: Starting domain.
00:02:19.235  ==> default: Waiting for domain to get an IP address...
00:02:37.320  ==> default: Waiting for SSH to become available...
00:02:37.320  ==> default: Configuring and enabling network interfaces...
00:02:39.854      default: SSH address: 192.168.121.125:22
00:02:39.854      default: SSH username: vagrant
00:02:39.854      default: SSH auth method: private key
00:02:42.459  ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:02:49.028  ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk
00:02:55.589  ==> default: Mounting SSHFS shared folder...
00:02:56.524  ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:02:56.524  ==> default: Checking Mount..
00:02:57.902  ==> default: Folder Successfully Mounted!
00:02:57.902  ==> default: Running provisioner: file...
00:02:58.477      default: ~/.gitconfig => .gitconfig
00:02:59.055  
00:02:59.055    SUCCESS!
00:02:59.055  
00:02:59.055    cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:02:59.055    Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:59.055    Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:02:59.055  
00:02:59.064  [Pipeline] }
00:02:59.079  [Pipeline] // stage
00:02:59.088  [Pipeline] dir
00:02:59.088  Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt
00:02:59.090  [Pipeline] {
00:02:59.102  [Pipeline] catchError
00:02:59.105  [Pipeline] {
00:02:59.117  [Pipeline] sh
00:02:59.471  + vagrant ssh-config --host vagrant
00:02:59.471  + sed -ne /^Host/,$p
00:02:59.471  + tee ssh_conf
00:03:02.756  Host vagrant
00:03:02.756    HostName 192.168.121.125
00:03:02.756    User vagrant
00:03:02.756    Port 22
00:03:02.756    UserKnownHostsFile /dev/null
00:03:02.756    StrictHostKeyChecking no
00:03:02.756    PasswordAuthentication no
00:03:02.756    IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:03:02.756    IdentitiesOnly yes
00:03:02.756    LogLevel FATAL
00:03:02.756    ForwardAgent yes
00:03:02.756    ForwardX11 yes
00:03:02.756  
00:03:02.770  [Pipeline] withEnv
00:03:02.772  [Pipeline] {
00:03:02.785  [Pipeline] sh
00:03:03.064  + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:03:03.064  		source /etc/os-release
00:03:03.064  		[[ -e /image.version ]] && img=$(< /image.version)
00:03:03.064  		# Minimal, systemd-like check.
00:03:03.064  		if [[ -e /.dockerenv ]]; then
00:03:03.064  			# Clear garbage from the node's name:
00:03:03.064  			#  agt-er_autotest_547-896 -> autotest_547-896
00:03:03.064  			#  $HOSTNAME is the actual container id
00:03:03.064  			agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:03:03.064  			if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:03:03.064  				# We can assume this is a mount from a host where container is running,
00:03:03.064  				# so fetch its hostname to easily identify the target swarm worker.
00:03:03.064  				container="$(< /etc/hostname) ($agent)"
00:03:03.064  			else
00:03:03.064  				# Fallback
00:03:03.064  				container=$agent
00:03:03.064  			fi
00:03:03.064  		fi
00:03:03.064  		echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:03:03.064  
00:03:03.334  [Pipeline] }
00:03:03.349  [Pipeline] // withEnv
00:03:03.358  [Pipeline] setCustomBuildProperty
00:03:03.373  [Pipeline] stage
00:03:03.375  [Pipeline] { (Tests)
00:03:03.391  [Pipeline] sh
00:03:03.672  + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:03:03.943  [Pipeline] sh
00:03:04.221  + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:03:04.495  [Pipeline] timeout
00:03:04.495  Timeout set to expire in 1 hr 0 min
00:03:04.497  [Pipeline] {
00:03:04.511  [Pipeline] sh
00:03:04.790  + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:03:05.356  HEAD is now at e01cb43b8 mk/spdk.common.mk sed the minor version
00:03:05.367  [Pipeline] sh
00:03:05.647  + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:03:05.918  [Pipeline] sh
00:03:06.201  + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:03:06.474  [Pipeline] sh
00:03:06.754  + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo
00:03:07.013  ++ readlink -f spdk_repo
00:03:07.013  + DIR_ROOT=/home/vagrant/spdk_repo
00:03:07.013  + [[ -n /home/vagrant/spdk_repo ]]
00:03:07.013  + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:03:07.013  + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:03:07.013  + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:03:07.013  + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:03:07.013  + [[ -d /home/vagrant/spdk_repo/output ]]
00:03:07.013  + [[ nvmf-tcp-vg-autotest == pkgdep-* ]]
00:03:07.013  + cd /home/vagrant/spdk_repo
00:03:07.013  + source /etc/os-release
00:03:07.013  ++ NAME='Fedora Linux'
00:03:07.013  ++ VERSION='39 (Cloud Edition)'
00:03:07.013  ++ ID=fedora
00:03:07.013  ++ VERSION_ID=39
00:03:07.013  ++ VERSION_CODENAME=
00:03:07.013  ++ PLATFORM_ID=platform:f39
00:03:07.013  ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:03:07.013  ++ ANSI_COLOR='0;38;2;60;110;180'
00:03:07.013  ++ LOGO=fedora-logo-icon
00:03:07.013  ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:03:07.013  ++ HOME_URL=https://fedoraproject.org/
00:03:07.013  ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:03:07.013  ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:03:07.013  ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:03:07.013  ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:03:07.013  ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:03:07.013  ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:03:07.013  ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:03:07.013  ++ SUPPORT_END=2024-11-12
00:03:07.013  ++ VARIANT='Cloud Edition'
00:03:07.013  ++ VARIANT_ID=cloud
00:03:07.013  + uname -a
00:03:07.013  Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:03:07.013  + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:03:07.271  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:03:07.271  Hugepages
00:03:07.271  node     hugesize     free /  total
00:03:07.271  node0   1048576kB        0 /      0
00:03:07.530  node0      2048kB        0 /      0
00:03:07.530  
00:03:07.530  Type                      BDF             Vendor Device NUMA    Driver           Device     Block devices
00:03:07.530  virtio                    0000:00:03.0    1af4   1001   unknown virtio-pci       -          vda
00:03:07.530  NVMe                      0000:00:10.0    1b36   0010   unknown nvme             nvme0      nvme0n1
00:03:07.530  NVMe                      0000:00:11.0    1b36   0010   unknown nvme             nvme1      nvme1n1 nvme1n2 nvme1n3
00:03:07.530  + rm -f /tmp/spdk-ld-path
00:03:07.530  + source autorun-spdk.conf
00:03:07.530  ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:07.530  ++ SPDK_TEST_NVMF=1
00:03:07.530  ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:03:07.530  ++ SPDK_TEST_VFIOUSER=1
00:03:07.530  ++ SPDK_TEST_USDT=1
00:03:07.530  ++ SPDK_RUN_UBSAN=1
00:03:07.530  ++ SPDK_TEST_NVMF_MDNS=1
00:03:07.530  ++ NET_TYPE=virt
00:03:07.530  ++ SPDK_JSONRPC_GO_CLIENT=1
00:03:07.530  ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:03:07.530  ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:03:07.530  ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:03:07.530  ++ RUN_NIGHTLY=1
00:03:07.530  + ((  SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1  ))
00:03:07.530  + [[ -n '' ]]
00:03:07.530  + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:03:07.530  + for M in /var/spdk/build-*-manifest.txt
00:03:07.530  + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:03:07.530  + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:03:07.530  + for M in /var/spdk/build-*-manifest.txt
00:03:07.530  + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:03:07.530  + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:03:07.530  + for M in /var/spdk/build-*-manifest.txt
00:03:07.530  + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:03:07.530  + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:03:07.530  ++ uname
00:03:07.530  + [[ Linux == \L\i\n\u\x ]]
00:03:07.530  + sudo dmesg -T
00:03:07.530  + sudo dmesg --clear
00:03:07.530  + dmesg_pid=5988
00:03:07.530  + sudo dmesg -Tw
00:03:07.530  + [[ Fedora Linux == FreeBSD ]]
00:03:07.530  + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:07.530  + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:07.530  + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:03:07.530  + [[ -x /usr/src/fio-static/fio ]]
00:03:07.530  + export FIO_BIN=/usr/src/fio-static/fio
00:03:07.530  + FIO_BIN=/usr/src/fio-static/fio
00:03:07.530  + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:03:07.530  + [[ ! -v VFIO_QEMU_BIN ]]
00:03:07.530  + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:03:07.530  + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:03:07.530  + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:03:07.530  + [[ -e /usr/local/qemu/vanilla-latest ]]
00:03:07.530  + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:03:07.530  + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:03:07.530  + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:03:07.789    18:47:39  -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:03:07.789   18:47:39  -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:03:07.789    18:47:39  -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:07.789    18:47:39  -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:03:07.789    18:47:39  -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:03:07.789    18:47:39  -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_VFIOUSER=1
00:03:07.789    18:47:39  -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_USDT=1
00:03:07.789    18:47:39  -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
00:03:07.789    18:47:39  -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_NVMF_MDNS=1
00:03:07.789    18:47:39  -- spdk_repo/autorun-spdk.conf@8 -- $ NET_TYPE=virt
00:03:07.789    18:47:39  -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_JSONRPC_GO_CLIENT=1
00:03:07.789    18:47:39  -- spdk_repo/autorun-spdk.conf@10 -- $ SPDK_TEST_NATIVE_DPDK=v23.11
00:03:07.789    18:47:39  -- spdk_repo/autorun-spdk.conf@11 -- $ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:03:07.789    18:47:39  -- spdk_repo/autorun-spdk.conf@12 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:03:07.789    18:47:39  -- spdk_repo/autorun-spdk.conf@13 -- $ RUN_NIGHTLY=1
00:03:07.789   18:47:39  -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:03:07.789   18:47:39  -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:03:07.789     18:47:39  -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:03:07.789    18:47:39  -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:03:07.789     18:47:39  -- scripts/common.sh@15 -- $ shopt -s extglob
00:03:07.789     18:47:39  -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:03:07.789     18:47:39  -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:03:07.789     18:47:39  -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:03:07.789      18:47:39  -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:07.789      18:47:39  -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:07.789      18:47:39  -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:07.789      18:47:39  -- paths/export.sh@5 -- $ export PATH
00:03:07.789      18:47:39  -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:07.789    18:47:39  -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:03:07.789      18:47:39  -- common/autobuild_common.sh@493 -- $ date +%s
00:03:07.789     18:47:39  -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1734115659.XXXXXX
00:03:07.789    18:47:39  -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1734115659.vRIQDT
00:03:07.789    18:47:39  -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:03:07.789    18:47:39  -- common/autobuild_common.sh@499 -- $ '[' -n v23.11 ']'
00:03:07.789     18:47:39  -- common/autobuild_common.sh@500 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:03:07.789    18:47:39  -- common/autobuild_common.sh@500 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:03:07.789    18:47:39  -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:03:07.789    18:47:39  -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp  --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:03:07.789     18:47:39  -- common/autobuild_common.sh@509 -- $ get_config_params
00:03:07.789     18:47:39  -- common/autotest_common.sh@409 -- $ xtrace_disable
00:03:07.789     18:47:39  -- common/autotest_common.sh@10 -- $ set +x
00:03:07.789    18:47:39  -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang'
00:03:07.789    18:47:39  -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:03:07.789    18:47:39  -- pm/common@17 -- $ local monitor
00:03:07.789    18:47:39  -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:07.789    18:47:39  -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:07.789    18:47:39  -- pm/common@25 -- $ sleep 1
00:03:07.789     18:47:39  -- pm/common@21 -- $ date +%s
00:03:07.789     18:47:39  -- pm/common@21 -- $ date +%s
00:03:07.789    18:47:39  -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1734115659
00:03:07.789    18:47:39  -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1734115659
00:03:07.789  Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1734115659_collect-cpu-load.pm.log
00:03:07.789  Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1734115659_collect-vmstat.pm.log
00:03:08.723    18:47:40  -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:03:08.723   18:47:40  -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:03:08.723   18:47:40  -- spdk/autobuild.sh@12 -- $ umask 022
00:03:08.723   18:47:40  -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:03:08.724   18:47:40  -- spdk/autobuild.sh@16 -- $ date -u
00:03:08.724  Fri Dec 13 06:47:40 PM UTC 2024
00:03:08.724   18:47:40  -- spdk/autobuild.sh@17 -- $ git describe --tags
00:03:08.724  v25.01-rc1-2-ge01cb43b8
00:03:08.724   18:47:40  -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:03:08.724   18:47:40  -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:03:08.724   18:47:40  -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:03:08.724   18:47:40  -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:03:08.724   18:47:40  -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:03:08.724   18:47:40  -- common/autotest_common.sh@10 -- $ set +x
00:03:08.724  ************************************
00:03:08.724  START TEST ubsan
00:03:08.724  ************************************
00:03:08.724  using ubsan
00:03:08.724   18:47:40 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:03:08.724  
00:03:08.724  real	0m0.000s
00:03:08.724  user	0m0.000s
00:03:08.724  sys	0m0.000s
00:03:08.724   18:47:40 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:03:08.724   18:47:40 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:03:08.724  ************************************
00:03:08.724  END TEST ubsan
00:03:08.724  ************************************
00:03:08.983   18:47:40  -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']'
00:03:08.983   18:47:40  -- spdk/autobuild.sh@28 -- $ build_native_dpdk
00:03:08.983   18:47:40  -- common/autobuild_common.sh@449 -- $ run_test build_native_dpdk _build_native_dpdk
00:03:08.983   18:47:40  -- common/autotest_common.sh@1105 -- $ '[' 2 -le 1 ']'
00:03:08.983   18:47:40  -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:03:08.983   18:47:40  -- common/autotest_common.sh@10 -- $ set +x
00:03:08.983  ************************************
00:03:08.983  START TEST build_native_dpdk
00:03:08.983  ************************************
00:03:08.983   18:47:40 build_native_dpdk -- common/autotest_common.sh@1129 -- $ _build_native_dpdk
00:03:08.983   18:47:40 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir
00:03:08.983   18:47:40 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir
00:03:08.983   18:47:40 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version
00:03:08.983   18:47:40 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler
00:03:08.983   18:47:40 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods
00:03:08.983   18:47:40 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk
00:03:08.983   18:47:40 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc
00:03:08.983   18:47:40 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc
00:03:08.983   18:47:40 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc
00:03:08.983   18:47:40 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]]
00:03:08.983   18:47:40 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]]
00:03:08.983    18:47:40 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion
00:03:08.983   18:47:40 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13
00:03:08.983   18:47:40 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13
00:03:08.983   18:47:40 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build
00:03:08.983    18:47:40 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:03:08.983   18:47:40 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk
00:03:08.983   18:47:40 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]]
00:03:08.983   18:47:40 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk
00:03:08.983   18:47:40 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5
00:03:08.983  eeb0605f11 version: 23.11.0
00:03:08.983  238778122a doc: update release notes for 23.11
00:03:08.983  46aa6b3cfc doc: fix description of RSS features
00:03:08.983  dd88f51a57 devtools: forbid DPDK API in cnxk base driver
00:03:08.983  7e421ae345 devtools: support skipping forbid rule check
00:03:08.983   18:47:40 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon'
00:03:08.983   18:47:40 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags=
00:03:08.983   18:47:40 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0
00:03:08.983   18:47:40 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]]
00:03:08.983   18:47:40 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]]
00:03:08.983   18:47:40 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror'
00:03:08.983   18:47:40 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]]
00:03:08.983   18:47:40 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]]
00:03:08.983   18:47:40 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow'
00:03:08.983   18:47:40 build_native_dpdk -- common/autobuild_common.sh@102 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base" "power/acpi" "power/amd_pstate" "power/cppc" "power/intel_pstate" "power/intel_uncore" "power/kvm_vm")
00:03:08.983   18:47:40 build_native_dpdk -- common/autobuild_common.sh@103 -- $ local mlx5_libs_added=n
00:03:08.983   18:47:40 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]]
00:03:08.983   18:47:40 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]]
00:03:08.983   18:47:40 build_native_dpdk -- common/autobuild_common.sh@146 -- $ [[ 0 -eq 1 ]]
00:03:08.983   18:47:40 build_native_dpdk -- common/autobuild_common.sh@174 -- $ cd /home/vagrant/spdk_repo/dpdk
00:03:08.983    18:47:40 build_native_dpdk -- common/autobuild_common.sh@175 -- $ uname -s
00:03:08.983   18:47:40 build_native_dpdk -- common/autobuild_common.sh@175 -- $ '[' Linux = Linux ']'
00:03:08.983   18:47:40 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 23.11.0 21.11.0
00:03:08.983   18:47:40 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 21.11.0
00:03:08.983   18:47:40 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:03:08.983   18:47:40 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:03:08.983   18:47:40 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:03:08.983   18:47:40 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:03:08.983   18:47:40 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:03:08.983   18:47:40 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:03:08.983   18:47:40 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<'
00:03:08.983   18:47:40 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3
00:03:08.983   18:47:40 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:03:08.983   18:47:40 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:03:08.983   18:47:40 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:03:08.983   18:47:40 build_native_dpdk -- scripts/common.sh@345 -- $ : 1
00:03:08.983   18:47:40 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:03:08.983   18:47:40 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:08.983    18:47:40 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23
00:03:08.983    18:47:40 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23
00:03:08.983    18:47:40 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]]
00:03:08.983    18:47:40 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23
00:03:08.983   18:47:40 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23
00:03:08.983    18:47:40 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21
00:03:08.983    18:47:40 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21
00:03:08.983    18:47:40 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]]
00:03:08.983    18:47:40 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21
00:03:08.983   18:47:40 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21
00:03:08.983   18:47:40 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:03:08.983   18:47:40 build_native_dpdk -- scripts/common.sh@367 -- $ return 1
00:03:08.983   18:47:40 build_native_dpdk -- common/autobuild_common.sh@180 -- $ patch -p1
00:03:08.983  patching file config/rte_config.h
00:03:08.983  Hunk #1 succeeded at 60 (offset 1 line).
00:03:08.983   18:47:40 build_native_dpdk -- common/autobuild_common.sh@183 -- $ lt 23.11.0 24.07.0
00:03:08.983   18:47:40 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 24.07.0
00:03:08.983   18:47:40 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:03:08.983   18:47:40 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:03:08.983   18:47:40 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:03:08.983   18:47:40 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:03:08.983   18:47:40 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:03:08.983   18:47:40 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:03:08.983   18:47:40 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<'
00:03:08.983   18:47:40 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3
00:03:08.983   18:47:40 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:03:08.983   18:47:40 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:03:08.983   18:47:40 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:03:08.983   18:47:40 build_native_dpdk -- scripts/common.sh@345 -- $ : 1
00:03:08.983   18:47:40 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:03:08.983   18:47:40 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:08.983    18:47:40 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23
00:03:08.983    18:47:40 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23
00:03:08.983    18:47:40 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]]
00:03:08.983    18:47:40 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23
00:03:08.983   18:47:40 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23
00:03:08.983    18:47:40 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24
00:03:08.983    18:47:40 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
00:03:08.983    18:47:40 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:03:08.983    18:47:40 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
00:03:08.983   18:47:40 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24
00:03:08.983   18:47:40 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:03:08.983   18:47:40 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:03:08.983   18:47:40 build_native_dpdk -- scripts/common.sh@368 -- $ return 0
00:03:08.983   18:47:40 build_native_dpdk -- common/autobuild_common.sh@184 -- $ patch -p1
00:03:08.983  patching file lib/pcapng/rte_pcapng.c
00:03:08.983   18:47:40 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ge 23.11.0 24.07.0
00:03:08.983   18:47:40 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 23.11.0 '>=' 24.07.0
00:03:08.983   18:47:40 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:03:08.983   18:47:40 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:03:08.983   18:47:40 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:03:08.983   18:47:40 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:03:08.983   18:47:40 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:03:08.983   18:47:40 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:03:08.983   18:47:40 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>='
00:03:08.983   18:47:40 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3
00:03:08.983   18:47:40 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:03:08.983   18:47:40 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:03:08.984   18:47:40 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:03:08.984   18:47:40 build_native_dpdk -- scripts/common.sh@348 -- $ : 1
00:03:08.984   18:47:40 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:03:08.984   18:47:40 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:08.984    18:47:40 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23
00:03:08.984    18:47:40 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23
00:03:08.984    18:47:40 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]]
00:03:08.984    18:47:40 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23
00:03:08.984   18:47:40 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23
00:03:08.984    18:47:40 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24
00:03:08.984    18:47:40 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
00:03:08.984    18:47:40 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:03:08.984    18:47:40 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
00:03:08.984   18:47:40 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24
00:03:08.984   18:47:40 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:03:08.984   18:47:40 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:03:08.984   18:47:40 build_native_dpdk -- scripts/common.sh@368 -- $ return 1
00:03:08.984   18:47:40 build_native_dpdk -- common/autobuild_common.sh@190 -- $ dpdk_kmods=false
00:03:08.984    18:47:40 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s
00:03:08.984   18:47:40 build_native_dpdk -- common/autobuild_common.sh@191 -- $ '[' Linux = FreeBSD ']'
00:03:08.984    18:47:40 build_native_dpdk -- common/autobuild_common.sh@195 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base power/acpi power/amd_pstate power/cppc power/intel_pstate power/intel_uncore power/kvm_vm
00:03:08.984   18:47:40 build_native_dpdk -- common/autobuild_common.sh@195 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm,
00:03:14.253  The Meson build system
00:03:14.253  Version: 1.5.0
00:03:14.253  Source dir: /home/vagrant/spdk_repo/dpdk
00:03:14.253  Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp
00:03:14.253  Build type: native build
00:03:14.253  Program cat found: YES (/usr/bin/cat)
00:03:14.253  Project name: DPDK
00:03:14.253  Project version: 23.11.0
00:03:14.253  C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:14.253  C linker for the host machine: gcc ld.bfd 2.40-14
00:03:14.253  Host machine cpu family: x86_64
00:03:14.253  Host machine cpu: x86_64
00:03:14.253  Message: ## Building in Developer Mode ##
00:03:14.253  Program pkg-config found: YES (/usr/bin/pkg-config)
00:03:14.253  Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh)
00:03:14.253  Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh)
00:03:14.253  Program python3 found: YES (/usr/bin/python3)
00:03:14.253  Program cat found: YES (/usr/bin/cat)
00:03:14.253  config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead.
00:03:14.253  Compiler for C supports arguments -march=native: YES 
00:03:14.253  Checking for size of "void *" : 8 
00:03:14.253  Checking for size of "void *" : 8 (cached)
00:03:14.253  Library m found: YES
00:03:14.253  Library numa found: YES
00:03:14.253  Has header "numaif.h" : YES 
00:03:14.253  Library fdt found: NO
00:03:14.253  Library execinfo found: NO
00:03:14.253  Has header "execinfo.h" : YES 
00:03:14.253  Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:14.253  Run-time dependency libarchive found: NO (tried pkgconfig)
00:03:14.253  Run-time dependency libbsd found: NO (tried pkgconfig)
00:03:14.253  Run-time dependency jansson found: NO (tried pkgconfig)
00:03:14.253  Run-time dependency openssl found: YES 3.1.1
00:03:14.253  Run-time dependency libpcap found: YES 1.10.4
00:03:14.253  Has header "pcap.h" with dependency libpcap: YES 
00:03:14.253  Compiler for C supports arguments -Wcast-qual: YES 
00:03:14.253  Compiler for C supports arguments -Wdeprecated: YES 
00:03:14.253  Compiler for C supports arguments -Wformat: YES 
00:03:14.253  Compiler for C supports arguments -Wformat-nonliteral: NO 
00:03:14.253  Compiler for C supports arguments -Wformat-security: NO 
00:03:14.253  Compiler for C supports arguments -Wmissing-declarations: YES 
00:03:14.253  Compiler for C supports arguments -Wmissing-prototypes: YES 
00:03:14.253  Compiler for C supports arguments -Wnested-externs: YES 
00:03:14.253  Compiler for C supports arguments -Wold-style-definition: YES 
00:03:14.253  Compiler for C supports arguments -Wpointer-arith: YES 
00:03:14.253  Compiler for C supports arguments -Wsign-compare: YES 
00:03:14.253  Compiler for C supports arguments -Wstrict-prototypes: YES 
00:03:14.253  Compiler for C supports arguments -Wundef: YES 
00:03:14.253  Compiler for C supports arguments -Wwrite-strings: YES 
00:03:14.253  Compiler for C supports arguments -Wno-address-of-packed-member: YES 
00:03:14.253  Compiler for C supports arguments -Wno-packed-not-aligned: YES 
00:03:14.253  Compiler for C supports arguments -Wno-missing-field-initializers: YES 
00:03:14.253  Compiler for C supports arguments -Wno-zero-length-bounds: YES 
00:03:14.253  Program objdump found: YES (/usr/bin/objdump)
00:03:14.253  Compiler for C supports arguments -mavx512f: YES 
00:03:14.253  Checking if "AVX512 checking" compiles: YES 
00:03:14.253  Fetching value of define "__SSE4_2__" : 1 
00:03:14.253  Fetching value of define "__AES__" : 1 
00:03:14.253  Fetching value of define "__AVX__" : 1 
00:03:14.253  Fetching value of define "__AVX2__" : 1 
00:03:14.253  Fetching value of define "__AVX512BW__" : (undefined) 
00:03:14.253  Fetching value of define "__AVX512CD__" : (undefined) 
00:03:14.253  Fetching value of define "__AVX512DQ__" : (undefined) 
00:03:14.253  Fetching value of define "__AVX512F__" : (undefined) 
00:03:14.253  Fetching value of define "__AVX512VL__" : (undefined) 
00:03:14.253  Fetching value of define "__PCLMUL__" : 1 
00:03:14.253  Fetching value of define "__RDRND__" : 1 
00:03:14.253  Fetching value of define "__RDSEED__" : 1 
00:03:14.253  Fetching value of define "__VPCLMULQDQ__" : (undefined) 
00:03:14.253  Fetching value of define "__znver1__" : (undefined) 
00:03:14.253  Fetching value of define "__znver2__" : (undefined) 
00:03:14.253  Fetching value of define "__znver3__" : (undefined) 
00:03:14.253  Fetching value of define "__znver4__" : (undefined) 
00:03:14.253  Compiler for C supports arguments -Wno-format-truncation: YES 
00:03:14.253  Message: lib/log: Defining dependency "log"
00:03:14.253  Message: lib/kvargs: Defining dependency "kvargs"
00:03:14.253  Message: lib/telemetry: Defining dependency "telemetry"
00:03:14.253  Checking for function "getentropy" : NO 
00:03:14.253  Message: lib/eal: Defining dependency "eal"
00:03:14.253  Message: lib/ring: Defining dependency "ring"
00:03:14.253  Message: lib/rcu: Defining dependency "rcu"
00:03:14.253  Message: lib/mempool: Defining dependency "mempool"
00:03:14.253  Message: lib/mbuf: Defining dependency "mbuf"
00:03:14.253  Fetching value of define "__PCLMUL__" : 1 (cached)
00:03:14.253  Fetching value of define "__AVX512F__" : (undefined) (cached)
00:03:14.253  Compiler for C supports arguments -mpclmul: YES 
00:03:14.253  Compiler for C supports arguments -maes: YES 
00:03:14.253  Compiler for C supports arguments -mavx512f: YES (cached)
00:03:14.253  Compiler for C supports arguments -mavx512bw: YES 
00:03:14.253  Compiler for C supports arguments -mavx512dq: YES 
00:03:14.253  Compiler for C supports arguments -mavx512vl: YES 
00:03:14.253  Compiler for C supports arguments -mvpclmulqdq: YES 
00:03:14.253  Compiler for C supports arguments -mavx2: YES 
00:03:14.253  Compiler for C supports arguments -mavx: YES 
00:03:14.253  Message: lib/net: Defining dependency "net"
00:03:14.253  Message: lib/meter: Defining dependency "meter"
00:03:14.253  Message: lib/ethdev: Defining dependency "ethdev"
00:03:14.253  Message: lib/pci: Defining dependency "pci"
00:03:14.253  Message: lib/cmdline: Defining dependency "cmdline"
00:03:14.253  Message: lib/metrics: Defining dependency "metrics"
00:03:14.253  Message: lib/hash: Defining dependency "hash"
00:03:14.253  Message: lib/timer: Defining dependency "timer"
00:03:14.253  Fetching value of define "__AVX512F__" : (undefined) (cached)
00:03:14.253  Fetching value of define "__AVX512VL__" : (undefined) (cached)
00:03:14.253  Fetching value of define "__AVX512CD__" : (undefined) (cached)
00:03:14.253  Fetching value of define "__AVX512BW__" : (undefined) (cached)
00:03:14.253  Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 
00:03:14.253  Message: lib/acl: Defining dependency "acl"
00:03:14.253  Message: lib/bbdev: Defining dependency "bbdev"
00:03:14.253  Message: lib/bitratestats: Defining dependency "bitratestats"
00:03:14.253  Run-time dependency libelf found: YES 0.191
00:03:14.253  Message: lib/bpf: Defining dependency "bpf"
00:03:14.253  Message: lib/cfgfile: Defining dependency "cfgfile"
00:03:14.253  Message: lib/compressdev: Defining dependency "compressdev"
00:03:14.253  Message: lib/cryptodev: Defining dependency "cryptodev"
00:03:14.253  Message: lib/distributor: Defining dependency "distributor"
00:03:14.253  Message: lib/dmadev: Defining dependency "dmadev"
00:03:14.253  Message: lib/efd: Defining dependency "efd"
00:03:14.253  Message: lib/eventdev: Defining dependency "eventdev"
00:03:14.254  Message: lib/dispatcher: Defining dependency "dispatcher"
00:03:14.254  Message: lib/gpudev: Defining dependency "gpudev"
00:03:14.254  Message: lib/gro: Defining dependency "gro"
00:03:14.254  Message: lib/gso: Defining dependency "gso"
00:03:14.254  Message: lib/ip_frag: Defining dependency "ip_frag"
00:03:14.254  Message: lib/jobstats: Defining dependency "jobstats"
00:03:14.254  Message: lib/latencystats: Defining dependency "latencystats"
00:03:14.254  Message: lib/lpm: Defining dependency "lpm"
00:03:14.254  Fetching value of define "__AVX512F__" : (undefined) (cached)
00:03:14.254  Fetching value of define "__AVX512DQ__" : (undefined) (cached)
00:03:14.254  Fetching value of define "__AVX512IFMA__" : (undefined) 
00:03:14.254  Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 
00:03:14.254  Message: lib/member: Defining dependency "member"
00:03:14.254  Message: lib/pcapng: Defining dependency "pcapng"
00:03:14.254  Compiler for C supports arguments -Wno-cast-qual: YES 
00:03:14.254  Message: lib/power: Defining dependency "power"
00:03:14.254  Message: lib/rawdev: Defining dependency "rawdev"
00:03:14.254  Message: lib/regexdev: Defining dependency "regexdev"
00:03:14.254  Message: lib/mldev: Defining dependency "mldev"
00:03:14.254  Message: lib/rib: Defining dependency "rib"
00:03:14.254  Message: lib/reorder: Defining dependency "reorder"
00:03:14.254  Message: lib/sched: Defining dependency "sched"
00:03:14.254  Message: lib/security: Defining dependency "security"
00:03:14.254  Message: lib/stack: Defining dependency "stack"
00:03:14.254  Has header "linux/userfaultfd.h" : YES 
00:03:14.254  Has header "linux/vduse.h" : YES 
00:03:14.254  Message: lib/vhost: Defining dependency "vhost"
00:03:14.254  Message: lib/ipsec: Defining dependency "ipsec"
00:03:14.254  Message: lib/pdcp: Defining dependency "pdcp"
00:03:14.254  Fetching value of define "__AVX512F__" : (undefined) (cached)
00:03:14.254  Fetching value of define "__AVX512DQ__" : (undefined) (cached)
00:03:14.254  Compiler for C supports arguments -mavx512f -mavx512dq: YES 
00:03:14.254  Compiler for C supports arguments -mavx512bw: YES (cached)
00:03:14.254  Message: lib/fib: Defining dependency "fib"
00:03:14.254  Message: lib/port: Defining dependency "port"
00:03:14.254  Message: lib/pdump: Defining dependency "pdump"
00:03:14.254  Message: lib/table: Defining dependency "table"
00:03:14.254  Message: lib/pipeline: Defining dependency "pipeline"
00:03:14.254  Message: lib/graph: Defining dependency "graph"
00:03:14.254  Message: lib/node: Defining dependency "node"
00:03:14.254  Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:03:16.157  Message: drivers/bus/pci: Defining dependency "bus_pci"
00:03:16.157  Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:03:16.157  Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:03:16.157  Compiler for C supports arguments -Wno-sign-compare: YES 
00:03:16.157  Compiler for C supports arguments -Wno-unused-value: YES 
00:03:16.157  Compiler for C supports arguments -Wno-format: YES 
00:03:16.157  Compiler for C supports arguments -Wno-format-security: YES 
00:03:16.157  Compiler for C supports arguments -Wno-format-nonliteral: YES 
00:03:16.157  Compiler for C supports arguments -Wno-strict-aliasing: YES 
00:03:16.157  Compiler for C supports arguments -Wno-unused-but-set-variable: YES 
00:03:16.157  Compiler for C supports arguments -Wno-unused-parameter: YES 
00:03:16.157  Fetching value of define "__AVX512F__" : (undefined) (cached)
00:03:16.157  Compiler for C supports arguments -mavx512f: YES (cached)
00:03:16.157  Compiler for C supports arguments -mavx512bw: YES (cached)
00:03:16.157  Compiler for C supports arguments -march=skylake-avx512: YES 
00:03:16.157  Message: drivers/net/i40e: Defining dependency "net_i40e"
00:03:16.157  Has header "sys/epoll.h" : YES 
00:03:16.157  Program doxygen found: YES (/usr/local/bin/doxygen)
00:03:16.157  Configuring doxy-api-html.conf using configuration
00:03:16.157  Configuring doxy-api-man.conf using configuration
00:03:16.157  Program mandb found: YES (/usr/bin/mandb)
00:03:16.157  Program sphinx-build found: NO
00:03:16.157  Configuring rte_build_config.h using configuration
00:03:16.157  Message: 
00:03:16.157  =================
00:03:16.157  Applications Enabled
00:03:16.157  =================
00:03:16.157  
00:03:16.157  apps:
00:03:16.157  	dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 
00:03:16.157  	test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 
00:03:16.157  	test-pmd, test-regex, test-sad, test-security-perf, 
00:03:16.157  
00:03:16.157  Message: 
00:03:16.157  =================
00:03:16.157  Libraries Enabled
00:03:16.157  =================
00:03:16.157  
00:03:16.157  libs:
00:03:16.157  	log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 
00:03:16.157  	net, meter, ethdev, pci, cmdline, metrics, hash, timer, 
00:03:16.157  	acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 
00:03:16.157  	dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 
00:03:16.157  	jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 
00:03:16.157  	mldev, rib, reorder, sched, security, stack, vhost, ipsec, 
00:03:16.157  	pdcp, fib, port, pdump, table, pipeline, graph, node, 
00:03:16.157  	
00:03:16.157  
00:03:16.157  Message: 
00:03:16.157  ===============
00:03:16.157  Drivers Enabled
00:03:16.157  ===============
00:03:16.157  
00:03:16.157  common:
00:03:16.157  	
00:03:16.157  bus:
00:03:16.157  	pci, vdev, 
00:03:16.157  mempool:
00:03:16.157  	ring, 
00:03:16.157  dma:
00:03:16.157  	
00:03:16.157  net:
00:03:16.157  	i40e, 
00:03:16.157  raw:
00:03:16.157  	
00:03:16.157  crypto:
00:03:16.157  	
00:03:16.157  compress:
00:03:16.157  	
00:03:16.157  regex:
00:03:16.157  	
00:03:16.157  ml:
00:03:16.157  	
00:03:16.157  vdpa:
00:03:16.157  	
00:03:16.157  event:
00:03:16.157  	
00:03:16.157  baseband:
00:03:16.157  	
00:03:16.157  gpu:
00:03:16.157  	
00:03:16.157  
00:03:16.157  Message: 
00:03:16.157  =================
00:03:16.157  Content Skipped
00:03:16.157  =================
00:03:16.157  
00:03:16.157  apps:
00:03:16.157  	
00:03:16.157  libs:
00:03:16.157  	
00:03:16.157  drivers:
00:03:16.157  	common/cpt:	not in enabled drivers build config
00:03:16.157  	common/dpaax:	not in enabled drivers build config
00:03:16.157  	common/iavf:	not in enabled drivers build config
00:03:16.157  	common/idpf:	not in enabled drivers build config
00:03:16.157  	common/mvep:	not in enabled drivers build config
00:03:16.157  	common/octeontx:	not in enabled drivers build config
00:03:16.157  	bus/auxiliary:	not in enabled drivers build config
00:03:16.157  	bus/cdx:	not in enabled drivers build config
00:03:16.157  	bus/dpaa:	not in enabled drivers build config
00:03:16.157  	bus/fslmc:	not in enabled drivers build config
00:03:16.157  	bus/ifpga:	not in enabled drivers build config
00:03:16.157  	bus/platform:	not in enabled drivers build config
00:03:16.157  	bus/vmbus:	not in enabled drivers build config
00:03:16.157  	common/cnxk:	not in enabled drivers build config
00:03:16.157  	common/mlx5:	not in enabled drivers build config
00:03:16.157  	common/nfp:	not in enabled drivers build config
00:03:16.157  	common/qat:	not in enabled drivers build config
00:03:16.157  	common/sfc_efx:	not in enabled drivers build config
00:03:16.157  	mempool/bucket:	not in enabled drivers build config
00:03:16.157  	mempool/cnxk:	not in enabled drivers build config
00:03:16.157  	mempool/dpaa:	not in enabled drivers build config
00:03:16.157  	mempool/dpaa2:	not in enabled drivers build config
00:03:16.157  	mempool/octeontx:	not in enabled drivers build config
00:03:16.157  	mempool/stack:	not in enabled drivers build config
00:03:16.157  	dma/cnxk:	not in enabled drivers build config
00:03:16.157  	dma/dpaa:	not in enabled drivers build config
00:03:16.157  	dma/dpaa2:	not in enabled drivers build config
00:03:16.157  	dma/hisilicon:	not in enabled drivers build config
00:03:16.157  	dma/idxd:	not in enabled drivers build config
00:03:16.157  	dma/ioat:	not in enabled drivers build config
00:03:16.157  	dma/skeleton:	not in enabled drivers build config
00:03:16.157  	net/af_packet:	not in enabled drivers build config
00:03:16.157  	net/af_xdp:	not in enabled drivers build config
00:03:16.157  	net/ark:	not in enabled drivers build config
00:03:16.157  	net/atlantic:	not in enabled drivers build config
00:03:16.157  	net/avp:	not in enabled drivers build config
00:03:16.157  	net/axgbe:	not in enabled drivers build config
00:03:16.157  	net/bnx2x:	not in enabled drivers build config
00:03:16.157  	net/bnxt:	not in enabled drivers build config
00:03:16.158  	net/bonding:	not in enabled drivers build config
00:03:16.158  	net/cnxk:	not in enabled drivers build config
00:03:16.158  	net/cpfl:	not in enabled drivers build config
00:03:16.158  	net/cxgbe:	not in enabled drivers build config
00:03:16.158  	net/dpaa:	not in enabled drivers build config
00:03:16.158  	net/dpaa2:	not in enabled drivers build config
00:03:16.158  	net/e1000:	not in enabled drivers build config
00:03:16.158  	net/ena:	not in enabled drivers build config
00:03:16.158  	net/enetc:	not in enabled drivers build config
00:03:16.158  	net/enetfec:	not in enabled drivers build config
00:03:16.158  	net/enic:	not in enabled drivers build config
00:03:16.158  	net/failsafe:	not in enabled drivers build config
00:03:16.158  	net/fm10k:	not in enabled drivers build config
00:03:16.158  	net/gve:	not in enabled drivers build config
00:03:16.158  	net/hinic:	not in enabled drivers build config
00:03:16.158  	net/hns3:	not in enabled drivers build config
00:03:16.158  	net/iavf:	not in enabled drivers build config
00:03:16.158  	net/ice:	not in enabled drivers build config
00:03:16.158  	net/idpf:	not in enabled drivers build config
00:03:16.158  	net/igc:	not in enabled drivers build config
00:03:16.158  	net/ionic:	not in enabled drivers build config
00:03:16.158  	net/ipn3ke:	not in enabled drivers build config
00:03:16.158  	net/ixgbe:	not in enabled drivers build config
00:03:16.158  	net/mana:	not in enabled drivers build config
00:03:16.158  	net/memif:	not in enabled drivers build config
00:03:16.158  	net/mlx4:	not in enabled drivers build config
00:03:16.158  	net/mlx5:	not in enabled drivers build config
00:03:16.158  	net/mvneta:	not in enabled drivers build config
00:03:16.158  	net/mvpp2:	not in enabled drivers build config
00:03:16.158  	net/netvsc:	not in enabled drivers build config
00:03:16.158  	net/nfb:	not in enabled drivers build config
00:03:16.158  	net/nfp:	not in enabled drivers build config
00:03:16.158  	net/ngbe:	not in enabled drivers build config
00:03:16.158  	net/null:	not in enabled drivers build config
00:03:16.158  	net/octeontx:	not in enabled drivers build config
00:03:16.158  	net/octeon_ep:	not in enabled drivers build config
00:03:16.158  	net/pcap:	not in enabled drivers build config
00:03:16.158  	net/pfe:	not in enabled drivers build config
00:03:16.158  	net/qede:	not in enabled drivers build config
00:03:16.158  	net/ring:	not in enabled drivers build config
00:03:16.158  	net/sfc:	not in enabled drivers build config
00:03:16.158  	net/softnic:	not in enabled drivers build config
00:03:16.158  	net/tap:	not in enabled drivers build config
00:03:16.158  	net/thunderx:	not in enabled drivers build config
00:03:16.158  	net/txgbe:	not in enabled drivers build config
00:03:16.158  	net/vdev_netvsc:	not in enabled drivers build config
00:03:16.158  	net/vhost:	not in enabled drivers build config
00:03:16.158  	net/virtio:	not in enabled drivers build config
00:03:16.158  	net/vmxnet3:	not in enabled drivers build config
00:03:16.158  	raw/cnxk_bphy:	not in enabled drivers build config
00:03:16.158  	raw/cnxk_gpio:	not in enabled drivers build config
00:03:16.158  	raw/dpaa2_cmdif:	not in enabled drivers build config
00:03:16.158  	raw/ifpga:	not in enabled drivers build config
00:03:16.158  	raw/ntb:	not in enabled drivers build config
00:03:16.158  	raw/skeleton:	not in enabled drivers build config
00:03:16.158  	crypto/armv8:	not in enabled drivers build config
00:03:16.158  	crypto/bcmfs:	not in enabled drivers build config
00:03:16.158  	crypto/caam_jr:	not in enabled drivers build config
00:03:16.158  	crypto/ccp:	not in enabled drivers build config
00:03:16.158  	crypto/cnxk:	not in enabled drivers build config
00:03:16.158  	crypto/dpaa_sec:	not in enabled drivers build config
00:03:16.158  	crypto/dpaa2_sec:	not in enabled drivers build config
00:03:16.158  	crypto/ipsec_mb:	not in enabled drivers build config
00:03:16.158  	crypto/mlx5:	not in enabled drivers build config
00:03:16.158  	crypto/mvsam:	not in enabled drivers build config
00:03:16.158  	crypto/nitrox:	not in enabled drivers build config
00:03:16.158  	crypto/null:	not in enabled drivers build config
00:03:16.158  	crypto/octeontx:	not in enabled drivers build config
00:03:16.158  	crypto/openssl:	not in enabled drivers build config
00:03:16.158  	crypto/scheduler:	not in enabled drivers build config
00:03:16.158  	crypto/uadk:	not in enabled drivers build config
00:03:16.158  	crypto/virtio:	not in enabled drivers build config
00:03:16.158  	compress/isal:	not in enabled drivers build config
00:03:16.158  	compress/mlx5:	not in enabled drivers build config
00:03:16.158  	compress/octeontx:	not in enabled drivers build config
00:03:16.158  	compress/zlib:	not in enabled drivers build config
00:03:16.158  	regex/mlx5:	not in enabled drivers build config
00:03:16.158  	regex/cn9k:	not in enabled drivers build config
00:03:16.158  	ml/cnxk:	not in enabled drivers build config
00:03:16.158  	vdpa/ifc:	not in enabled drivers build config
00:03:16.158  	vdpa/mlx5:	not in enabled drivers build config
00:03:16.158  	vdpa/nfp:	not in enabled drivers build config
00:03:16.158  	vdpa/sfc:	not in enabled drivers build config
00:03:16.158  	event/cnxk:	not in enabled drivers build config
00:03:16.158  	event/dlb2:	not in enabled drivers build config
00:03:16.158  	event/dpaa:	not in enabled drivers build config
00:03:16.158  	event/dpaa2:	not in enabled drivers build config
00:03:16.158  	event/dsw:	not in enabled drivers build config
00:03:16.158  	event/opdl:	not in enabled drivers build config
00:03:16.158  	event/skeleton:	not in enabled drivers build config
00:03:16.158  	event/sw:	not in enabled drivers build config
00:03:16.158  	event/octeontx:	not in enabled drivers build config
00:03:16.158  	baseband/acc:	not in enabled drivers build config
00:03:16.158  	baseband/fpga_5gnr_fec:	not in enabled drivers build config
00:03:16.158  	baseband/fpga_lte_fec:	not in enabled drivers build config
00:03:16.158  	baseband/la12xx:	not in enabled drivers build config
00:03:16.158  	baseband/null:	not in enabled drivers build config
00:03:16.158  	baseband/turbo_sw:	not in enabled drivers build config
00:03:16.158  	gpu/cuda:	not in enabled drivers build config
00:03:16.158  	
00:03:16.158  
00:03:16.158  Build targets in project: 220
00:03:16.158  
00:03:16.158  DPDK 23.11.0
00:03:16.158  
00:03:16.158    User defined options
00:03:16.158      libdir        : lib
00:03:16.158      prefix        : /home/vagrant/spdk_repo/dpdk/build
00:03:16.158      c_args        : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:03:16.158      c_link_args   : 
00:03:16.158      enable_docs   : false
00:03:16.158      enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm,
00:03:16.158      enable_kmods  : false
00:03:16.158      machine       : native
00:03:16.158      tests         : false
00:03:16.158  
00:03:16.158  Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:16.158  WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
00:03:16.158   18:47:47 build_native_dpdk -- common/autobuild_common.sh@199 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10
00:03:16.158  ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp'
00:03:16.158  [1/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:03:16.158  [2/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:03:16.158  [3/710] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:03:16.158  [4/710] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:03:16.158  [5/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:03:16.158  [6/710] Linking static target lib/librte_kvargs.a
00:03:16.158  [7/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:03:16.158  [8/710] Compiling C object lib/librte_log.a.p/log_log.c.o
00:03:16.158  [9/710] Linking static target lib/librte_log.a
00:03:16.417  [10/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:03:16.417  [11/710] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:03:16.675  [12/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:03:16.675  [13/710] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:03:16.675  [14/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:03:16.675  [15/710] Linking target lib/librte_log.so.24.0
00:03:16.675  [16/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:03:16.934  [17/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:03:16.934  [18/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:03:16.934  [19/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:03:16.934  [20/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:03:16.934  [21/710] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols
00:03:17.192  [22/710] Linking target lib/librte_kvargs.so.24.0
00:03:17.192  [23/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:03:17.192  [24/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:03:17.193  [25/710] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols
00:03:17.193  [26/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:03:17.451  [27/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:03:17.451  [28/710] Linking static target lib/librte_telemetry.a
00:03:17.451  [29/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:03:17.451  [30/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:03:17.451  [31/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:03:17.710  [32/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:03:17.710  [33/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:03:17.710  [34/710] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:03:17.710  [35/710] Linking target lib/librte_telemetry.so.24.0
00:03:17.710  [36/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:03:17.710  [37/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:03:17.969  [38/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:03:17.969  [39/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:03:17.969  [40/710] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols
00:03:17.969  [41/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:03:17.969  [42/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:03:17.969  [43/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:03:17.969  [44/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:03:18.228  [45/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:03:18.228  [46/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:03:18.228  [47/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:03:18.487  [48/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:03:18.487  [49/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:03:18.487  [50/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:03:18.487  [51/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:03:18.487  [52/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:03:18.487  [53/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:03:18.746  [54/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:03:18.746  [55/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:03:18.746  [56/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:03:19.004  [57/710] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:03:19.004  [58/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:03:19.004  [59/710] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:03:19.004  [60/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:03:19.004  [61/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:03:19.004  [62/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:03:19.004  [63/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:03:19.263  [64/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:03:19.263  [65/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:03:19.263  [66/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:03:19.263  [67/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:03:19.263  [68/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:03:19.522  [69/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:03:19.522  [70/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:03:19.781  [71/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:03:19.781  [72/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:03:19.781  [73/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:03:19.781  [74/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:03:19.781  [75/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:03:19.781  [76/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:03:19.781  [77/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:03:19.781  [78/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:03:20.039  [79/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:03:20.039  [80/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:03:20.298  [81/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:03:20.298  [82/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:03:20.298  [83/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:03:20.298  [84/710] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:03:20.298  [85/710] Linking static target lib/librte_ring.a
00:03:20.556  [86/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:03:20.556  [87/710] Linking static target lib/librte_eal.a
00:03:20.556  [88/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:03:20.556  [89/710] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:03:20.556  [90/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:03:20.815  [91/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:03:20.815  [92/710] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:03:20.815  [93/710] Linking static target lib/librte_mempool.a
00:03:20.815  [94/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:03:20.815  [95/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:03:21.073  [96/710] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:03:21.073  [97/710] Linking static target lib/librte_rcu.a
00:03:21.073  [98/710] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:03:21.073  [99/710] Linking static target lib/net/libnet_crc_avx512_lib.a
00:03:21.330  [100/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:03:21.330  [101/710] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:03:21.330  [102/710] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:03:21.330  [103/710] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:03:21.588  [104/710] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:03:21.588  [105/710] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:03:21.588  [106/710] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:03:21.588  [107/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:03:21.588  [108/710] Linking static target lib/librte_mbuf.a
00:03:21.847  [109/710] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:03:21.847  [110/710] Linking static target lib/librte_net.a
00:03:22.106  [111/710] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:03:22.106  [112/710] Linking static target lib/librte_meter.a
00:03:22.106  [113/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:03:22.106  [114/710] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:03:22.106  [115/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:03:22.106  [116/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:03:22.106  [117/710] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:03:22.106  [118/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:03:22.364  [119/710] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:03:22.934  [120/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:03:22.934  [121/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:03:22.934  [122/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:03:23.206  [123/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:03:23.206  [124/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:03:23.206  [125/710] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:03:23.206  [126/710] Linking static target lib/librte_pci.a
00:03:23.206  [127/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:03:23.206  [128/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:03:23.477  [129/710] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:03:23.477  [130/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:03:23.477  [131/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:03:23.477  [132/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:03:23.736  [133/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:03:23.736  [134/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:03:23.736  [135/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:03:23.736  [136/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:03:23.736  [137/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:03:23.736  [138/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:03:23.736  [139/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:03:23.736  [140/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:03:23.995  [141/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:03:23.995  [142/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:03:23.995  [143/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:03:23.995  [144/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:03:23.995  [145/710] Linking static target lib/librte_cmdline.a
00:03:24.253  [146/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o
00:03:24.253  [147/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:03:24.253  [148/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o
00:03:24.512  [149/710] Linking static target lib/librte_metrics.a
00:03:24.512  [150/710] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:03:24.771  [151/710] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output)
00:03:25.030  [152/710] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:03:25.030  [153/710] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:03:25.030  [154/710] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:03:25.030  [155/710] Linking static target lib/librte_timer.a
00:03:25.289  [156/710] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:03:25.547  [157/710] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o
00:03:25.547  [158/710] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o
00:03:25.806  [159/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o
00:03:25.806  [160/710] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o
00:03:26.373  [161/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:03:26.373  [162/710] Linking static target lib/librte_ethdev.a
00:03:26.373  [163/710] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o
00:03:26.373  [164/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o
00:03:26.373  [165/710] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o
00:03:26.373  [166/710] Linking static target lib/librte_bitratestats.a
00:03:26.631  [167/710] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:03:26.631  [168/710] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o
00:03:26.631  [169/710] Linking static target lib/librte_bbdev.a
00:03:26.631  [170/710] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:03:26.631  [171/710] Linking static target lib/librte_hash.a
00:03:26.631  [172/710] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output)
00:03:26.631  [173/710] Linking target lib/librte_eal.so.24.0
00:03:26.890  [174/710] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols
00:03:26.890  [175/710] Linking target lib/librte_ring.so.24.0
00:03:26.890  [176/710] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols
00:03:26.890  [177/710] Linking target lib/librte_rcu.so.24.0
00:03:27.148  [178/710] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o
00:03:27.148  [179/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o
00:03:27.148  [180/710] Linking target lib/librte_mempool.so.24.0
00:03:27.148  [181/710] Linking target lib/librte_meter.so.24.0
00:03:27.148  [182/710] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols
00:03:27.148  [183/710] Linking target lib/librte_pci.so.24.0
00:03:27.148  [184/710] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:03:27.148  [185/710] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols
00:03:27.148  [186/710] Linking target lib/librte_timer.so.24.0
00:03:27.148  [187/710] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:27.148  [188/710] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols
00:03:27.148  [189/710] Linking target lib/librte_mbuf.so.24.0
00:03:27.148  [190/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o
00:03:27.148  [191/710] Linking static target lib/acl/libavx2_tmp.a
00:03:27.406  [192/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o
00:03:27.406  [193/710] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o
00:03:27.406  [194/710] Linking static target lib/acl/libavx512_tmp.a
00:03:27.406  [195/710] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols
00:03:27.406  [196/710] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols
00:03:27.406  [197/710] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols
00:03:27.406  [198/710] Linking target lib/librte_net.so.24.0
00:03:27.665  [199/710] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols
00:03:27.665  [200/710] Linking target lib/librte_cmdline.so.24.0
00:03:27.665  [201/710] Linking target lib/librte_hash.so.24.0
00:03:27.665  [202/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o
00:03:27.665  [203/710] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o
00:03:27.665  [204/710] Linking static target lib/librte_acl.a
00:03:27.665  [205/710] Linking target lib/librte_bbdev.so.24.0
00:03:27.665  [206/710] Linking static target lib/librte_cfgfile.a
00:03:27.665  [207/710] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols
00:03:27.924  [208/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o
00:03:27.924  [209/710] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output)
00:03:27.924  [210/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o
00:03:27.924  [211/710] Linking target lib/librte_acl.so.24.0
00:03:27.924  [212/710] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output)
00:03:28.182  [213/710] Linking target lib/librte_cfgfile.so.24.0
00:03:28.182  [214/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o
00:03:28.182  [215/710] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols
00:03:28.182  [216/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o
00:03:28.440  [217/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o
00:03:28.440  [218/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:03:28.698  [219/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:03:28.698  [220/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o
00:03:28.698  [221/710] Linking static target lib/librte_bpf.a
00:03:28.698  [222/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:03:28.698  [223/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:03:28.698  [224/710] Linking static target lib/librte_compressdev.a
00:03:28.956  [225/710] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output)
00:03:28.956  [226/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:03:28.956  [227/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o
00:03:29.214  [228/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o
00:03:29.214  [229/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o
00:03:29.214  [230/710] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:29.214  [231/710] Linking static target lib/librte_distributor.a
00:03:29.214  [232/710] Linking target lib/librte_compressdev.so.24.0
00:03:29.214  [233/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:03:29.473  [234/710] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output)
00:03:29.473  [235/710] Linking target lib/librte_distributor.so.24.0
00:03:29.731  [236/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o
00:03:29.731  [237/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:03:29.731  [238/710] Linking static target lib/librte_dmadev.a
00:03:29.989  [239/710] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:29.989  [240/710] Linking target lib/librte_dmadev.so.24.0
00:03:29.989  [241/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o
00:03:30.247  [242/710] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols
00:03:30.247  [243/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o
00:03:30.505  [244/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o
00:03:30.505  [245/710] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o
00:03:30.505  [246/710] Linking static target lib/librte_efd.a
00:03:30.505  [247/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:03:30.505  [248/710] Linking static target lib/librte_cryptodev.a
00:03:30.763  [249/710] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output)
00:03:30.763  [250/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o
00:03:30.763  [251/710] Linking target lib/librte_efd.so.24.0
00:03:31.021  [252/710] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:31.021  [253/710] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o
00:03:31.021  [254/710] Linking static target lib/librte_dispatcher.a
00:03:31.021  [255/710] Linking target lib/librte_ethdev.so.24.0
00:03:31.280  [256/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o
00:03:31.280  [257/710] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols
00:03:31.280  [258/710] Linking target lib/librte_metrics.so.24.0
00:03:31.280  [259/710] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o
00:03:31.538  [260/710] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols
00:03:31.538  [261/710] Linking target lib/librte_bpf.so.24.0
00:03:31.538  [262/710] Linking target lib/librte_bitratestats.so.24.0
00:03:31.538  [263/710] Linking static target lib/librte_gpudev.a
00:03:31.538  [264/710] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output)
00:03:31.538  [265/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o
00:03:31.538  [266/710] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o
00:03:31.538  [267/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o
00:03:31.538  [268/710] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols
00:03:31.796  [269/710] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:31.796  [270/710] Linking target lib/librte_cryptodev.so.24.0
00:03:32.055  [271/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o
00:03:32.055  [272/710] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols
00:03:32.055  [273/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o
00:03:32.313  [274/710] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o
00:03:32.313  [275/710] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:32.313  [276/710] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o
00:03:32.313  [277/710] Linking target lib/librte_gpudev.so.24.0
00:03:32.313  [278/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o
00:03:32.313  [279/710] Linking static target lib/librte_eventdev.a
00:03:32.313  [280/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o
00:03:32.313  [281/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o
00:03:32.313  [282/710] Linking static target lib/librte_gro.a
00:03:32.571  [283/710] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o
00:03:32.571  [284/710] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o
00:03:32.571  [285/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o
00:03:32.571  [286/710] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output)
00:03:32.571  [287/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o
00:03:32.571  [288/710] Linking target lib/librte_gro.so.24.0
00:03:32.830  [289/710] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o
00:03:32.830  [290/710] Linking static target lib/librte_gso.a
00:03:33.088  [291/710] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output)
00:03:33.088  [292/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o
00:03:33.088  [293/710] Linking target lib/librte_gso.so.24.0
00:03:33.088  [294/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o
00:03:33.088  [295/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o
00:03:33.352  [296/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o
00:03:33.352  [297/710] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o
00:03:33.352  [298/710] Linking static target lib/librte_jobstats.a
00:03:33.352  [299/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o
00:03:33.352  [300/710] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o
00:03:33.352  [301/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o
00:03:33.352  [302/710] Linking static target lib/librte_latencystats.a
00:03:33.352  [303/710] Linking static target lib/librte_ip_frag.a
00:03:33.666  [304/710] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output)
00:03:33.666  [305/710] Linking target lib/librte_jobstats.so.24.0
00:03:33.666  [306/710] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output)
00:03:33.666  [307/710] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output)
00:03:33.666  [308/710] Linking target lib/librte_latencystats.so.24.0
00:03:33.666  [309/710] Linking target lib/librte_ip_frag.so.24.0
00:03:33.924  [310/710] Compiling C object lib/librte_member.a.p/member_rte_member.c.o
00:03:33.924  [311/710] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o
00:03:33.924  [312/710] Linking static target lib/member/libsketch_avx512_tmp.a
00:03:33.924  [313/710] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:03:33.924  [314/710] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols
00:03:33.924  [315/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o
00:03:33.924  [316/710] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:03:34.183  [317/710] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:03:34.183  [318/710] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:34.442  [319/710] Linking target lib/librte_eventdev.so.24.0
00:03:34.442  [320/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o
00:03:34.442  [321/710] Linking static target lib/librte_lpm.a
00:03:34.442  [322/710] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols
00:03:34.442  [323/710] Linking target lib/librte_dispatcher.so.24.0
00:03:34.442  [324/710] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o
00:03:34.442  [325/710] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:03:34.700  [326/710] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:03:34.700  [327/710] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:03:34.700  [328/710] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o
00:03:34.700  [329/710] Linking static target lib/librte_pcapng.a
00:03:34.700  [330/710] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output)
00:03:34.700  [331/710] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o
00:03:34.700  [332/710] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:03:34.700  [333/710] Linking target lib/librte_lpm.so.24.0
00:03:34.959  [334/710] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols
00:03:34.959  [335/710] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output)
00:03:34.959  [336/710] Linking target lib/librte_pcapng.so.24.0
00:03:35.218  [337/710] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols
00:03:35.218  [338/710] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:03:35.218  [339/710] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:03:35.477  [340/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o
00:03:35.477  [341/710] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:03:35.477  [342/710] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:03:35.477  [343/710] Linking static target lib/librte_power.a
00:03:35.477  [344/710] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o
00:03:35.477  [345/710] Linking static target lib/librte_regexdev.a
00:03:35.477  [346/710] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o
00:03:35.477  [347/710] Linking static target lib/librte_rawdev.a
00:03:35.735  [348/710] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o
00:03:35.735  [349/710] Linking static target lib/librte_member.a
00:03:35.735  [350/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o
00:03:35.735  [351/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o
00:03:35.735  [352/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o
00:03:35.994  [353/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o
00:03:35.994  [354/710] Linking static target lib/librte_mldev.a
00:03:35.994  [355/710] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output)
00:03:35.994  [356/710] Linking target lib/librte_member.so.24.0
00:03:35.994  [357/710] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:35.994  [358/710] Linking target lib/librte_rawdev.so.24.0
00:03:36.252  [359/710] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:03:36.252  [360/710] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o
00:03:36.252  [361/710] Linking target lib/librte_power.so.24.0
00:03:36.252  [362/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o
00:03:36.252  [363/710] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:36.252  [364/710] Linking target lib/librte_regexdev.so.24.0
00:03:36.252  [365/710] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o
00:03:36.511  [366/710] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:03:36.511  [367/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o
00:03:36.511  [368/710] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:03:36.511  [369/710] Linking static target lib/librte_reorder.a
00:03:36.511  [370/710] Linking static target lib/librte_rib.a
00:03:36.511  [371/710] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o
00:03:36.770  [372/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o
00:03:36.770  [373/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o
00:03:36.770  [374/710] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:03:37.029  [375/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o
00:03:37.029  [376/710] Linking target lib/librte_reorder.so.24.0
00:03:37.029  [377/710] Linking static target lib/librte_stack.a
00:03:37.029  [378/710] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output)
00:03:37.029  [379/710] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:03:37.029  [380/710] Linking static target lib/librte_security.a
00:03:37.029  [381/710] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols
00:03:37.029  [382/710] Linking target lib/librte_rib.so.24.0
00:03:37.029  [383/710] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output)
00:03:37.029  [384/710] Linking target lib/librte_stack.so.24.0
00:03:37.029  [385/710] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:37.287  [386/710] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols
00:03:37.287  [387/710] Linking target lib/librte_mldev.so.24.0
00:03:37.287  [388/710] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:03:37.287  [389/710] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:03:37.546  [390/710] Linking target lib/librte_security.so.24.0
00:03:37.546  [391/710] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:03:37.546  [392/710] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols
00:03:37.546  [393/710] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:03:37.805  [394/710] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o
00:03:37.805  [395/710] Linking static target lib/librte_sched.a
00:03:38.063  [396/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:03:38.064  [397/710] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output)
00:03:38.064  [398/710] Linking target lib/librte_sched.so.24.0
00:03:38.064  [399/710] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:03:38.322  [400/710] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols
00:03:38.322  [401/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:03:38.322  [402/710] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o
00:03:38.581  [403/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o
00:03:38.839  [404/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:03:38.840  [405/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o
00:03:39.098  [406/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o
00:03:39.098  [407/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o
00:03:39.356  [408/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o
00:03:39.356  [409/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o
00:03:39.356  [410/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o
00:03:39.356  [411/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o
00:03:39.356  [412/710] Linking static target lib/librte_ipsec.a
00:03:39.615  [413/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o
00:03:39.615  [414/710] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output)
00:03:39.615  [415/710] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o
00:03:39.873  [416/710] Linking static target lib/fib/libdir24_8_avx512_tmp.a
00:03:39.873  [417/710] Linking target lib/librte_ipsec.so.24.0
00:03:39.873  [418/710] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o
00:03:39.873  [419/710] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols
00:03:39.873  [420/710] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o
00:03:39.873  [421/710] Linking static target lib/fib/libtrie_avx512_tmp.a
00:03:39.873  [422/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o
00:03:39.873  [423/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o
00:03:40.808  [424/710] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o
00:03:40.808  [425/710] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o
00:03:40.808  [426/710] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o
00:03:40.808  [427/710] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o
00:03:40.808  [428/710] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o
00:03:41.067  [429/710] Compiling C object lib/librte_fib.a.p/fib_trie.c.o
00:03:41.067  [430/710] Linking static target lib/librte_fib.a
00:03:41.067  [431/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o
00:03:41.067  [432/710] Linking static target lib/librte_pdcp.a
00:03:41.326  [433/710] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output)
00:03:41.326  [434/710] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output)
00:03:41.326  [435/710] Linking target lib/librte_fib.so.24.0
00:03:41.326  [436/710] Linking target lib/librte_pdcp.so.24.0
00:03:41.326  [437/710] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o
00:03:41.893  [438/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o
00:03:41.893  [439/710] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o
00:03:41.893  [440/710] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o
00:03:41.893  [441/710] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o
00:03:42.152  [442/710] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o
00:03:42.152  [443/710] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o
00:03:42.152  [444/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o
00:03:42.411  [445/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o
00:03:42.411  [446/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o
00:03:42.411  [447/710] Linking static target lib/librte_port.a
00:03:42.669  [448/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o
00:03:42.669  [449/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o
00:03:42.669  [450/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o
00:03:42.928  [451/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o
00:03:42.928  [452/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:03:42.928  [453/710] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output)
00:03:42.928  [454/710] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o
00:03:42.928  [455/710] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o
00:03:42.928  [456/710] Linking static target lib/librte_pdump.a
00:03:42.928  [457/710] Linking target lib/librte_port.so.24.0
00:03:43.187  [458/710] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o
00:03:43.187  [459/710] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols
00:03:43.187  [460/710] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output)
00:03:43.187  [461/710] Linking target lib/librte_pdump.so.24.0
00:03:43.444  [462/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o
00:03:43.702  [463/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o
00:03:43.702  [464/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o
00:03:43.702  [465/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o
00:03:43.975  [466/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o
00:03:43.975  [467/710] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o
00:03:43.975  [468/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o
00:03:44.248  [469/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o
00:03:44.248  [470/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o
00:03:44.248  [471/710] Linking static target lib/librte_table.a
00:03:44.506  [472/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o
00:03:44.506  [473/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o
00:03:44.765  [474/710] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output)
00:03:45.024  [475/710] Linking target lib/librte_table.so.24.0
00:03:45.024  [476/710] Compiling C object lib/librte_graph.a.p/graph_node.c.o
00:03:45.024  [477/710] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols
00:03:45.024  [478/710] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o
00:03:45.283  [479/710] Compiling C object lib/librte_graph.a.p/graph_graph.c.o
00:03:45.283  [480/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o
00:03:45.542  [481/710] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o
00:03:45.800  [482/710] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o
00:03:45.800  [483/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o
00:03:45.800  [484/710] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o
00:03:46.059  [485/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o
00:03:46.059  [486/710] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o
00:03:46.626  [487/710] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o
00:03:46.626  [488/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o
00:03:46.626  [489/710] Linking static target lib/librte_graph.a
00:03:46.626  [490/710] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o
00:03:46.626  [491/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o
00:03:46.626  [492/710] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o
00:03:46.885  [493/710] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o
00:03:47.144  [494/710] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output)
00:03:47.144  [495/710] Linking target lib/librte_graph.so.24.0
00:03:47.144  [496/710] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols
00:03:47.402  [497/710] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o
00:03:47.402  [498/710] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o
00:03:47.402  [499/710] Compiling C object lib/librte_node.a.p/node_null.c.o
00:03:47.661  [500/710] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o
00:03:47.920  [501/710] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o
00:03:47.920  [502/710] Compiling C object lib/librte_node.a.p/node_log.c.o
00:03:47.920  [503/710] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o
00:03:47.920  [504/710] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o
00:03:47.920  [505/710] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o
00:03:47.920  [506/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:03:48.178  [507/710] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o
00:03:48.437  [508/710] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o
00:03:48.437  [509/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:03:48.696  [510/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:03:48.696  [511/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:03:48.696  [512/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:03:48.696  [513/710] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o
00:03:48.696  [514/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:03:48.696  [515/710] Linking static target lib/librte_node.a
00:03:48.955  [516/710] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output)
00:03:48.955  [517/710] Linking target lib/librte_node.so.24.0
00:03:49.213  [518/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:03:49.213  [519/710] Linking static target drivers/libtmp_rte_bus_pci.a
00:03:49.213  [520/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:03:49.213  [521/710] Linking static target drivers/libtmp_rte_bus_vdev.a
00:03:49.213  [522/710] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:03:49.214  [523/710] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:03:49.472  [524/710] Linking static target drivers/librte_bus_pci.a
00:03:49.472  [525/710] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:03:49.472  [526/710] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:03:49.472  [527/710] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:03:49.472  [528/710] Linking static target drivers/librte_bus_vdev.a
00:03:49.731  [529/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o
00:03:49.731  [530/710] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:03:49.731  [531/710] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:49.731  [532/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o
00:03:49.731  [533/710] Linking target drivers/librte_bus_vdev.so.24.0
00:03:49.731  [534/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o
00:03:49.731  [535/710] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols
00:03:49.731  [536/710] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:03:49.731  [537/710] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:03:49.990  [538/710] Linking static target drivers/libtmp_rte_mempool_ring.a
00:03:49.990  [539/710] Linking target drivers/librte_bus_pci.so.24.0
00:03:49.990  [540/710] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols
00:03:49.990  [541/710] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:03:49.990  [542/710] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:03:49.990  [543/710] Linking static target drivers/librte_mempool_ring.a
00:03:49.990  [544/710] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:03:49.990  [545/710] Linking target drivers/librte_mempool_ring.so.24.0
00:03:50.249  [546/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o
00:03:50.507  [547/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o
00:03:51.074  [548/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o
00:03:51.074  [549/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o
00:03:51.074  [550/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o
00:03:51.075  [551/710] Linking static target drivers/net/i40e/base/libi40e_base.a
00:03:52.010  [552/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o
00:03:52.010  [553/710] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o
00:03:52.010  [554/710] Linking static target drivers/net/i40e/libi40e_avx512_lib.a
00:03:52.010  [555/710] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o
00:03:52.010  [556/710] Linking static target drivers/net/i40e/libi40e_avx2_lib.a
00:03:52.010  [557/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o
00:03:52.578  [558/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o
00:03:52.578  [559/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o
00:03:52.578  [560/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o
00:03:52.837  [561/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o
00:03:52.837  [562/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o
00:03:53.404  [563/710] Compiling C object app/dpdk-graph.p/graph_cli.c.o
00:03:53.404  [564/710] Compiling C object app/dpdk-graph.p/graph_conn.c.o
00:03:53.404  [565/710] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o
00:03:53.661  [566/710] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o
00:03:53.919  [567/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o
00:03:53.919  [568/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o
00:03:54.178  [569/710] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o
00:03:54.178  [570/710] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o
00:03:54.178  [571/710] Compiling C object app/dpdk-graph.p/graph_graph.c.o
00:03:54.178  [572/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:03:54.178  [573/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o
00:03:54.178  [574/710] Linking static target lib/librte_vhost.a
00:03:54.178  [575/710] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o
00:03:54.759  [576/710] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o
00:03:54.759  [577/710] Compiling C object app/dpdk-graph.p/graph_mempool.c.o
00:03:54.759  [578/710] Compiling C object app/dpdk-graph.p/graph_main.c.o
00:03:54.759  [579/710] Compiling C object app/dpdk-graph.p/graph_utils.c.o
00:03:55.030  [580/710] Compiling C object app/dpdk-graph.p/graph_neigh.c.o
00:03:55.030  [581/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o
00:03:55.030  [582/710] Linking static target drivers/libtmp_rte_net_i40e.a
00:03:55.288  [583/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o
00:03:55.288  [584/710] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:03:55.288  [585/710] Generating drivers/rte_net_i40e.pmd.c with a custom command
00:03:55.288  [586/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o
00:03:55.289  [587/710] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:03:55.289  [588/710] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:03:55.289  [589/710] Linking target lib/librte_vhost.so.24.0
00:03:55.547  [590/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o
00:03:55.547  [591/710] Linking static target drivers/librte_net_i40e.a
00:03:55.547  [592/710] Compiling C object app/dpdk-pdump.p/pdump_main.c.o
00:03:55.547  [593/710] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o
00:03:55.547  [594/710] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o
00:03:56.114  [595/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o
00:03:56.114  [596/710] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output)
00:03:56.114  [597/710] Linking target drivers/librte_net_i40e.so.24.0
00:03:56.114  [598/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o
00:03:56.114  [599/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o
00:03:56.682  [600/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o
00:03:56.682  [601/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o
00:03:56.682  [602/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o
00:03:56.682  [603/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o
00:03:56.682  [604/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o
00:03:56.940  [605/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o
00:03:57.199  [606/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o
00:03:57.199  [607/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o
00:03:57.458  [608/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o
00:03:57.458  [609/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o
00:03:57.716  [610/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o
00:03:57.716  [611/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o
00:03:57.716  [612/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o
00:03:57.716  [613/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o
00:03:57.975  [614/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o
00:03:57.975  [615/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o
00:03:57.975  [616/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o
00:03:57.975  [617/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o
00:03:58.233  [618/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o
00:03:58.491  [619/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o
00:03:58.491  [620/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o
00:03:58.750  [621/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o
00:03:58.750  [622/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o
00:03:58.750  [623/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o
00:03:59.008  [624/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o
00:03:59.008  [625/710] Linking static target lib/librte_pipeline.a
00:03:59.576  [626/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o
00:03:59.576  [627/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o
00:03:59.834  [628/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o
00:03:59.834  [629/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o
00:03:59.834  [630/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o
00:03:59.834  [631/710] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o
00:04:00.093  [632/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o
00:04:00.093  [633/710] Linking target app/dpdk-dumpcap
00:04:00.093  [634/710] Linking target app/dpdk-graph
00:04:00.093  [635/710] Linking target app/dpdk-pdump
00:04:00.093  [636/710] Linking target app/dpdk-proc-info
00:04:00.351  [637/710] Linking target app/dpdk-test-acl
00:04:00.351  [638/710] Linking target app/dpdk-test-cmdline
00:04:00.610  [639/710] Linking target app/dpdk-test-compress-perf
00:04:00.610  [640/710] Linking target app/dpdk-test-crypto-perf
00:04:00.610  [641/710] Linking target app/dpdk-test-dma-perf
00:04:00.610  [642/710] Linking target app/dpdk-test-fib
00:04:00.610  [643/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o
00:04:00.868  [644/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o
00:04:00.868  [645/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o
00:04:01.127  [646/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o
00:04:01.127  [647/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o
00:04:01.127  [648/710] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o
00:04:01.385  [649/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o
00:04:01.385  [650/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o
00:04:01.385  [651/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o
00:04:01.644  [652/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o
00:04:01.644  [653/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o
00:04:01.644  [654/710] Linking target app/dpdk-test-gpudev
00:04:01.644  [655/710] Linking target app/dpdk-test-eventdev
00:04:01.903  [656/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o
00:04:01.903  [657/710] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output)
00:04:01.903  [658/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o
00:04:01.903  [659/710] Linking target lib/librte_pipeline.so.24.0
00:04:02.162  [660/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o
00:04:02.162  [661/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o
00:04:02.162  [662/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o
00:04:02.421  [663/710] Linking target app/dpdk-test-flow-perf
00:04:02.421  [664/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o
00:04:02.421  [665/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o
00:04:02.421  [666/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o
00:04:02.421  [667/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o
00:04:02.679  [668/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o
00:04:02.679  [669/710] Linking target app/dpdk-test-bbdev
00:04:02.938  [670/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o
00:04:02.938  [671/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o
00:04:02.938  [672/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o
00:04:02.938  [673/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o
00:04:03.204  [674/710] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o
00:04:03.204  [675/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o
00:04:03.461  [676/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o
00:04:03.461  [677/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o
00:04:03.718  [678/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o
00:04:03.718  [679/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o
00:04:03.718  [680/710] Linking target app/dpdk-test-pipeline
00:04:03.977  [681/710] Linking target app/dpdk-test-mldev
00:04:03.977  [682/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o
00:04:03.977  [683/710] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o
00:04:04.544  [684/710] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o
00:04:04.544  [685/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o
00:04:04.544  [686/710] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o
00:04:04.544  [687/710] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o
00:04:04.803  [688/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o
00:04:05.072  [689/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o
00:04:05.072  [690/710] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o
00:04:05.349  [691/710] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o
00:04:05.349  [692/710] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o
00:04:05.349  [693/710] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o
00:04:05.608  [694/710] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o
00:04:05.866  [695/710] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o
00:04:05.866  [696/710] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o
00:04:06.125  [697/710] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o
00:04:06.125  [698/710] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o
00:04:06.383  [699/710] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o
00:04:06.383  [700/710] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o
00:04:06.383  [701/710] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o
00:04:06.383  [702/710] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o
00:04:06.642  [703/710] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o
00:04:06.642  [704/710] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o
00:04:06.642  [705/710] Linking target app/dpdk-test-regex
00:04:06.900  [706/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o
00:04:06.900  [707/710] Linking target app/dpdk-test-sad
00:04:07.158  [708/710] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o
00:04:07.158  [709/710] Linking target app/dpdk-testpmd
00:04:07.417  [710/710] Linking target app/dpdk-test-security-perf
00:04:07.675    18:48:39 build_native_dpdk -- common/autobuild_common.sh@201 -- $ uname -s
00:04:07.675   18:48:39 build_native_dpdk -- common/autobuild_common.sh@201 -- $ [[ Linux == \F\r\e\e\B\S\D ]]
00:04:07.676   18:48:39 build_native_dpdk -- common/autobuild_common.sh@214 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install
00:04:07.676  ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp'
00:04:07.676  [0/1] Installing files.
00:04:07.942  Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples
00:04:07.942  Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app
00:04:07.942  Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app
00:04:07.942  Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:04:07.942  Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:04:07.942  Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:04:07.943  Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:04:07.943  Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:04:07.943  Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:04:07.943  Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:04:07.943  Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:04:07.943  Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:04:07.943  Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:04:07.943  Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:04:07.943  Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:04:07.943  Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:04:07.943  Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:04:07.943  Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common
00:04:07.943  Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec
00:04:07.943  Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon
00:04:07.943  Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse
00:04:07.943  Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor
00:04:07.943  Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor
00:04:07.943  Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma
00:04:07.943  Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma
00:04:07.943  Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool
00:04:07.943  Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:04:07.943  Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:04:07.943  Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:04:07.943  Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:04:07.943  Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib
00:04:07.943  Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib
00:04:07.943  Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib
00:04:07.943  Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:04:07.943  Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:04:07.943  Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:04:07.943  Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:04:07.943  Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:04:07.943  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:04:07.943  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:04:07.943  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:04:07.943  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:04:07.943  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:04:07.943  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:04:07.943  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:04:07.943  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:04:07.943  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:04:07.943  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:04:07.943  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:04:07.943  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:04:07.943  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:04:07.943  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:04:07.943  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:04:07.943  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:04:07.943  Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering
00:04:07.943  Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering
00:04:07.943  Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering
00:04:07.943  Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld
00:04:07.943  Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld
00:04:07.943  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation
00:04:07.943  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation
00:04:07.943  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:07.943  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:07.943  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:07.943  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:07.943  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:07.943  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:07.944  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:07.944  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:07.944  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:07.944  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:07.944  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:07.944  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:07.944  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:07.944  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:07.944  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:07.944  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:07.944  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:07.944  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:07.944  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:07.944  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:07.944  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:07.944  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:07.944  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:07.944  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:07.944  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:07.944  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:07.944  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:07.944  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:04:07.944  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:04:07.944  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:04:07.944  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:04:07.944  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:04:07.944  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:04:07.944  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:04:07.944  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:04:07.944  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly
00:04:07.944  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly
00:04:07.944  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:07.944  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:07.944  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:07.944  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:07.944  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:07.944  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:07.944  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:07.944  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:07.944  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:07.944  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:07.944  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:07.944  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:07.944  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:07.944  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:07.944  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:07.944  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:07.944  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:07.944  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:07.944  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:07.944  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:07.944  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:07.944  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:07.944  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:07.944  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:07.944  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:07.944  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:07.944  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:07.944  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:07.944  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:07.944  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:07.944  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:07.944  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:07.944  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:07.944  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:07.944  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:07.944  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:07.944  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:07.944  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:07.944  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:07.944  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:07.944  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:07.944  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:07.944  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:07.945  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:04:07.946  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:04:07.947  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:04:07.947  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:04:07.947  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:04:07.947  Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq
00:04:07.947  Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq
00:04:07.947  Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb
00:04:07.947  Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb
00:04:07.947  Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:07.947  Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.206  Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.206  Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.206  Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.206  Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.206  Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.206  Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.206  Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.206  Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.206  Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.206  Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.206  Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.206  Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.206  Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.206  Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.206  Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.206  Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.206  Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.206  Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.206  Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.206  Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.206  Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.206  Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.206  Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.206  Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.206  Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.206  Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.206  Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.206  Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.206  Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.206  Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.206  Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.206  Installing lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.206  Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.206  Installing lib/librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.206  Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.206  Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.206  Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.206  Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.206  Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.206  Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.207  Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.207  Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.207  Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.207  Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.207  Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.207  Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.207  Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.207  Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.207  Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.207  Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.207  Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.207  Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.207  Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.207  Installing lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.207  Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.207  Installing lib/librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.207  Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.207  Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.207  Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.207  Installing lib/librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.207  Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.207  Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.207  Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.207  Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.207  Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.207  Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.207  Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.207  Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.207  Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.207  Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.207  Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.207  Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.207  Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.207  Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.207  Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.207  Installing lib/librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.207  Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.207  Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.207  Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.207  Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.207  Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.207  Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.207  Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.207  Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.207  Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.207  Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.207  Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.207  Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.207  Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.207  Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.207  Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.207  Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.207  Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.207  Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.207  Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.207  Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.207  Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.207  Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.207  Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.207  Installing lib/librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.207  Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.207  Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.207  Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.207  Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.207  Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.207  Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.207  Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.207  Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.468  Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.468  Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.468  Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.468  Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0
00:04:08.468  Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.468  Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0
00:04:08.468  Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.469  Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0
00:04:08.469  Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.469  Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0
00:04:08.469  Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin
00:04:08.469  Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin
00:04:08.469  Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin
00:04:08.469  Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin
00:04:08.469  Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin
00:04:08.469  Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin
00:04:08.469  Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin
00:04:08.469  Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:04:08.469  Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:04:08.469  Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:04:08.469  Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin
00:04:08.469  Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin
00:04:08.469  Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:04:08.469  Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin
00:04:08.469  Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin
00:04:08.469  Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin
00:04:08.469  Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin
00:04:08.469  Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin
00:04:08.469  Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin
00:04:08.469  Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.469  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.470  Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.471  Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.472  Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.472  Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.472  Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.472  Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.472  Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.472  Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.472  Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.472  Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.472  Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.472  Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.472  Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.472  Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.472  Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin
00:04:08.472  Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin
00:04:08.472  Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin
00:04:08.472  Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin
00:04:08.472  Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin
00:04:08.472  Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin
00:04:08.472  Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:08.472  Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
00:04:08.472  Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
00:04:08.472  Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24
00:04:08.472  Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so
00:04:08.472  Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24
00:04:08.472  Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so
00:04:08.472  Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24
00:04:08.472  Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so
00:04:08.472  Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24
00:04:08.472  Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so
00:04:08.472  Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24
00:04:08.472  Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so
00:04:08.472  Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24
00:04:08.472  Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so
00:04:08.472  Installing symlink pointing to librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24
00:04:08.472  Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so
00:04:08.472  Installing symlink pointing to librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24
00:04:08.472  Installing symlink pointing to librte_mbuf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so
00:04:08.472  Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24
00:04:08.472  Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so
00:04:08.472  Installing symlink pointing to librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24
00:04:08.472  Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so
00:04:08.472  Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24
00:04:08.472  Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so
00:04:08.472  Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24
00:04:08.472  Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so
00:04:08.472  Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24
00:04:08.472  Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so
00:04:08.472  Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24
00:04:08.472  Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so
00:04:08.472  Installing symlink pointing to librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24
00:04:08.472  Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so
00:04:08.472  Installing symlink pointing to librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24
00:04:08.472  Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so
00:04:08.472  Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24
00:04:08.472  Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so
00:04:08.472  Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24
00:04:08.472  Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so
00:04:08.472  Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24
00:04:08.472  Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so
00:04:08.472  Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24
00:04:08.472  Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so
00:04:08.472  Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24
00:04:08.472  Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so
00:04:08.472  Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24
00:04:08.472  Installing symlink pointing to librte_compressdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so
00:04:08.472  Installing symlink pointing to librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24
00:04:08.472  Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so
00:04:08.472  Installing symlink pointing to librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24
00:04:08.472  Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so
00:04:08.472  Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24
00:04:08.472  Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so
00:04:08.472  Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24
00:04:08.472  Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so
00:04:08.472  Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24
00:04:08.472  Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so
00:04:08.472  Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24
00:04:08.472  Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so
00:04:08.472  Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24
00:04:08.472  Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so
00:04:08.472  Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24
00:04:08.472  Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so
00:04:08.472  Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24
00:04:08.472  Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so
00:04:08.472  Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24
00:04:08.472  Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so
00:04:08.472  Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24
00:04:08.472  Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so
00:04:08.472  Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24
00:04:08.472  Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so
00:04:08.472  Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24
00:04:08.472  Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so
00:04:08.472  Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24
00:04:08.472  Installing symlink pointing to librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so
00:04:08.472  Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24
00:04:08.472  Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so
00:04:08.472  Installing symlink pointing to librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24
00:04:08.472  Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so
00:04:08.472  Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24
00:04:08.472  Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so
00:04:08.472  Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24
00:04:08.472  Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so
00:04:08.472  Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24
00:04:08.472  Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so
00:04:08.472  Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24
00:04:08.473  Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so
00:04:08.473  Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24
00:04:08.473  Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so
00:04:08.473  Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24
00:04:08.473  Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so
00:04:08.473  Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24
00:04:08.473  Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so
00:04:08.473  './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so'
00:04:08.473  './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24'
00:04:08.473  './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0'
00:04:08.473  './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so'
00:04:08.473  './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24'
00:04:08.473  './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0'
00:04:08.473  './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so'
00:04:08.473  './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24'
00:04:08.473  './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0'
00:04:08.473  './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so'
00:04:08.473  './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24'
00:04:08.473  './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0'
00:04:08.473  Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24
00:04:08.473  Installing symlink pointing to librte_stack.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so
00:04:08.473  Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24
00:04:08.473  Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so
00:04:08.473  Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24
00:04:08.473  Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so
00:04:08.473  Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24
00:04:08.473  Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so
00:04:08.473  Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24
00:04:08.473  Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so
00:04:08.473  Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24
00:04:08.473  Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so
00:04:08.473  Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24
00:04:08.473  Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so
00:04:08.473  Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24
00:04:08.473  Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so
00:04:08.473  Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24
00:04:08.473  Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so
00:04:08.473  Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24
00:04:08.473  Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so
00:04:08.473  Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24
00:04:08.473  Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so
00:04:08.473  Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24
00:04:08.473  Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so
00:04:08.473  Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24
00:04:08.473  Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so
00:04:08.473  Installing symlink pointing to librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24
00:04:08.473  Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so
00:04:08.473  Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24
00:04:08.473  Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so
00:04:08.473  Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0'
00:04:08.732   18:48:40 build_native_dpdk -- common/autobuild_common.sh@220 -- $ cat
00:04:08.732   18:48:40 build_native_dpdk -- common/autobuild_common.sh@225 -- $ cd /home/vagrant/spdk_repo/spdk
00:04:08.732  
00:04:08.732  real	0m59.748s
00:04:08.732  user	7m15.026s
00:04:08.732  sys	1m7.409s
00:04:08.732  ************************************
00:04:08.732  END TEST build_native_dpdk
00:04:08.732  ************************************
00:04:08.732   18:48:40 build_native_dpdk -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:04:08.732   18:48:40 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x
00:04:08.732   18:48:40  -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:04:08.732   18:48:40  -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:04:08.732   18:48:40  -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:04:08.732   18:48:40  -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:04:08.732   18:48:40  -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:04:08.732   18:48:40  -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:04:08.732   18:48:40  -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:04:08.732   18:48:40  -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang --with-shared
00:04:08.732  Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs...
00:04:08.991  DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib
00:04:08.991  DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include
00:04:08.991  Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:04:09.559  Using 'verbs' RDMA provider
00:04:25.003  Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:04:37.226  Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:04:37.226  go version go1.21.1 linux/amd64
00:04:37.226  Creating mk/config.mk...done.
00:04:37.226  Creating mk/cc.flags.mk...done.
00:04:37.226  Type 'make' to build.
00:04:37.226   18:49:08  -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:04:37.226   18:49:08  -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:04:37.226   18:49:08  -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:04:37.226   18:49:08  -- common/autotest_common.sh@10 -- $ set +x
00:04:37.226  ************************************
00:04:37.226  START TEST make
00:04:37.226  ************************************
00:04:37.226   18:49:08 make -- common/autotest_common.sh@1129 -- $ make -j10
00:04:38.614  The Meson build system
00:04:38.614  Version: 1.5.0
00:04:38.614  Source dir: /home/vagrant/spdk_repo/spdk/libvfio-user
00:04:38.614  Build dir: /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug
00:04:38.614  Build type: native build
00:04:38.614  Project name: libvfio-user
00:04:38.614  Project version: 0.0.1
00:04:38.614  C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:04:38.614  C linker for the host machine: gcc ld.bfd 2.40-14
00:04:38.614  Host machine cpu family: x86_64
00:04:38.614  Host machine cpu: x86_64
00:04:38.614  Run-time dependency threads found: YES
00:04:38.614  Library dl found: YES
00:04:38.614  Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:04:38.614  Run-time dependency json-c found: YES 0.17
00:04:38.614  Run-time dependency cmocka found: YES 1.1.7
00:04:38.614  Program pytest-3 found: NO
00:04:38.614  Program flake8 found: NO
00:04:38.614  Program misspell-fixer found: NO
00:04:38.614  Program restructuredtext-lint found: NO
00:04:38.614  Program valgrind found: YES (/usr/bin/valgrind)
00:04:38.614  Compiler for C supports arguments -Wno-missing-field-initializers: YES 
00:04:38.614  Compiler for C supports arguments -Wmissing-declarations: YES 
00:04:38.614  Compiler for C supports arguments -Wwrite-strings: YES 
00:04:38.614  ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:04:38.614  Program test-lspci.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-lspci.sh)
00:04:38.614  Program test-linkage.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-linkage.sh)
00:04:38.614  ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:04:38.614  Build targets in project: 8
00:04:38.614  WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:04:38.614   * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:04:38.614  
00:04:38.614  libvfio-user 0.0.1
00:04:38.614  
00:04:38.614    User defined options
00:04:38.614      buildtype      : debug
00:04:38.614      default_library: shared
00:04:38.614      libdir         : /usr/local/lib
00:04:38.614  
00:04:38.614  Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:04:38.873  ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug'
00:04:39.132  [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:04:39.132  [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:04:39.132  [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:04:39.132  [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:04:39.132  [5/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:04:39.132  [6/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:04:39.132  [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:04:39.132  [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:04:39.132  [9/37] Compiling C object samples/lspci.p/lspci.c.o
00:04:39.390  [10/37] Compiling C object samples/null.p/null.c.o
00:04:39.390  [11/37] Compiling C object samples/client.p/client.c.o
00:04:39.390  [12/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:04:39.390  [13/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:04:39.391  [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:04:39.391  [15/37] Compiling C object samples/server.p/server.c.o
00:04:39.391  [16/37] Compiling C object test/unit_tests.p/mocks.c.o
00:04:39.391  [17/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:04:39.391  [18/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:04:39.391  [19/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:04:39.391  [20/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:04:39.391  [21/37] Linking target samples/client
00:04:39.391  [22/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:04:39.391  [23/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:04:39.391  [24/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:04:39.649  [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:04:39.649  [26/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:04:39.649  [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:04:39.649  [28/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:04:39.649  [29/37] Linking target lib/libvfio-user.so.0.0.1
00:04:39.649  [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:04:39.649  [31/37] Linking target test/unit_tests
00:04:39.908  [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:04:39.908  [33/37] Linking target samples/gpio-pci-idio-16
00:04:39.908  [34/37] Linking target samples/shadow_ioeventfd_server
00:04:39.908  [35/37] Linking target samples/null
00:04:39.908  [36/37] Linking target samples/server
00:04:39.908  [37/37] Linking target samples/lspci
00:04:39.908  INFO: autodetecting backend as ninja
00:04:39.908  INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug
00:04:39.908  DESTDIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user meson install --quiet -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug
00:04:40.476  ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug'
00:04:40.476  ninja: no work to do.
00:05:27.151    CC lib/log/log.o
00:05:27.151    CC lib/log/log_flags.o
00:05:27.151    CC lib/log/log_deprecated.o
00:05:27.151    CC lib/ut/ut.o
00:05:27.151    CC lib/ut_mock/mock.o
00:05:27.151    LIB libspdk_ut_mock.a
00:05:27.151    LIB libspdk_ut.a
00:05:27.151    LIB libspdk_log.a
00:05:27.151    SO libspdk_ut.so.2.0
00:05:27.151    SO libspdk_ut_mock.so.6.0
00:05:27.151    SO libspdk_log.so.7.1
00:05:27.151    SYMLINK libspdk_ut_mock.so
00:05:27.151    SYMLINK libspdk_ut.so
00:05:27.151    SYMLINK libspdk_log.so
00:05:27.151    CC lib/util/base64.o
00:05:27.151    CC lib/util/bit_array.o
00:05:27.151    CC lib/util/cpuset.o
00:05:27.151    CC lib/dma/dma.o
00:05:27.151    CC lib/util/crc16.o
00:05:27.151    CC lib/util/crc32.o
00:05:27.151    CC lib/util/crc32c.o
00:05:27.151    CXX lib/trace_parser/trace.o
00:05:27.151    CC lib/ioat/ioat.o
00:05:27.151    CC lib/vfio_user/host/vfio_user_pci.o
00:05:27.151    CC lib/vfio_user/host/vfio_user.o
00:05:27.151    CC lib/util/crc32_ieee.o
00:05:27.151    CC lib/util/crc64.o
00:05:27.151    CC lib/util/dif.o
00:05:27.151    CC lib/util/fd.o
00:05:27.151    LIB libspdk_dma.a
00:05:27.151    CC lib/util/fd_group.o
00:05:27.151    SO libspdk_dma.so.5.0
00:05:27.151    SYMLINK libspdk_dma.so
00:05:27.151    CC lib/util/file.o
00:05:27.151    CC lib/util/hexlify.o
00:05:27.151    CC lib/util/iov.o
00:05:27.151    LIB libspdk_ioat.a
00:05:27.151    SO libspdk_ioat.so.7.0
00:05:27.151    LIB libspdk_vfio_user.a
00:05:27.151    CC lib/util/math.o
00:05:27.151    CC lib/util/net.o
00:05:27.151    SYMLINK libspdk_ioat.so
00:05:27.151    CC lib/util/pipe.o
00:05:27.151    SO libspdk_vfio_user.so.5.0
00:05:27.151    CC lib/util/strerror_tls.o
00:05:27.151    SYMLINK libspdk_vfio_user.so
00:05:27.151    CC lib/util/string.o
00:05:27.151    CC lib/util/uuid.o
00:05:27.151    CC lib/util/xor.o
00:05:27.151    CC lib/util/zipf.o
00:05:27.151    CC lib/util/md5.o
00:05:27.151    LIB libspdk_util.a
00:05:27.151    SO libspdk_util.so.10.1
00:05:27.151    LIB libspdk_trace_parser.a
00:05:27.151    SO libspdk_trace_parser.so.6.0
00:05:27.151    SYMLINK libspdk_util.so
00:05:27.151    SYMLINK libspdk_trace_parser.so
00:05:27.151    CC lib/conf/conf.o
00:05:27.151    CC lib/json/json_parse.o
00:05:27.151    CC lib/json/json_util.o
00:05:27.151    CC lib/env_dpdk/env.o
00:05:27.151    CC lib/json/json_write.o
00:05:27.151    CC lib/env_dpdk/memory.o
00:05:27.151    CC lib/env_dpdk/pci.o
00:05:27.151    CC lib/idxd/idxd.o
00:05:27.151    CC lib/vmd/vmd.o
00:05:27.151    CC lib/rdma_utils/rdma_utils.o
00:05:27.151    LIB libspdk_conf.a
00:05:27.151    CC lib/vmd/led.o
00:05:27.151    CC lib/env_dpdk/init.o
00:05:27.151    SO libspdk_conf.so.6.0
00:05:27.151    LIB libspdk_rdma_utils.a
00:05:27.151    LIB libspdk_json.a
00:05:27.151    SO libspdk_rdma_utils.so.1.0
00:05:27.151    SYMLINK libspdk_conf.so
00:05:27.151    CC lib/env_dpdk/threads.o
00:05:27.151    SO libspdk_json.so.6.0
00:05:27.151    CC lib/env_dpdk/pci_ioat.o
00:05:27.151    SYMLINK libspdk_rdma_utils.so
00:05:27.151    CC lib/env_dpdk/pci_virtio.o
00:05:27.151    CC lib/env_dpdk/pci_vmd.o
00:05:27.151    SYMLINK libspdk_json.so
00:05:27.151    CC lib/env_dpdk/pci_idxd.o
00:05:27.151    CC lib/env_dpdk/pci_event.o
00:05:27.151    CC lib/env_dpdk/sigbus_handler.o
00:05:27.151    CC lib/idxd/idxd_user.o
00:05:27.151    CC lib/env_dpdk/pci_dpdk.o
00:05:27.151    CC lib/env_dpdk/pci_dpdk_2207.o
00:05:27.151    LIB libspdk_vmd.a
00:05:27.151    SO libspdk_vmd.so.6.0
00:05:27.151    CC lib/rdma_provider/common.o
00:05:27.410    CC lib/jsonrpc/jsonrpc_server.o
00:05:27.410    CC lib/env_dpdk/pci_dpdk_2211.o
00:05:27.410    CC lib/jsonrpc/jsonrpc_server_tcp.o
00:05:27.410    SYMLINK libspdk_vmd.so
00:05:27.410    CC lib/jsonrpc/jsonrpc_client.o
00:05:27.410    CC lib/jsonrpc/jsonrpc_client_tcp.o
00:05:27.410    CC lib/idxd/idxd_kernel.o
00:05:27.410    CC lib/rdma_provider/rdma_provider_verbs.o
00:05:27.677    LIB libspdk_idxd.a
00:05:27.677    LIB libspdk_jsonrpc.a
00:05:27.677    SO libspdk_idxd.so.12.1
00:05:27.677    LIB libspdk_rdma_provider.a
00:05:27.677    SO libspdk_jsonrpc.so.6.0
00:05:27.677    SO libspdk_rdma_provider.so.7.0
00:05:27.677    SYMLINK libspdk_idxd.so
00:05:27.677    SYMLINK libspdk_jsonrpc.so
00:05:27.677    SYMLINK libspdk_rdma_provider.so
00:05:27.949    LIB libspdk_env_dpdk.a
00:05:27.949    CC lib/rpc/rpc.o
00:05:27.949    SO libspdk_env_dpdk.so.15.1
00:05:28.208    SYMLINK libspdk_env_dpdk.so
00:05:28.208    LIB libspdk_rpc.a
00:05:28.208    SO libspdk_rpc.so.6.0
00:05:28.208    SYMLINK libspdk_rpc.so
00:05:28.467    CC lib/keyring/keyring.o
00:05:28.467    CC lib/trace/trace.o
00:05:28.467    CC lib/keyring/keyring_rpc.o
00:05:28.467    CC lib/trace/trace_flags.o
00:05:28.467    CC lib/trace/trace_rpc.o
00:05:28.467    CC lib/notify/notify.o
00:05:28.467    CC lib/notify/notify_rpc.o
00:05:28.725    LIB libspdk_notify.a
00:05:28.725    SO libspdk_notify.so.6.0
00:05:28.725    LIB libspdk_keyring.a
00:05:28.725    LIB libspdk_trace.a
00:05:28.725    SYMLINK libspdk_notify.so
00:05:28.984    SO libspdk_keyring.so.2.0
00:05:28.984    SO libspdk_trace.so.11.0
00:05:28.984    SYMLINK libspdk_keyring.so
00:05:28.984    SYMLINK libspdk_trace.so
00:05:29.242    CC lib/thread/iobuf.o
00:05:29.242    CC lib/thread/thread.o
00:05:29.242    CC lib/sock/sock.o
00:05:29.242    CC lib/sock/sock_rpc.o
00:05:29.809    LIB libspdk_sock.a
00:05:29.809    SO libspdk_sock.so.10.0
00:05:29.809    SYMLINK libspdk_sock.so
00:05:30.067    CC lib/nvme/nvme_ctrlr_cmd.o
00:05:30.067    CC lib/nvme/nvme_ctrlr.o
00:05:30.067    CC lib/nvme/nvme_fabric.o
00:05:30.067    CC lib/nvme/nvme_ns_cmd.o
00:05:30.067    CC lib/nvme/nvme_ns.o
00:05:30.067    CC lib/nvme/nvme_pcie_common.o
00:05:30.067    CC lib/nvme/nvme_pcie.o
00:05:30.067    CC lib/nvme/nvme_qpair.o
00:05:30.067    CC lib/nvme/nvme.o
00:05:31.002    LIB libspdk_thread.a
00:05:31.002    SO libspdk_thread.so.11.0
00:05:31.002    CC lib/nvme/nvme_quirks.o
00:05:31.002    CC lib/nvme/nvme_transport.o
00:05:31.002    SYMLINK libspdk_thread.so
00:05:31.002    CC lib/nvme/nvme_discovery.o
00:05:31.002    CC lib/nvme/nvme_ctrlr_ocssd_cmd.o
00:05:31.002    CC lib/nvme/nvme_ns_ocssd_cmd.o
00:05:31.002    CC lib/nvme/nvme_tcp.o
00:05:31.002    CC lib/accel/accel.o
00:05:31.259    CC lib/nvme/nvme_opal.o
00:05:31.259    CC lib/nvme/nvme_io_msg.o
00:05:31.517    CC lib/accel/accel_rpc.o
00:05:31.774    CC lib/accel/accel_sw.o
00:05:31.774    CC lib/nvme/nvme_poll_group.o
00:05:31.774    CC lib/nvme/nvme_zns.o
00:05:31.774    CC lib/nvme/nvme_stubs.o
00:05:31.774    CC lib/nvme/nvme_auth.o
00:05:31.774    CC lib/nvme/nvme_cuse.o
00:05:32.032    CC lib/blob/blobstore.o
00:05:32.032    CC lib/init/json_config.o
00:05:32.290    LIB libspdk_accel.a
00:05:32.290    SO libspdk_accel.so.16.0
00:05:32.290    SYMLINK libspdk_accel.so
00:05:32.290    CC lib/blob/request.o
00:05:32.290    CC lib/nvme/nvme_vfio_user.o
00:05:32.290    CC lib/blob/zeroes.o
00:05:32.290    CC lib/init/subsystem.o
00:05:32.290    CC lib/blob/blob_bs_dev.o
00:05:32.548    CC lib/init/subsystem_rpc.o
00:05:32.548    CC lib/init/rpc.o
00:05:32.548    CC lib/virtio/virtio.o
00:05:32.806    CC lib/virtio/virtio_vhost_user.o
00:05:32.806    CC lib/virtio/virtio_vfio_user.o
00:05:32.806    CC lib/virtio/virtio_pci.o
00:05:32.806    CC lib/nvme/nvme_rdma.o
00:05:32.806    CC lib/vfu_tgt/tgt_endpoint.o
00:05:32.806    LIB libspdk_init.a
00:05:32.806    SO libspdk_init.so.6.0
00:05:33.064    SYMLINK libspdk_init.so
00:05:33.064    CC lib/vfu_tgt/tgt_rpc.o
00:05:33.064    CC lib/fsdev/fsdev.o
00:05:33.064    CC lib/fsdev/fsdev_io.o
00:05:33.064    LIB libspdk_virtio.a
00:05:33.064    CC lib/fsdev/fsdev_rpc.o
00:05:33.064    SO libspdk_virtio.so.7.0
00:05:33.064    LIB libspdk_vfu_tgt.a
00:05:33.064    CC lib/bdev/bdev.o
00:05:33.064    CC lib/bdev/bdev_rpc.o
00:05:33.064    SYMLINK libspdk_virtio.so
00:05:33.064    CC lib/event/app.o
00:05:33.064    CC lib/bdev/bdev_zone.o
00:05:33.064    SO libspdk_vfu_tgt.so.3.0
00:05:33.322    CC lib/bdev/part.o
00:05:33.322    SYMLINK libspdk_vfu_tgt.so
00:05:33.322    CC lib/bdev/scsi_nvme.o
00:05:33.322    CC lib/event/reactor.o
00:05:33.322    CC lib/event/log_rpc.o
00:05:33.322    CC lib/event/app_rpc.o
00:05:33.580    CC lib/event/scheduler_static.o
00:05:33.580    LIB libspdk_fsdev.a
00:05:33.580    SO libspdk_fsdev.so.2.0
00:05:33.580    SYMLINK libspdk_fsdev.so
00:05:33.838    LIB libspdk_event.a
00:05:33.838    SO libspdk_event.so.14.0
00:05:33.838    SYMLINK libspdk_event.so
00:05:34.096    CC lib/fuse_dispatcher/fuse_dispatcher.o
00:05:34.096    LIB libspdk_nvme.a
00:05:34.355    SO libspdk_nvme.so.15.0
00:05:34.613    LIB libspdk_fuse_dispatcher.a
00:05:34.613    SYMLINK libspdk_nvme.so
00:05:34.613    SO libspdk_fuse_dispatcher.so.1.0
00:05:34.613    SYMLINK libspdk_fuse_dispatcher.so
00:05:35.180    LIB libspdk_blob.a
00:05:35.180    SO libspdk_blob.so.12.0
00:05:35.180    SYMLINK libspdk_blob.so
00:05:35.437    CC lib/lvol/lvol.o
00:05:35.437    CC lib/blobfs/blobfs.o
00:05:35.437    CC lib/blobfs/tree.o
00:05:36.004    LIB libspdk_bdev.a
00:05:36.004    SO libspdk_bdev.so.17.0
00:05:36.004    SYMLINK libspdk_bdev.so
00:05:36.262    CC lib/scsi/dev.o
00:05:36.262    CC lib/scsi/lun.o
00:05:36.262    CC lib/scsi/port.o
00:05:36.262    CC lib/scsi/scsi.o
00:05:36.262    CC lib/ftl/ftl_core.o
00:05:36.262    CC lib/nvmf/ctrlr.o
00:05:36.262    CC lib/ublk/ublk.o
00:05:36.262    CC lib/nbd/nbd.o
00:05:36.262    LIB libspdk_blobfs.a
00:05:36.262    SO libspdk_blobfs.so.11.0
00:05:36.521    CC lib/nbd/nbd_rpc.o
00:05:36.521    LIB libspdk_lvol.a
00:05:36.521    SYMLINK libspdk_blobfs.so
00:05:36.521    CC lib/ublk/ublk_rpc.o
00:05:36.521    CC lib/ftl/ftl_init.o
00:05:36.521    SO libspdk_lvol.so.11.0
00:05:36.521    SYMLINK libspdk_lvol.so
00:05:36.521    CC lib/scsi/scsi_bdev.o
00:05:36.521    CC lib/scsi/scsi_pr.o
00:05:36.521    CC lib/scsi/scsi_rpc.o
00:05:36.521    CC lib/nvmf/ctrlr_discovery.o
00:05:36.521    CC lib/ftl/ftl_layout.o
00:05:36.779    CC lib/ftl/ftl_debug.o
00:05:36.779    CC lib/scsi/task.o
00:05:36.779    LIB libspdk_nbd.a
00:05:36.779    CC lib/ftl/ftl_io.o
00:05:36.779    SO libspdk_nbd.so.7.0
00:05:36.779    SYMLINK libspdk_nbd.so
00:05:36.779    CC lib/ftl/ftl_sb.o
00:05:36.779    CC lib/nvmf/ctrlr_bdev.o
00:05:37.038    CC lib/nvmf/subsystem.o
00:05:37.038    CC lib/ftl/ftl_l2p.o
00:05:37.038    CC lib/nvmf/nvmf.o
00:05:37.038    LIB libspdk_ublk.a
00:05:37.038    SO libspdk_ublk.so.3.0
00:05:37.038    LIB libspdk_scsi.a
00:05:37.038    CC lib/nvmf/nvmf_rpc.o
00:05:37.038    SO libspdk_scsi.so.9.0
00:05:37.038    SYMLINK libspdk_ublk.so
00:05:37.038    CC lib/nvmf/transport.o
00:05:37.038    CC lib/nvmf/tcp.o
00:05:37.038    CC lib/nvmf/stubs.o
00:05:37.296    SYMLINK libspdk_scsi.so
00:05:37.296    CC lib/ftl/ftl_l2p_flat.o
00:05:37.296    CC lib/nvmf/mdns_server.o
00:05:37.296    CC lib/ftl/ftl_nv_cache.o
00:05:37.554    CC lib/nvmf/vfio_user.o
00:05:37.554    CC lib/nvmf/rdma.o
00:05:37.554    CC lib/nvmf/auth.o
00:05:37.813    CC lib/ftl/ftl_band.o
00:05:37.813    CC lib/ftl/ftl_band_ops.o
00:05:37.813    CC lib/ftl/ftl_writer.o
00:05:38.076    CC lib/ftl/ftl_rq.o
00:05:38.076    CC lib/ftl/ftl_reloc.o
00:05:38.076    CC lib/ftl/ftl_l2p_cache.o
00:05:38.405    CC lib/iscsi/conn.o
00:05:38.405    CC lib/iscsi/init_grp.o
00:05:38.405    CC lib/iscsi/iscsi.o
00:05:38.405    CC lib/vhost/vhost.o
00:05:38.405    CC lib/vhost/vhost_rpc.o
00:05:38.405    CC lib/vhost/vhost_scsi.o
00:05:38.662    CC lib/vhost/vhost_blk.o
00:05:38.662    CC lib/vhost/rte_vhost_user.o
00:05:38.662    CC lib/ftl/ftl_p2l.o
00:05:38.921    CC lib/ftl/ftl_p2l_log.o
00:05:39.179    CC lib/ftl/mngt/ftl_mngt.o
00:05:39.179    CC lib/ftl/mngt/ftl_mngt_bdev.o
00:05:39.179    CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:05:39.179    CC lib/iscsi/param.o
00:05:39.437    CC lib/ftl/mngt/ftl_mngt_startup.o
00:05:39.437    CC lib/ftl/mngt/ftl_mngt_md.o
00:05:39.437    CC lib/iscsi/portal_grp.o
00:05:39.437    CC lib/ftl/mngt/ftl_mngt_misc.o
00:05:39.437    CC lib/iscsi/tgt_node.o
00:05:39.437    CC lib/ftl/mngt/ftl_mngt_ioch.o
00:05:39.437    CC lib/ftl/mngt/ftl_mngt_l2p.o
00:05:39.696    LIB libspdk_nvmf.a
00:05:39.696    CC lib/ftl/mngt/ftl_mngt_band.o
00:05:39.696    LIB libspdk_vhost.a
00:05:39.696    CC lib/iscsi/iscsi_subsystem.o
00:05:39.696    CC lib/ftl/mngt/ftl_mngt_self_test.o
00:05:39.696    CC lib/ftl/mngt/ftl_mngt_p2l.o
00:05:39.696    CC lib/ftl/mngt/ftl_mngt_recovery.o
00:05:39.696    SO libspdk_vhost.so.8.0
00:05:39.696    CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:05:39.696    SO libspdk_nvmf.so.20.0
00:05:39.696    CC lib/ftl/utils/ftl_conf.o
00:05:39.954    SYMLINK libspdk_vhost.so
00:05:39.954    CC lib/ftl/utils/ftl_md.o
00:05:39.954    CC lib/iscsi/iscsi_rpc.o
00:05:39.954    CC lib/iscsi/task.o
00:05:39.954    CC lib/ftl/utils/ftl_mempool.o
00:05:39.954    CC lib/ftl/utils/ftl_bitmap.o
00:05:39.954    CC lib/ftl/utils/ftl_property.o
00:05:39.954    SYMLINK libspdk_nvmf.so
00:05:39.954    CC lib/ftl/utils/ftl_layout_tracker_bdev.o
00:05:39.954    CC lib/ftl/upgrade/ftl_layout_upgrade.o
00:05:40.212    CC lib/ftl/upgrade/ftl_sb_upgrade.o
00:05:40.212    CC lib/ftl/upgrade/ftl_p2l_upgrade.o
00:05:40.212    CC lib/ftl/upgrade/ftl_band_upgrade.o
00:05:40.212    CC lib/ftl/upgrade/ftl_chunk_upgrade.o
00:05:40.212    CC lib/ftl/upgrade/ftl_trim_upgrade.o
00:05:40.212    CC lib/ftl/upgrade/ftl_sb_v3.o
00:05:40.212    CC lib/ftl/upgrade/ftl_sb_v5.o
00:05:40.212    CC lib/ftl/nvc/ftl_nvc_dev.o
00:05:40.212    CC lib/ftl/nvc/ftl_nvc_bdev_vss.o
00:05:40.212    LIB libspdk_iscsi.a
00:05:40.470    CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o
00:05:40.470    SO libspdk_iscsi.so.8.0
00:05:40.470    CC lib/ftl/nvc/ftl_nvc_bdev_common.o
00:05:40.470    CC lib/ftl/base/ftl_base_dev.o
00:05:40.470    CC lib/ftl/base/ftl_base_bdev.o
00:05:40.470    CC lib/ftl/ftl_trace.o
00:05:40.470    SYMLINK libspdk_iscsi.so
00:05:40.728    LIB libspdk_ftl.a
00:05:40.985    SO libspdk_ftl.so.9.0
00:05:41.243    SYMLINK libspdk_ftl.so
00:05:41.501    CC module/env_dpdk/env_dpdk_rpc.o
00:05:41.501    CC module/vfu_device/vfu_virtio.o
00:05:41.501    CC module/fsdev/aio/fsdev_aio.o
00:05:41.501    CC module/scheduler/dynamic/scheduler_dynamic.o
00:05:41.501    CC module/scheduler/gscheduler/gscheduler.o
00:05:41.501    CC module/sock/posix/posix.o
00:05:41.501    CC module/blob/bdev/blob_bdev.o
00:05:41.501    CC module/scheduler/dpdk_governor/dpdk_governor.o
00:05:41.501    CC module/accel/error/accel_error.o
00:05:41.759    CC module/keyring/file/keyring.o
00:05:41.759    LIB libspdk_env_dpdk_rpc.a
00:05:41.759    SO libspdk_env_dpdk_rpc.so.6.0
00:05:41.759    SYMLINK libspdk_env_dpdk_rpc.so
00:05:41.759    CC module/keyring/file/keyring_rpc.o
00:05:41.759    LIB libspdk_scheduler_gscheduler.a
00:05:41.759    LIB libspdk_scheduler_dpdk_governor.a
00:05:41.759    SO libspdk_scheduler_gscheduler.so.4.0
00:05:41.759    SO libspdk_scheduler_dpdk_governor.so.4.0
00:05:41.759    LIB libspdk_scheduler_dynamic.a
00:05:41.759    CC module/accel/error/accel_error_rpc.o
00:05:41.759    SO libspdk_scheduler_dynamic.so.4.0
00:05:41.759    SYMLINK libspdk_scheduler_gscheduler.so
00:05:41.759    SYMLINK libspdk_scheduler_dpdk_governor.so
00:05:41.759    CC module/vfu_device/vfu_virtio_blk.o
00:05:41.759    CC module/vfu_device/vfu_virtio_scsi.o
00:05:41.759    LIB libspdk_keyring_file.a
00:05:42.020    LIB libspdk_blob_bdev.a
00:05:42.020    SO libspdk_keyring_file.so.2.0
00:05:42.020    SYMLINK libspdk_scheduler_dynamic.so
00:05:42.020    CC module/vfu_device/vfu_virtio_rpc.o
00:05:42.020    SO libspdk_blob_bdev.so.12.0
00:05:42.020    CC module/accel/ioat/accel_ioat.o
00:05:42.020    SYMLINK libspdk_keyring_file.so
00:05:42.020    LIB libspdk_accel_error.a
00:05:42.021    SYMLINK libspdk_blob_bdev.so
00:05:42.021    CC module/accel/ioat/accel_ioat_rpc.o
00:05:42.021    SO libspdk_accel_error.so.2.0
00:05:42.021    SYMLINK libspdk_accel_error.so
00:05:42.021    CC module/vfu_device/vfu_virtio_fs.o
00:05:42.021    CC module/fsdev/aio/fsdev_aio_rpc.o
00:05:42.280    CC module/fsdev/aio/linux_aio_mgr.o
00:05:42.280    LIB libspdk_accel_ioat.a
00:05:42.280    CC module/keyring/linux/keyring.o
00:05:42.280    CC module/keyring/linux/keyring_rpc.o
00:05:42.280    SO libspdk_accel_ioat.so.6.0
00:05:42.280    SYMLINK libspdk_accel_ioat.so
00:05:42.280    LIB libspdk_keyring_linux.a
00:05:42.280    LIB libspdk_fsdev_aio.a
00:05:42.280    LIB libspdk_vfu_device.a
00:05:42.280    SO libspdk_keyring_linux.so.1.0
00:05:42.280    SO libspdk_fsdev_aio.so.1.0
00:05:42.538    LIB libspdk_sock_posix.a
00:05:42.538    SO libspdk_vfu_device.so.3.0
00:05:42.538    SYMLINK libspdk_keyring_linux.so
00:05:42.538    SO libspdk_sock_posix.so.6.0
00:05:42.538    CC module/bdev/delay/vbdev_delay.o
00:05:42.538    SYMLINK libspdk_fsdev_aio.so
00:05:42.538    CC module/accel/dsa/accel_dsa.o
00:05:42.538    CC module/accel/iaa/accel_iaa.o
00:05:42.538    SYMLINK libspdk_vfu_device.so
00:05:42.538    CC module/bdev/error/vbdev_error.o
00:05:42.538    CC module/bdev/gpt/gpt.o
00:05:42.538    SYMLINK libspdk_sock_posix.so
00:05:42.538    CC module/blobfs/bdev/blobfs_bdev.o
00:05:42.538    CC module/bdev/lvol/vbdev_lvol.o
00:05:42.538    CC module/bdev/malloc/bdev_malloc.o
00:05:42.796    CC module/bdev/null/bdev_null.o
00:05:42.796    CC module/accel/iaa/accel_iaa_rpc.o
00:05:42.796    CC module/bdev/gpt/vbdev_gpt.o
00:05:42.796    CC module/bdev/nvme/bdev_nvme.o
00:05:42.796    CC module/accel/dsa/accel_dsa_rpc.o
00:05:42.796    CC module/blobfs/bdev/blobfs_bdev_rpc.o
00:05:42.796    CC module/bdev/error/vbdev_error_rpc.o
00:05:42.796    CC module/bdev/delay/vbdev_delay_rpc.o
00:05:42.796    LIB libspdk_accel_iaa.a
00:05:42.796    SO libspdk_accel_iaa.so.3.0
00:05:42.796    LIB libspdk_accel_dsa.a
00:05:43.055    SO libspdk_accel_dsa.so.5.0
00:05:43.055    SYMLINK libspdk_accel_iaa.so
00:05:43.055    CC module/bdev/nvme/bdev_nvme_rpc.o
00:05:43.055    LIB libspdk_blobfs_bdev.a
00:05:43.055    LIB libspdk_bdev_error.a
00:05:43.055    CC module/bdev/null/bdev_null_rpc.o
00:05:43.055    SYMLINK libspdk_accel_dsa.so
00:05:43.055    SO libspdk_blobfs_bdev.so.6.0
00:05:43.055    SO libspdk_bdev_error.so.6.0
00:05:43.055    LIB libspdk_bdev_gpt.a
00:05:43.055    CC module/bdev/malloc/bdev_malloc_rpc.o
00:05:43.055    LIB libspdk_bdev_delay.a
00:05:43.055    SO libspdk_bdev_gpt.so.6.0
00:05:43.055    SO libspdk_bdev_delay.so.6.0
00:05:43.055    SYMLINK libspdk_bdev_error.so
00:05:43.055    SYMLINK libspdk_blobfs_bdev.so
00:05:43.055    SYMLINK libspdk_bdev_gpt.so
00:05:43.055    SYMLINK libspdk_bdev_delay.so
00:05:43.055    CC module/bdev/lvol/vbdev_lvol_rpc.o
00:05:43.313    LIB libspdk_bdev_null.a
00:05:43.313    CC module/bdev/passthru/vbdev_passthru.o
00:05:43.313    SO libspdk_bdev_null.so.6.0
00:05:43.313    LIB libspdk_bdev_malloc.a
00:05:43.313    SO libspdk_bdev_malloc.so.6.0
00:05:43.313    SYMLINK libspdk_bdev_null.so
00:05:43.313    CC module/bdev/raid/bdev_raid.o
00:05:43.313    CC module/bdev/split/vbdev_split.o
00:05:43.313    CC module/bdev/zone_block/vbdev_zone_block.o
00:05:43.313    SYMLINK libspdk_bdev_malloc.so
00:05:43.313    CC module/bdev/aio/bdev_aio.o
00:05:43.571    CC module/bdev/ftl/bdev_ftl.o
00:05:43.571    CC module/bdev/passthru/vbdev_passthru_rpc.o
00:05:43.571    CC module/bdev/iscsi/bdev_iscsi.o
00:05:43.571    LIB libspdk_bdev_lvol.a
00:05:43.571    CC module/bdev/split/vbdev_split_rpc.o
00:05:43.571    SO libspdk_bdev_lvol.so.6.0
00:05:43.571    CC module/bdev/ftl/bdev_ftl_rpc.o
00:05:43.571    SYMLINK libspdk_bdev_lvol.so
00:05:43.571    CC module/bdev/raid/bdev_raid_rpc.o
00:05:43.829    CC module/bdev/zone_block/vbdev_zone_block_rpc.o
00:05:43.829    LIB libspdk_bdev_passthru.a
00:05:43.829    SO libspdk_bdev_passthru.so.6.0
00:05:43.829    LIB libspdk_bdev_split.a
00:05:43.829    CC module/bdev/aio/bdev_aio_rpc.o
00:05:43.829    SO libspdk_bdev_split.so.6.0
00:05:43.829    CC module/bdev/raid/bdev_raid_sb.o
00:05:43.829    SYMLINK libspdk_bdev_passthru.so
00:05:43.829    CC module/bdev/raid/raid0.o
00:05:43.829    SYMLINK libspdk_bdev_split.so
00:05:43.829    CC module/bdev/raid/raid1.o
00:05:43.829    LIB libspdk_bdev_ftl.a
00:05:43.829    LIB libspdk_bdev_zone_block.a
00:05:43.829    CC module/bdev/raid/concat.o
00:05:43.829    SO libspdk_bdev_ftl.so.6.0
00:05:43.829    SO libspdk_bdev_zone_block.so.6.0
00:05:43.829    CC module/bdev/iscsi/bdev_iscsi_rpc.o
00:05:43.829    LIB libspdk_bdev_aio.a
00:05:44.088    SYMLINK libspdk_bdev_ftl.so
00:05:44.088    SO libspdk_bdev_aio.so.6.0
00:05:44.088    SYMLINK libspdk_bdev_zone_block.so
00:05:44.088    CC module/bdev/nvme/nvme_rpc.o
00:05:44.088    CC module/bdev/nvme/bdev_mdns_client.o
00:05:44.088    SYMLINK libspdk_bdev_aio.so
00:05:44.088    CC module/bdev/nvme/vbdev_opal.o
00:05:44.088    CC module/bdev/nvme/vbdev_opal_rpc.o
00:05:44.088    LIB libspdk_bdev_iscsi.a
00:05:44.088    CC module/bdev/nvme/bdev_nvme_cuse_rpc.o
00:05:44.088    SO libspdk_bdev_iscsi.so.6.0
00:05:44.088    SYMLINK libspdk_bdev_iscsi.so
00:05:44.346    CC module/bdev/virtio/bdev_virtio_scsi.o
00:05:44.346    CC module/bdev/virtio/bdev_virtio_blk.o
00:05:44.346    CC module/bdev/virtio/bdev_virtio_rpc.o
00:05:44.346    LIB libspdk_bdev_raid.a
00:05:44.346    SO libspdk_bdev_raid.so.6.0
00:05:44.346    SYMLINK libspdk_bdev_raid.so
00:05:44.605    LIB libspdk_bdev_virtio.a
00:05:44.863    SO libspdk_bdev_virtio.so.6.0
00:05:44.863    SYMLINK libspdk_bdev_virtio.so
00:05:45.121    LIB libspdk_bdev_nvme.a
00:05:45.380    SO libspdk_bdev_nvme.so.7.1
00:05:45.380    SYMLINK libspdk_bdev_nvme.so
00:05:45.946    CC module/event/subsystems/sock/sock.o
00:05:45.946    CC module/event/subsystems/iobuf/iobuf.o
00:05:45.946    CC module/event/subsystems/fsdev/fsdev.o
00:05:45.946    CC module/event/subsystems/iobuf/iobuf_rpc.o
00:05:45.946    CC module/event/subsystems/vmd/vmd.o
00:05:45.946    CC module/event/subsystems/vmd/vmd_rpc.o
00:05:45.946    CC module/event/subsystems/scheduler/scheduler.o
00:05:45.946    CC module/event/subsystems/vhost_blk/vhost_blk.o
00:05:45.946    CC module/event/subsystems/vfu_tgt/vfu_tgt.o
00:05:45.946    CC module/event/subsystems/keyring/keyring.o
00:05:45.946    LIB libspdk_event_scheduler.a
00:05:45.946    LIB libspdk_event_keyring.a
00:05:45.946    LIB libspdk_event_vfu_tgt.a
00:05:45.946    LIB libspdk_event_fsdev.a
00:05:45.946    LIB libspdk_event_vmd.a
00:05:45.946    LIB libspdk_event_vhost_blk.a
00:05:45.946    LIB libspdk_event_sock.a
00:05:46.204    SO libspdk_event_scheduler.so.4.0
00:05:46.204    LIB libspdk_event_iobuf.a
00:05:46.204    SO libspdk_event_keyring.so.1.0
00:05:46.204    SO libspdk_event_vfu_tgt.so.3.0
00:05:46.204    SO libspdk_event_vhost_blk.so.3.0
00:05:46.204    SO libspdk_event_fsdev.so.1.0
00:05:46.204    SO libspdk_event_vmd.so.6.0
00:05:46.204    SO libspdk_event_sock.so.5.0
00:05:46.204    SO libspdk_event_iobuf.so.3.0
00:05:46.204    SYMLINK libspdk_event_keyring.so
00:05:46.204    SYMLINK libspdk_event_scheduler.so
00:05:46.204    SYMLINK libspdk_event_vfu_tgt.so
00:05:46.204    SYMLINK libspdk_event_fsdev.so
00:05:46.204    SYMLINK libspdk_event_vhost_blk.so
00:05:46.204    SYMLINK libspdk_event_vmd.so
00:05:46.204    SYMLINK libspdk_event_sock.so
00:05:46.204    SYMLINK libspdk_event_iobuf.so
00:05:46.463    CC module/event/subsystems/accel/accel.o
00:05:46.721    LIB libspdk_event_accel.a
00:05:46.721    SO libspdk_event_accel.so.6.0
00:05:46.721    SYMLINK libspdk_event_accel.so
00:05:46.978    CC module/event/subsystems/bdev/bdev.o
00:05:47.236    LIB libspdk_event_bdev.a
00:05:47.237    SO libspdk_event_bdev.so.6.0
00:05:47.237    SYMLINK libspdk_event_bdev.so
00:05:47.495    CC module/event/subsystems/scsi/scsi.o
00:05:47.495    CC module/event/subsystems/nvmf/nvmf_tgt.o
00:05:47.495    CC module/event/subsystems/nvmf/nvmf_rpc.o
00:05:47.495    CC module/event/subsystems/nbd/nbd.o
00:05:47.495    CC module/event/subsystems/ublk/ublk.o
00:05:47.753    LIB libspdk_event_ublk.a
00:05:47.753    LIB libspdk_event_scsi.a
00:05:47.753    SO libspdk_event_scsi.so.6.0
00:05:47.753    SO libspdk_event_ublk.so.3.0
00:05:47.753    LIB libspdk_event_nbd.a
00:05:47.753    SO libspdk_event_nbd.so.6.0
00:05:47.753    SYMLINK libspdk_event_scsi.so
00:05:47.753    SYMLINK libspdk_event_ublk.so
00:05:48.010    SYMLINK libspdk_event_nbd.so
00:05:48.010    LIB libspdk_event_nvmf.a
00:05:48.010    SO libspdk_event_nvmf.so.6.0
00:05:48.010    SYMLINK libspdk_event_nvmf.so
00:05:48.268    CC module/event/subsystems/vhost_scsi/vhost_scsi.o
00:05:48.268    CC module/event/subsystems/iscsi/iscsi.o
00:05:48.268    LIB libspdk_event_vhost_scsi.a
00:05:48.268    LIB libspdk_event_iscsi.a
00:05:48.268    SO libspdk_event_vhost_scsi.so.3.0
00:05:48.531    SO libspdk_event_iscsi.so.6.0
00:05:48.531    SYMLINK libspdk_event_vhost_scsi.so
00:05:48.531    SYMLINK libspdk_event_iscsi.so
00:05:48.531    SO libspdk.so.6.0
00:05:48.531    SYMLINK libspdk.so
00:05:48.844    CC app/trace_record/trace_record.o
00:05:48.844    CXX app/trace/trace.o
00:05:48.844    TEST_HEADER include/spdk/accel.h
00:05:48.844    TEST_HEADER include/spdk/accel_module.h
00:05:48.844    TEST_HEADER include/spdk/assert.h
00:05:48.844    TEST_HEADER include/spdk/barrier.h
00:05:48.844    TEST_HEADER include/spdk/base64.h
00:05:48.844    TEST_HEADER include/spdk/bdev.h
00:05:48.844    TEST_HEADER include/spdk/bdev_module.h
00:05:48.844    TEST_HEADER include/spdk/bdev_zone.h
00:05:48.844    TEST_HEADER include/spdk/bit_array.h
00:05:48.844    TEST_HEADER include/spdk/bit_pool.h
00:05:48.844    TEST_HEADER include/spdk/blob_bdev.h
00:05:48.844    TEST_HEADER include/spdk/blobfs_bdev.h
00:05:48.844    TEST_HEADER include/spdk/blobfs.h
00:05:48.844    TEST_HEADER include/spdk/blob.h
00:05:48.844    TEST_HEADER include/spdk/conf.h
00:05:48.844    TEST_HEADER include/spdk/config.h
00:05:48.844    TEST_HEADER include/spdk/cpuset.h
00:05:48.844    TEST_HEADER include/spdk/crc16.h
00:05:48.844    TEST_HEADER include/spdk/crc32.h
00:05:48.844    TEST_HEADER include/spdk/crc64.h
00:05:48.844    TEST_HEADER include/spdk/dif.h
00:05:48.844    TEST_HEADER include/spdk/dma.h
00:05:48.844    TEST_HEADER include/spdk/endian.h
00:05:48.844    TEST_HEADER include/spdk/env_dpdk.h
00:05:48.844    TEST_HEADER include/spdk/env.h
00:05:48.844    CC app/nvmf_tgt/nvmf_main.o
00:05:49.103    TEST_HEADER include/spdk/event.h
00:05:49.103    TEST_HEADER include/spdk/fd_group.h
00:05:49.103    TEST_HEADER include/spdk/fd.h
00:05:49.103    TEST_HEADER include/spdk/file.h
00:05:49.103    TEST_HEADER include/spdk/fsdev.h
00:05:49.103    TEST_HEADER include/spdk/fsdev_module.h
00:05:49.103    TEST_HEADER include/spdk/ftl.h
00:05:49.103    TEST_HEADER include/spdk/gpt_spec.h
00:05:49.103    TEST_HEADER include/spdk/hexlify.h
00:05:49.103    TEST_HEADER include/spdk/histogram_data.h
00:05:49.103    TEST_HEADER include/spdk/idxd.h
00:05:49.103    TEST_HEADER include/spdk/idxd_spec.h
00:05:49.103    TEST_HEADER include/spdk/init.h
00:05:49.103    TEST_HEADER include/spdk/ioat.h
00:05:49.103    TEST_HEADER include/spdk/ioat_spec.h
00:05:49.103    TEST_HEADER include/spdk/iscsi_spec.h
00:05:49.103    CC examples/util/zipf/zipf.o
00:05:49.103    TEST_HEADER include/spdk/json.h
00:05:49.103    TEST_HEADER include/spdk/jsonrpc.h
00:05:49.103    TEST_HEADER include/spdk/keyring.h
00:05:49.103    TEST_HEADER include/spdk/keyring_module.h
00:05:49.103    TEST_HEADER include/spdk/likely.h
00:05:49.103    TEST_HEADER include/spdk/log.h
00:05:49.103    TEST_HEADER include/spdk/lvol.h
00:05:49.103    CC test/thread/poller_perf/poller_perf.o
00:05:49.103    CC examples/ioat/perf/perf.o
00:05:49.103    TEST_HEADER include/spdk/md5.h
00:05:49.103    TEST_HEADER include/spdk/memory.h
00:05:49.103    TEST_HEADER include/spdk/mmio.h
00:05:49.103    TEST_HEADER include/spdk/nbd.h
00:05:49.103    TEST_HEADER include/spdk/net.h
00:05:49.103    TEST_HEADER include/spdk/notify.h
00:05:49.103    TEST_HEADER include/spdk/nvme.h
00:05:49.103    TEST_HEADER include/spdk/nvme_intel.h
00:05:49.103    TEST_HEADER include/spdk/nvme_ocssd.h
00:05:49.103    TEST_HEADER include/spdk/nvme_ocssd_spec.h
00:05:49.103    TEST_HEADER include/spdk/nvme_spec.h
00:05:49.103    TEST_HEADER include/spdk/nvme_zns.h
00:05:49.103    TEST_HEADER include/spdk/nvmf_cmd.h
00:05:49.103    TEST_HEADER include/spdk/nvmf_fc_spec.h
00:05:49.103    TEST_HEADER include/spdk/nvmf.h
00:05:49.103    TEST_HEADER include/spdk/nvmf_spec.h
00:05:49.103    CC test/dma/test_dma/test_dma.o
00:05:49.103    TEST_HEADER include/spdk/nvmf_transport.h
00:05:49.103    TEST_HEADER include/spdk/opal.h
00:05:49.103    TEST_HEADER include/spdk/opal_spec.h
00:05:49.103    TEST_HEADER include/spdk/pci_ids.h
00:05:49.103    TEST_HEADER include/spdk/pipe.h
00:05:49.103    CC test/app/bdev_svc/bdev_svc.o
00:05:49.103    TEST_HEADER include/spdk/queue.h
00:05:49.103    TEST_HEADER include/spdk/reduce.h
00:05:49.103    TEST_HEADER include/spdk/rpc.h
00:05:49.103    TEST_HEADER include/spdk/scheduler.h
00:05:49.103    TEST_HEADER include/spdk/scsi.h
00:05:49.103    TEST_HEADER include/spdk/scsi_spec.h
00:05:49.103    TEST_HEADER include/spdk/sock.h
00:05:49.103    TEST_HEADER include/spdk/stdinc.h
00:05:49.103    TEST_HEADER include/spdk/string.h
00:05:49.103    TEST_HEADER include/spdk/thread.h
00:05:49.103    TEST_HEADER include/spdk/trace.h
00:05:49.103    TEST_HEADER include/spdk/trace_parser.h
00:05:49.103    TEST_HEADER include/spdk/tree.h
00:05:49.103    TEST_HEADER include/spdk/ublk.h
00:05:49.103    TEST_HEADER include/spdk/util.h
00:05:49.103    TEST_HEADER include/spdk/uuid.h
00:05:49.103    TEST_HEADER include/spdk/version.h
00:05:49.103    TEST_HEADER include/spdk/vfio_user_pci.h
00:05:49.103    TEST_HEADER include/spdk/vfio_user_spec.h
00:05:49.103    TEST_HEADER include/spdk/vhost.h
00:05:49.103    TEST_HEADER include/spdk/vmd.h
00:05:49.103    TEST_HEADER include/spdk/xor.h
00:05:49.103    TEST_HEADER include/spdk/zipf.h
00:05:49.103    CXX test/cpp_headers/accel.o
00:05:49.103    LINK spdk_trace_record
00:05:49.103    LINK nvmf_tgt
00:05:49.362    LINK zipf
00:05:49.362    LINK poller_perf
00:05:49.362    LINK bdev_svc
00:05:49.362    LINK ioat_perf
00:05:49.362    CXX test/cpp_headers/accel_module.o
00:05:49.362    LINK spdk_trace
00:05:49.362    CC test/app/histogram_perf/histogram_perf.o
00:05:49.620    CC test/app/jsoncat/jsoncat.o
00:05:49.621    CXX test/cpp_headers/assert.o
00:05:49.621    CC test/app/stub/stub.o
00:05:49.621    CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o
00:05:49.621    CC examples/ioat/verify/verify.o
00:05:49.621    LINK test_dma
00:05:49.621    LINK histogram_perf
00:05:49.621    LINK jsoncat
00:05:49.621    CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o
00:05:49.621    CXX test/cpp_headers/barrier.o
00:05:49.621    CC app/iscsi_tgt/iscsi_tgt.o
00:05:49.879    LINK stub
00:05:49.879    CXX test/cpp_headers/base64.o
00:05:49.879    CXX test/cpp_headers/bdev.o
00:05:49.879    LINK verify
00:05:49.879    CXX test/cpp_headers/bdev_module.o
00:05:49.879    LINK iscsi_tgt
00:05:49.879    CXX test/cpp_headers/bdev_zone.o
00:05:49.879    CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o
00:05:49.879    CXX test/cpp_headers/bit_array.o
00:05:50.138    LINK nvme_fuzz
00:05:50.138    CC examples/interrupt_tgt/interrupt_tgt.o
00:05:50.138    CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o
00:05:50.397    CXX test/cpp_headers/bit_pool.o
00:05:50.397    CXX test/cpp_headers/blob_bdev.o
00:05:50.397    CC examples/thread/thread/thread_ex.o
00:05:50.397    CC app/spdk_tgt/spdk_tgt.o
00:05:50.397    LINK interrupt_tgt
00:05:50.397    CXX test/cpp_headers/blobfs_bdev.o
00:05:50.656    CC test/event/event_perf/event_perf.o
00:05:50.656    CC test/env/vtophys/vtophys.o
00:05:50.656    LINK spdk_tgt
00:05:50.656    CC test/event/reactor/reactor.o
00:05:50.656    LINK thread
00:05:50.656    CC test/env/mem_callbacks/mem_callbacks.o
00:05:50.656    LINK vhost_fuzz
00:05:50.656    CXX test/cpp_headers/blobfs.o
00:05:50.656    LINK event_perf
00:05:50.656    LINK vtophys
00:05:50.656    LINK reactor
00:05:50.914    CXX test/cpp_headers/blob.o
00:05:50.914    CC app/spdk_lspci/spdk_lspci.o
00:05:50.914    CC app/spdk_nvme_perf/perf.o
00:05:50.914    CC test/rpc_client/rpc_client_test.o
00:05:50.914    CC test/event/reactor_perf/reactor_perf.o
00:05:50.914    CC app/spdk_nvme_identify/identify.o
00:05:51.173    CC test/nvme/aer/aer.o
00:05:51.173    LINK spdk_lspci
00:05:51.173    CXX test/cpp_headers/conf.o
00:05:51.173    LINK reactor_perf
00:05:51.173    LINK rpc_client_test
00:05:51.173    LINK mem_callbacks
00:05:51.173    CXX test/cpp_headers/config.o
00:05:51.173    CXX test/cpp_headers/cpuset.o
00:05:51.432    CC test/env/env_dpdk_post_init/env_dpdk_post_init.o
00:05:51.432    LINK aer
00:05:51.432    CC test/env/memory/memory_ut.o
00:05:51.432    CC test/event/app_repeat/app_repeat.o
00:05:51.432    LINK iscsi_fuzz
00:05:51.432    CC test/nvme/reset/reset.o
00:05:51.432    CXX test/cpp_headers/crc16.o
00:05:51.690    LINK env_dpdk_post_init
00:05:51.690    LINK app_repeat
00:05:51.690    CC app/spdk_nvme_discover/discovery_aer.o
00:05:51.690    CXX test/cpp_headers/crc32.o
00:05:51.948    LINK spdk_nvme_perf
00:05:51.948    CXX test/cpp_headers/crc64.o
00:05:51.948    LINK spdk_nvme_identify
00:05:51.948    LINK reset
00:05:51.948    CC app/spdk_top/spdk_top.o
00:05:51.948    LINK spdk_nvme_discover
00:05:51.948    CXX test/cpp_headers/dif.o
00:05:52.207    CC test/event/scheduler/scheduler.o
00:05:52.207    CC test/nvme/sgl/sgl.o
00:05:52.207    CXX test/cpp_headers/dma.o
00:05:52.207    CC app/vhost/vhost.o
00:05:52.207    CC test/accel/dif/dif.o
00:05:52.207    CC app/spdk_dd/spdk_dd.o
00:05:52.466    CC test/blobfs/mkfs/mkfs.o
00:05:52.466    CXX test/cpp_headers/endian.o
00:05:52.466    LINK scheduler
00:05:52.466    LINK vhost
00:05:52.466    LINK sgl
00:05:52.466    CXX test/cpp_headers/env_dpdk.o
00:05:52.724    LINK mkfs
00:05:52.724    CC test/nvme/e2edp/nvme_dp.o
00:05:52.724    LINK spdk_dd
00:05:52.724    LINK memory_ut
00:05:52.982    CXX test/cpp_headers/env.o
00:05:52.982    CXX test/cpp_headers/event.o
00:05:52.982    LINK spdk_top
00:05:52.982    CC app/fio/nvme/fio_plugin.o
00:05:52.982    CXX test/cpp_headers/fd_group.o
00:05:53.241    CXX test/cpp_headers/fd.o
00:05:53.241    LINK dif
00:05:53.241    CC test/lvol/esnap/esnap.o
00:05:53.241    LINK nvme_dp
00:05:53.241    CC app/fio/bdev/fio_plugin.o
00:05:53.241    CC test/env/pci/pci_ut.o
00:05:53.499    CXX test/cpp_headers/file.o
00:05:53.499    CXX test/cpp_headers/fsdev.o
00:05:53.499    CC examples/sock/hello_world/hello_sock.o
00:05:53.757    CXX test/cpp_headers/fsdev_module.o
00:05:53.757    CC test/nvme/overhead/overhead.o
00:05:53.757    CC examples/vmd/lsvmd/lsvmd.o
00:05:53.757    LINK pci_ut
00:05:54.015    LINK spdk_nvme
00:05:54.015    CC examples/idxd/perf/perf.o
00:05:54.015    LINK lsvmd
00:05:54.015    LINK hello_sock
00:05:54.015    CXX test/cpp_headers/ftl.o
00:05:54.015    CXX test/cpp_headers/gpt_spec.o
00:05:54.015    LINK spdk_bdev
00:05:54.015    LINK overhead
00:05:54.274    CXX test/cpp_headers/hexlify.o
00:05:54.274    LINK idxd_perf
00:05:54.274    CC examples/vmd/led/led.o
00:05:54.274    CC examples/fsdev/hello_world/hello_fsdev.o
00:05:54.534    CXX test/cpp_headers/histogram_data.o
00:05:54.534    CC test/bdev/bdevio/bdevio.o
00:05:54.534    CXX test/cpp_headers/idxd.o
00:05:54.534    CC test/nvme/err_injection/err_injection.o
00:05:54.534    CC examples/accel/perf/accel_perf.o
00:05:54.534    LINK led
00:05:54.534    CC examples/blob/hello_world/hello_blob.o
00:05:54.534    CXX test/cpp_headers/idxd_spec.o
00:05:54.792    LINK err_injection
00:05:54.792    CXX test/cpp_headers/init.o
00:05:54.792    LINK hello_fsdev
00:05:54.792    CC examples/blob/cli/blobcli.o
00:05:54.792    LINK hello_blob
00:05:54.792    LINK bdevio
00:05:54.792    CXX test/cpp_headers/ioat.o
00:05:55.051    LINK accel_perf
00:05:55.051    CC test/nvme/startup/startup.o
00:05:55.051    CC test/nvme/reserve/reserve.o
00:05:55.051    CXX test/cpp_headers/ioat_spec.o
00:05:55.051    CC examples/nvme/hello_world/hello_world.o
00:05:55.051    LINK startup
00:05:55.051    CC examples/nvme/reconnect/reconnect.o
00:05:55.051    CC examples/nvme/nvme_manage/nvme_manage.o
00:05:55.051    CC examples/nvme/arbitration/arbitration.o
00:05:55.310    CXX test/cpp_headers/iscsi_spec.o
00:05:55.310    LINK reserve
00:05:55.310    LINK blobcli
00:05:55.310    CC test/nvme/simple_copy/simple_copy.o
00:05:55.310    LINK hello_world
00:05:55.310    CXX test/cpp_headers/json.o
00:05:55.569    LINK reconnect
00:05:55.569    LINK arbitration
00:05:55.569    CXX test/cpp_headers/jsonrpc.o
00:05:55.569    CC examples/bdev/hello_world/hello_bdev.o
00:05:55.569    CXX test/cpp_headers/keyring.o
00:05:55.569    LINK simple_copy
00:05:55.569    CC examples/bdev/bdevperf/bdevperf.o
00:05:55.569    LINK nvme_manage
00:05:55.569    CC test/nvme/connect_stress/connect_stress.o
00:05:55.827    CXX test/cpp_headers/keyring_module.o
00:05:55.827    LINK hello_bdev
00:05:55.827    CC examples/nvme/hotplug/hotplug.o
00:05:55.827    LINK connect_stress
00:05:55.827    CC examples/nvme/cmb_copy/cmb_copy.o
00:05:55.827    CC examples/nvme/abort/abort.o
00:05:56.086    CC examples/nvme/pmr_persistence/pmr_persistence.o
00:05:56.086    CXX test/cpp_headers/likely.o
00:05:56.086    CXX test/cpp_headers/log.o
00:05:56.086    CXX test/cpp_headers/lvol.o
00:05:56.086    LINK cmb_copy
00:05:56.086    LINK hotplug
00:05:56.086    CC test/nvme/boot_partition/boot_partition.o
00:05:56.086    LINK pmr_persistence
00:05:56.086    CXX test/cpp_headers/md5.o
00:05:56.344    CC test/nvme/compliance/nvme_compliance.o
00:05:56.344    LINK abort
00:05:56.344    LINK boot_partition
00:05:56.344    CC test/nvme/fused_ordering/fused_ordering.o
00:05:56.344    CC test/nvme/fdp/fdp.o
00:05:56.344    CC test/nvme/doorbell_aers/doorbell_aers.o
00:05:56.344    CC test/nvme/cuse/cuse.o
00:05:56.344    CXX test/cpp_headers/memory.o
00:05:56.344    CXX test/cpp_headers/mmio.o
00:05:56.344    CXX test/cpp_headers/nbd.o
00:05:56.344    CXX test/cpp_headers/net.o
00:05:56.344    LINK bdevperf
00:05:56.603    LINK fused_ordering
00:05:56.603    LINK doorbell_aers
00:05:56.603    LINK nvme_compliance
00:05:56.603    CXX test/cpp_headers/notify.o
00:05:56.603    CXX test/cpp_headers/nvme.o
00:05:56.603    CXX test/cpp_headers/nvme_intel.o
00:05:56.603    LINK fdp
00:05:56.603    CXX test/cpp_headers/nvme_ocssd.o
00:05:56.603    CXX test/cpp_headers/nvme_ocssd_spec.o
00:05:56.861    CXX test/cpp_headers/nvme_spec.o
00:05:56.861    CXX test/cpp_headers/nvme_zns.o
00:05:56.861    CXX test/cpp_headers/nvmf_cmd.o
00:05:56.861    CXX test/cpp_headers/nvmf_fc_spec.o
00:05:56.861    CXX test/cpp_headers/nvmf.o
00:05:56.861    CXX test/cpp_headers/nvmf_spec.o
00:05:56.861    CXX test/cpp_headers/nvmf_transport.o
00:05:56.861    CXX test/cpp_headers/opal.o
00:05:56.861    CXX test/cpp_headers/opal_spec.o
00:05:56.861    CC examples/nvmf/nvmf/nvmf.o
00:05:57.119    CXX test/cpp_headers/pci_ids.o
00:05:57.119    CXX test/cpp_headers/pipe.o
00:05:57.119    CXX test/cpp_headers/queue.o
00:05:57.119    CXX test/cpp_headers/reduce.o
00:05:57.119    CXX test/cpp_headers/rpc.o
00:05:57.119    CXX test/cpp_headers/scheduler.o
00:05:57.119    CXX test/cpp_headers/scsi.o
00:05:57.119    CXX test/cpp_headers/scsi_spec.o
00:05:57.119    CXX test/cpp_headers/sock.o
00:05:57.119    CXX test/cpp_headers/stdinc.o
00:05:57.377    CXX test/cpp_headers/string.o
00:05:57.377    LINK nvmf
00:05:57.377    CXX test/cpp_headers/thread.o
00:05:57.377    CXX test/cpp_headers/trace.o
00:05:57.377    CXX test/cpp_headers/trace_parser.o
00:05:57.377    CXX test/cpp_headers/tree.o
00:05:57.377    CXX test/cpp_headers/ublk.o
00:05:57.377    CXX test/cpp_headers/util.o
00:05:57.377    CXX test/cpp_headers/uuid.o
00:05:57.377    CXX test/cpp_headers/version.o
00:05:57.377    CXX test/cpp_headers/vfio_user_pci.o
00:05:57.377    CXX test/cpp_headers/vfio_user_spec.o
00:05:57.377    CXX test/cpp_headers/vhost.o
00:05:57.377    CXX test/cpp_headers/vmd.o
00:05:57.377    CXX test/cpp_headers/xor.o
00:05:57.635    CXX test/cpp_headers/zipf.o
00:05:57.635    LINK cuse
00:05:58.572    LINK esnap
00:05:58.831  
00:05:58.831  real	1m21.855s
00:05:58.831  user	6m58.177s
00:05:58.831  sys	1m20.177s
00:05:58.831   18:50:30 make -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:05:58.831   18:50:30 make -- common/autotest_common.sh@10 -- $ set +x
00:05:58.831  ************************************
00:05:58.831  END TEST make
00:05:58.831  ************************************
00:05:58.831   18:50:30  -- spdk/autobuild.sh@1 -- $ stop_monitor_resources
00:05:58.831   18:50:30  -- pm/common@29 -- $ signal_monitor_resources TERM
00:05:58.831   18:50:30  -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:05:58.831   18:50:30  -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:05:58.831   18:50:30  -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:05:58.831   18:50:30  -- pm/common@44 -- $ pid=6032
00:05:58.831   18:50:30  -- pm/common@50 -- $ kill -TERM 6032
00:05:58.831   18:50:30  -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:05:58.831   18:50:30  -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:05:58.831   18:50:30  -- pm/common@44 -- $ pid=6034
00:05:58.831   18:50:30  -- pm/common@50 -- $ kill -TERM 6034
00:05:58.831   18:50:30  -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 ))
00:05:58.831   18:50:30  -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:05:58.831    18:50:30  -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:58.831     18:50:30  -- common/autotest_common.sh@1711 -- # lcov --version
00:05:58.831     18:50:30  -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:58.831    18:50:30  -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:58.831    18:50:30  -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:58.831    18:50:30  -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:58.831    18:50:30  -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:58.831    18:50:30  -- scripts/common.sh@336 -- # IFS=.-:
00:05:58.831    18:50:30  -- scripts/common.sh@336 -- # read -ra ver1
00:05:58.831    18:50:30  -- scripts/common.sh@337 -- # IFS=.-:
00:05:58.831    18:50:30  -- scripts/common.sh@337 -- # read -ra ver2
00:05:58.831    18:50:30  -- scripts/common.sh@338 -- # local 'op=<'
00:05:58.831    18:50:30  -- scripts/common.sh@340 -- # ver1_l=2
00:05:58.831    18:50:30  -- scripts/common.sh@341 -- # ver2_l=1
00:05:58.831    18:50:30  -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:58.831    18:50:30  -- scripts/common.sh@344 -- # case "$op" in
00:05:58.831    18:50:30  -- scripts/common.sh@345 -- # : 1
00:05:58.831    18:50:30  -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:58.831    18:50:30  -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:58.831     18:50:30  -- scripts/common.sh@365 -- # decimal 1
00:05:58.831     18:50:30  -- scripts/common.sh@353 -- # local d=1
00:05:58.831     18:50:30  -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:58.831     18:50:30  -- scripts/common.sh@355 -- # echo 1
00:05:58.831    18:50:30  -- scripts/common.sh@365 -- # ver1[v]=1
00:05:58.831     18:50:30  -- scripts/common.sh@366 -- # decimal 2
00:05:58.831     18:50:30  -- scripts/common.sh@353 -- # local d=2
00:05:58.831     18:50:30  -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:58.831     18:50:30  -- scripts/common.sh@355 -- # echo 2
00:05:58.831    18:50:30  -- scripts/common.sh@366 -- # ver2[v]=2
00:05:58.831    18:50:30  -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:58.831    18:50:30  -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:58.831    18:50:30  -- scripts/common.sh@368 -- # return 0
00:05:58.831    18:50:30  -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:58.831    18:50:30  -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:58.831  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:58.831  		--rc genhtml_branch_coverage=1
00:05:58.831  		--rc genhtml_function_coverage=1
00:05:58.831  		--rc genhtml_legend=1
00:05:58.831  		--rc geninfo_all_blocks=1
00:05:58.831  		--rc geninfo_unexecuted_blocks=1
00:05:58.831  		
00:05:58.831  		'
00:05:58.831    18:50:30  -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:58.831  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:58.831  		--rc genhtml_branch_coverage=1
00:05:58.831  		--rc genhtml_function_coverage=1
00:05:58.831  		--rc genhtml_legend=1
00:05:58.831  		--rc geninfo_all_blocks=1
00:05:58.831  		--rc geninfo_unexecuted_blocks=1
00:05:58.831  		
00:05:58.831  		'
00:05:58.831    18:50:30  -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:05:58.831  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:58.831  		--rc genhtml_branch_coverage=1
00:05:58.831  		--rc genhtml_function_coverage=1
00:05:58.831  		--rc genhtml_legend=1
00:05:58.831  		--rc geninfo_all_blocks=1
00:05:58.831  		--rc geninfo_unexecuted_blocks=1
00:05:58.831  		
00:05:58.831  		'
00:05:58.831    18:50:30  -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:05:58.831  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:58.831  		--rc genhtml_branch_coverage=1
00:05:58.831  		--rc genhtml_function_coverage=1
00:05:58.831  		--rc genhtml_legend=1
00:05:58.831  		--rc geninfo_all_blocks=1
00:05:58.831  		--rc geninfo_unexecuted_blocks=1
00:05:58.831  		
00:05:58.831  		'
00:05:58.831   18:50:30  -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:05:58.831     18:50:30  -- nvmf/common.sh@7 -- # uname -s
00:05:59.090    18:50:30  -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:05:59.090    18:50:30  -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:05:59.090    18:50:30  -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:05:59.090    18:50:30  -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:05:59.090    18:50:30  -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:05:59.090    18:50:30  -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:05:59.090    18:50:30  -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:05:59.090    18:50:30  -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:05:59.090    18:50:30  -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:05:59.090     18:50:30  -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:05:59.090    18:50:30  -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:05:59.090    18:50:30  -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:05:59.090    18:50:30  -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:05:59.090    18:50:30  -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:05:59.090    18:50:30  -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:05:59.090    18:50:30  -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:05:59.090    18:50:30  -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:05:59.090     18:50:30  -- scripts/common.sh@15 -- # shopt -s extglob
00:05:59.090     18:50:30  -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:05:59.090     18:50:30  -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:05:59.090     18:50:30  -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:05:59.090      18:50:30  -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:59.090      18:50:30  -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:59.090      18:50:30  -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:59.090      18:50:30  -- paths/export.sh@5 -- # export PATH
00:05:59.090      18:50:30  -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:59.090    18:50:30  -- nvmf/common.sh@51 -- # : 0
00:05:59.090    18:50:30  -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:05:59.090    18:50:30  -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:05:59.090    18:50:30  -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:05:59.090    18:50:30  -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:05:59.090    18:50:30  -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:05:59.090    18:50:30  -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:05:59.090  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:05:59.090    18:50:30  -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:05:59.090    18:50:30  -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:05:59.090    18:50:30  -- nvmf/common.sh@55 -- # have_pci_nics=0
00:05:59.090   18:50:30  -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']'
00:05:59.090    18:50:30  -- spdk/autotest.sh@32 -- # uname -s
00:05:59.090   18:50:30  -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']'
00:05:59.090   18:50:30  -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h'
00:05:59.090   18:50:30  -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps
00:05:59.090   18:50:30  -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t'
00:05:59.090   18:50:30  -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps
00:05:59.090   18:50:30  -- spdk/autotest.sh@44 -- # modprobe nbd
00:05:59.090    18:50:30  -- spdk/autotest.sh@46 -- # type -P udevadm
00:05:59.090   18:50:30  -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm
00:05:59.090   18:50:30  -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property
00:05:59.090   18:50:30  -- spdk/autotest.sh@48 -- # udevadm_pid=71015
00:05:59.091   18:50:30  -- spdk/autotest.sh@53 -- # start_monitor_resources
00:05:59.091   18:50:30  -- pm/common@17 -- # local monitor
00:05:59.091   18:50:30  -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:05:59.091   18:50:30  -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:05:59.091    18:50:30  -- pm/common@21 -- # date +%s
00:05:59.091   18:50:30  -- pm/common@25 -- # sleep 1
00:05:59.091    18:50:30  -- pm/common@21 -- # date +%s
00:05:59.091   18:50:30  -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1734115830
00:05:59.091   18:50:30  -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1734115830
00:05:59.091  Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1734115830_collect-cpu-load.pm.log
00:05:59.091  Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1734115830_collect-vmstat.pm.log
00:06:00.027   18:50:31  -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT
00:06:00.027   18:50:31  -- spdk/autotest.sh@57 -- # timing_enter autotest
00:06:00.027   18:50:31  -- common/autotest_common.sh@726 -- # xtrace_disable
00:06:00.027   18:50:31  -- common/autotest_common.sh@10 -- # set +x
00:06:00.027   18:50:31  -- spdk/autotest.sh@59 -- # create_test_list
00:06:00.027   18:50:31  -- common/autotest_common.sh@752 -- # xtrace_disable
00:06:00.027   18:50:31  -- common/autotest_common.sh@10 -- # set +x
00:06:00.027     18:50:31  -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh
00:06:00.027    18:50:31  -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk
00:06:00.027   18:50:31  -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk
00:06:00.027   18:50:31  -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output
00:06:00.027   18:50:31  -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk
00:06:00.027   18:50:31  -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod
00:06:00.027    18:50:31  -- common/autotest_common.sh@1457 -- # uname
00:06:00.027   18:50:31  -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']'
00:06:00.027   18:50:31  -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf
00:06:00.027    18:50:31  -- common/autotest_common.sh@1477 -- # uname
00:06:00.027   18:50:31  -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]]
00:06:00.027   18:50:31  -- spdk/autotest.sh@68 -- # [[ y == y ]]
00:06:00.027   18:50:31  -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version
00:06:00.285  lcov: LCOV version 1.15
00:06:00.285   18:50:31  -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info
00:06:15.170  /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found
00:06:15.170  geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno
00:06:30.048   18:51:00  -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup
00:06:30.048   18:51:00  -- common/autotest_common.sh@726 -- # xtrace_disable
00:06:30.048   18:51:00  -- common/autotest_common.sh@10 -- # set +x
00:06:30.048   18:51:00  -- spdk/autotest.sh@78 -- # rm -f
00:06:30.048   18:51:00  -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:06:30.048  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:06:30.048  0000:00:11.0 (1b36 0010): Already using the nvme driver
00:06:30.048  0000:00:10.0 (1b36 0010): Already using the nvme driver
00:06:30.048   18:51:01  -- spdk/autotest.sh@83 -- # get_zoned_devs
00:06:30.048   18:51:01  -- common/autotest_common.sh@1657 -- # zoned_devs=()
00:06:30.048   18:51:01  -- common/autotest_common.sh@1657 -- # local -gA zoned_devs
00:06:30.048   18:51:01  -- common/autotest_common.sh@1658 -- # zoned_ctrls=()
00:06:30.048   18:51:01  -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls
00:06:30.048   18:51:01  -- common/autotest_common.sh@1659 -- # local nvme bdf ns
00:06:30.048   18:51:01  -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme*
00:06:30.048   18:51:01  -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0
00:06:30.048   18:51:01  -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n*
00:06:30.048   18:51:01  -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1
00:06:30.048   18:51:01  -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:06:30.048   18:51:01  -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:06:30.048   18:51:01  -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:06:30.048   18:51:01  -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme*
00:06:30.048   18:51:01  -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0
00:06:30.048   18:51:01  -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n*
00:06:30.048   18:51:01  -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1
00:06:30.048   18:51:01  -- common/autotest_common.sh@1650 -- # local device=nvme1n1
00:06:30.048   18:51:01  -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]]
00:06:30.048   18:51:01  -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:06:30.048   18:51:01  -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n*
00:06:30.048   18:51:01  -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2
00:06:30.048   18:51:01  -- common/autotest_common.sh@1650 -- # local device=nvme1n2
00:06:30.048   18:51:01  -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]]
00:06:30.048   18:51:01  -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:06:30.048   18:51:01  -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n*
00:06:30.048   18:51:01  -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n3
00:06:30.048   18:51:01  -- common/autotest_common.sh@1650 -- # local device=nvme1n3
00:06:30.048   18:51:01  -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]]
00:06:30.048   18:51:01  -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:06:30.048   18:51:01  -- spdk/autotest.sh@85 -- # (( 0 > 0 ))
00:06:30.048   18:51:01  -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:06:30.048   18:51:01  -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:06:30.048   18:51:01  -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1
00:06:30.048   18:51:01  -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt
00:06:30.048   18:51:01  -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:06:30.048  No valid GPT data, bailing
00:06:30.048    18:51:01  -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:06:30.048   18:51:01  -- scripts/common.sh@394 -- # pt=
00:06:30.048   18:51:01  -- scripts/common.sh@395 -- # return 1
00:06:30.048   18:51:01  -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:06:30.048  1+0 records in
00:06:30.048  1+0 records out
00:06:30.048  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00451562 s, 232 MB/s
00:06:30.048   18:51:01  -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:06:30.048   18:51:01  -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:06:30.048   18:51:01  -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1
00:06:30.048   18:51:01  -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt
00:06:30.048   18:51:01  -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1
00:06:30.048  No valid GPT data, bailing
00:06:30.048    18:51:01  -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1
00:06:30.048   18:51:01  -- scripts/common.sh@394 -- # pt=
00:06:30.048   18:51:01  -- scripts/common.sh@395 -- # return 1
00:06:30.048   18:51:01  -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1
00:06:30.048  1+0 records in
00:06:30.048  1+0 records out
00:06:30.048  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00481488 s, 218 MB/s
00:06:30.048   18:51:01  -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:06:30.048   18:51:01  -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:06:30.048   18:51:01  -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2
00:06:30.048   18:51:01  -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt
00:06:30.048   18:51:01  -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2
00:06:30.048  No valid GPT data, bailing
00:06:30.048    18:51:01  -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2
00:06:30.048   18:51:01  -- scripts/common.sh@394 -- # pt=
00:06:30.048   18:51:01  -- scripts/common.sh@395 -- # return 1
00:06:30.048   18:51:01  -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1
00:06:30.048  1+0 records in
00:06:30.048  1+0 records out
00:06:30.048  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00513535 s, 204 MB/s
00:06:30.048   18:51:01  -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:06:30.048   18:51:01  -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:06:30.048   18:51:01  -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3
00:06:30.048   18:51:01  -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt
00:06:30.048   18:51:01  -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3
00:06:30.048  No valid GPT data, bailing
00:06:30.048    18:51:01  -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3
00:06:30.048   18:51:01  -- scripts/common.sh@394 -- # pt=
00:06:30.048   18:51:01  -- scripts/common.sh@395 -- # return 1
00:06:30.048   18:51:01  -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1
00:06:30.048  1+0 records in
00:06:30.048  1+0 records out
00:06:30.048  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00462006 s, 227 MB/s
00:06:30.048   18:51:01  -- spdk/autotest.sh@105 -- # sync
00:06:30.048   18:51:01  -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes
00:06:30.048   18:51:01  -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:06:30.048    18:51:01  -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:06:32.582    18:51:03  -- spdk/autotest.sh@111 -- # uname -s
00:06:32.582   18:51:03  -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]]
00:06:32.582   18:51:03  -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]]
00:06:32.582   18:51:03  -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:06:32.840  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:06:32.840  Hugepages
00:06:32.840  node     hugesize     free /  total
00:06:32.840  node0   1048576kB        0 /      0
00:06:32.840  node0      2048kB        0 /      0
00:06:32.840  
00:06:32.840  Type                      BDF             Vendor Device NUMA    Driver           Device     Block devices
00:06:32.840  virtio                    0000:00:03.0    1af4   1001   unknown virtio-pci       -          vda
00:06:33.099  NVMe                      0000:00:10.0    1b36   0010   unknown nvme             nvme0      nvme0n1
00:06:33.099  NVMe                      0000:00:11.0    1b36   0010   unknown nvme             nvme1      nvme1n1 nvme1n2 nvme1n3
00:06:33.099    18:51:04  -- spdk/autotest.sh@117 -- # uname -s
00:06:33.099   18:51:04  -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:06:33.099   18:51:04  -- spdk/autotest.sh@119 -- # nvme_namespace_revert
00:06:33.099   18:51:04  -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:06:33.666  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:06:33.933  0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:06:33.933  0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:06:33.933   18:51:05  -- common/autotest_common.sh@1517 -- # sleep 1
00:06:34.881   18:51:06  -- common/autotest_common.sh@1518 -- # bdfs=()
00:06:34.881   18:51:06  -- common/autotest_common.sh@1518 -- # local bdfs
00:06:34.881   18:51:06  -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs))
00:06:34.881    18:51:06  -- common/autotest_common.sh@1520 -- # get_nvme_bdfs
00:06:34.881    18:51:06  -- common/autotest_common.sh@1498 -- # bdfs=()
00:06:34.881    18:51:06  -- common/autotest_common.sh@1498 -- # local bdfs
00:06:34.881    18:51:06  -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:06:35.140     18:51:06  -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:06:35.140     18:51:06  -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:06:35.140    18:51:06  -- common/autotest_common.sh@1500 -- # (( 2 == 0 ))
00:06:35.140    18:51:06  -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0
00:06:35.140   18:51:06  -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:06:35.398  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:06:35.398  Waiting for block devices as requested
00:06:35.398  0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:06:35.656  0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:06:35.656   18:51:07  -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}"
00:06:35.656    18:51:07  -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0
00:06:35.656     18:51:07  -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1
00:06:35.656     18:51:07  -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme
00:06:35.656    18:51:07  -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1
00:06:35.656    18:51:07  -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]]
00:06:35.656     18:51:07  -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1
00:06:35.656    18:51:07  -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1
00:06:35.656   18:51:07  -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1
00:06:35.656   18:51:07  -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]]
00:06:35.656    18:51:07  -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1
00:06:35.656    18:51:07  -- common/autotest_common.sh@1531 -- # grep oacs
00:06:35.656    18:51:07  -- common/autotest_common.sh@1531 -- # cut -d: -f2
00:06:35.656   18:51:07  -- common/autotest_common.sh@1531 -- # oacs=' 0x12a'
00:06:35.656   18:51:07  -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8
00:06:35.656   18:51:07  -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]]
00:06:35.656    18:51:07  -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1
00:06:35.656    18:51:07  -- common/autotest_common.sh@1540 -- # grep unvmcap
00:06:35.656    18:51:07  -- common/autotest_common.sh@1540 -- # cut -d: -f2
00:06:35.656   18:51:07  -- common/autotest_common.sh@1540 -- # unvmcap=' 0'
00:06:35.656   18:51:07  -- common/autotest_common.sh@1541 -- # [[  0 -eq 0 ]]
00:06:35.656   18:51:07  -- common/autotest_common.sh@1543 -- # continue
00:06:35.656   18:51:07  -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}"
00:06:35.656    18:51:07  -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0
00:06:35.656     18:51:07  -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme
00:06:35.656     18:51:07  -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1
00:06:35.656    18:51:07  -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0
00:06:35.656    18:51:07  -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]]
00:06:35.656     18:51:07  -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0
00:06:35.656    18:51:07  -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0
00:06:35.657   18:51:07  -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0
00:06:35.657   18:51:07  -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]]
00:06:35.657    18:51:07  -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0
00:06:35.657    18:51:07  -- common/autotest_common.sh@1531 -- # grep oacs
00:06:35.657    18:51:07  -- common/autotest_common.sh@1531 -- # cut -d: -f2
00:06:35.657   18:51:07  -- common/autotest_common.sh@1531 -- # oacs=' 0x12a'
00:06:35.657   18:51:07  -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8
00:06:35.657   18:51:07  -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]]
00:06:35.657    18:51:07  -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0
00:06:35.657    18:51:07  -- common/autotest_common.sh@1540 -- # grep unvmcap
00:06:35.657    18:51:07  -- common/autotest_common.sh@1540 -- # cut -d: -f2
00:06:35.657   18:51:07  -- common/autotest_common.sh@1540 -- # unvmcap=' 0'
00:06:35.657   18:51:07  -- common/autotest_common.sh@1541 -- # [[  0 -eq 0 ]]
00:06:35.657   18:51:07  -- common/autotest_common.sh@1543 -- # continue
00:06:35.657   18:51:07  -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup
00:06:35.657   18:51:07  -- common/autotest_common.sh@732 -- # xtrace_disable
00:06:35.657   18:51:07  -- common/autotest_common.sh@10 -- # set +x
00:06:35.657   18:51:07  -- spdk/autotest.sh@125 -- # timing_enter afterboot
00:06:35.657   18:51:07  -- common/autotest_common.sh@726 -- # xtrace_disable
00:06:35.657   18:51:07  -- common/autotest_common.sh@10 -- # set +x
00:06:35.657   18:51:07  -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:06:36.593  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:06:36.593  0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:06:36.593  0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:06:36.593   18:51:08  -- spdk/autotest.sh@127 -- # timing_exit afterboot
00:06:36.593   18:51:08  -- common/autotest_common.sh@732 -- # xtrace_disable
00:06:36.593   18:51:08  -- common/autotest_common.sh@10 -- # set +x
00:06:36.593   18:51:08  -- spdk/autotest.sh@131 -- # opal_revert_cleanup
00:06:36.593   18:51:08  -- common/autotest_common.sh@1578 -- # mapfile -t bdfs
00:06:36.593    18:51:08  -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54
00:06:36.593    18:51:08  -- common/autotest_common.sh@1563 -- # bdfs=()
00:06:36.593    18:51:08  -- common/autotest_common.sh@1563 -- # _bdfs=()
00:06:36.593    18:51:08  -- common/autotest_common.sh@1563 -- # local bdfs _bdfs
00:06:36.593    18:51:08  -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs))
00:06:36.593     18:51:08  -- common/autotest_common.sh@1564 -- # get_nvme_bdfs
00:06:36.593     18:51:08  -- common/autotest_common.sh@1498 -- # bdfs=()
00:06:36.593     18:51:08  -- common/autotest_common.sh@1498 -- # local bdfs
00:06:36.593     18:51:08  -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:06:36.593      18:51:08  -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:06:36.593      18:51:08  -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:06:36.852     18:51:08  -- common/autotest_common.sh@1500 -- # (( 2 == 0 ))
00:06:36.852     18:51:08  -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0
00:06:36.852    18:51:08  -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}"
00:06:36.852     18:51:08  -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device
00:06:36.852    18:51:08  -- common/autotest_common.sh@1566 -- # device=0x0010
00:06:36.852    18:51:08  -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]]
00:06:36.852    18:51:08  -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}"
00:06:36.852     18:51:08  -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device
00:06:36.852    18:51:08  -- common/autotest_common.sh@1566 -- # device=0x0010
00:06:36.852    18:51:08  -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]]
00:06:36.852    18:51:08  -- common/autotest_common.sh@1572 -- # (( 0 > 0 ))
00:06:36.852    18:51:08  -- common/autotest_common.sh@1572 -- # return 0
00:06:36.852   18:51:08  -- common/autotest_common.sh@1579 -- # [[ -z '' ]]
00:06:36.852   18:51:08  -- common/autotest_common.sh@1580 -- # return 0
00:06:36.852   18:51:08  -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']'
00:06:36.852   18:51:08  -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']'
00:06:36.852   18:51:08  -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:06:36.852   18:51:08  -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:06:36.852   18:51:08  -- spdk/autotest.sh@149 -- # timing_enter lib
00:06:36.852   18:51:08  -- common/autotest_common.sh@726 -- # xtrace_disable
00:06:36.852   18:51:08  -- common/autotest_common.sh@10 -- # set +x
00:06:36.852   18:51:08  -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]]
00:06:36.852   18:51:08  -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh
00:06:36.852   18:51:08  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:36.852   18:51:08  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:36.852   18:51:08  -- common/autotest_common.sh@10 -- # set +x
00:06:36.852  ************************************
00:06:36.852  START TEST env
00:06:36.852  ************************************
00:06:36.852   18:51:08 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh
00:06:36.852  * Looking for test storage...
00:06:36.852  * Found test storage at /home/vagrant/spdk_repo/spdk/test/env
00:06:36.852    18:51:08 env -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:06:36.852     18:51:08 env -- common/autotest_common.sh@1711 -- # lcov --version
00:06:36.852     18:51:08 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:06:36.852    18:51:08 env -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:06:36.852    18:51:08 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:36.852    18:51:08 env -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:36.852    18:51:08 env -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:36.852    18:51:08 env -- scripts/common.sh@336 -- # IFS=.-:
00:06:36.852    18:51:08 env -- scripts/common.sh@336 -- # read -ra ver1
00:06:36.852    18:51:08 env -- scripts/common.sh@337 -- # IFS=.-:
00:06:36.852    18:51:08 env -- scripts/common.sh@337 -- # read -ra ver2
00:06:36.852    18:51:08 env -- scripts/common.sh@338 -- # local 'op=<'
00:06:36.852    18:51:08 env -- scripts/common.sh@340 -- # ver1_l=2
00:06:36.852    18:51:08 env -- scripts/common.sh@341 -- # ver2_l=1
00:06:36.852    18:51:08 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:36.853    18:51:08 env -- scripts/common.sh@344 -- # case "$op" in
00:06:36.853    18:51:08 env -- scripts/common.sh@345 -- # : 1
00:06:36.853    18:51:08 env -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:36.853    18:51:08 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:36.853     18:51:08 env -- scripts/common.sh@365 -- # decimal 1
00:06:36.853     18:51:08 env -- scripts/common.sh@353 -- # local d=1
00:06:36.853     18:51:08 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:36.853     18:51:08 env -- scripts/common.sh@355 -- # echo 1
00:06:36.853    18:51:08 env -- scripts/common.sh@365 -- # ver1[v]=1
00:06:36.853     18:51:08 env -- scripts/common.sh@366 -- # decimal 2
00:06:36.853     18:51:08 env -- scripts/common.sh@353 -- # local d=2
00:06:36.853     18:51:08 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:36.853     18:51:08 env -- scripts/common.sh@355 -- # echo 2
00:06:36.853    18:51:08 env -- scripts/common.sh@366 -- # ver2[v]=2
00:06:36.853    18:51:08 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:36.853    18:51:08 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:36.853    18:51:08 env -- scripts/common.sh@368 -- # return 0
00:06:36.853    18:51:08 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:36.853    18:51:08 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:06:36.853  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:36.853  		--rc genhtml_branch_coverage=1
00:06:36.853  		--rc genhtml_function_coverage=1
00:06:36.853  		--rc genhtml_legend=1
00:06:36.853  		--rc geninfo_all_blocks=1
00:06:36.853  		--rc geninfo_unexecuted_blocks=1
00:06:36.853  		
00:06:36.853  		'
00:06:36.853    18:51:08 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:06:36.853  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:36.853  		--rc genhtml_branch_coverage=1
00:06:36.853  		--rc genhtml_function_coverage=1
00:06:36.853  		--rc genhtml_legend=1
00:06:36.853  		--rc geninfo_all_blocks=1
00:06:36.853  		--rc geninfo_unexecuted_blocks=1
00:06:36.853  		
00:06:36.853  		'
00:06:36.853    18:51:08 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:06:36.853  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:36.853  		--rc genhtml_branch_coverage=1
00:06:36.853  		--rc genhtml_function_coverage=1
00:06:36.853  		--rc genhtml_legend=1
00:06:36.853  		--rc geninfo_all_blocks=1
00:06:36.853  		--rc geninfo_unexecuted_blocks=1
00:06:36.853  		
00:06:36.853  		'
00:06:36.853    18:51:08 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:06:36.853  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:36.853  		--rc genhtml_branch_coverage=1
00:06:36.853  		--rc genhtml_function_coverage=1
00:06:36.853  		--rc genhtml_legend=1
00:06:36.853  		--rc geninfo_all_blocks=1
00:06:36.853  		--rc geninfo_unexecuted_blocks=1
00:06:36.853  		
00:06:36.853  		'
00:06:36.853   18:51:08 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut
00:06:36.853   18:51:08 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:36.853   18:51:08 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:36.853   18:51:08 env -- common/autotest_common.sh@10 -- # set +x
00:06:36.853  ************************************
00:06:36.853  START TEST env_memory
00:06:36.853  ************************************
00:06:36.853   18:51:08 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut
00:06:37.112  
00:06:37.112  
00:06:37.112       CUnit - A unit testing framework for C - Version 2.1-3
00:06:37.112       http://cunit.sourceforge.net/
00:06:37.112  
00:06:37.112  
00:06:37.112  Suite: memory
00:06:37.112    Test: alloc and free memory map ...[2024-12-13 18:51:08.713888] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed
00:06:37.112  passed
00:06:37.112    Test: mem map translation ...[2024-12-13 18:51:08.745375] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234
00:06:37.112  [2024-12-13 18:51:08.745420] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152
00:06:37.112  [2024-12-13 18:51:08.745476] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656
00:06:37.112  [2024-12-13 18:51:08.745487] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map
00:06:37.112  passed
00:06:37.112    Test: mem map registration ...[2024-12-13 18:51:08.809097] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234
00:06:37.112  [2024-12-13 18:51:08.809127] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152
00:06:37.112  passed
00:06:37.112    Test: mem map adjacent registrations ...passed
00:06:37.112  
00:06:37.112  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:37.112                suites      1      1    n/a      0        0
00:06:37.112                 tests      4      4      4      0        0
00:06:37.112               asserts    152    152    152      0      n/a
00:06:37.112  
00:06:37.112  Elapsed time =    0.213 seconds
00:06:37.112  
00:06:37.112  real	0m0.232s
00:06:37.112  user	0m0.214s
00:06:37.112  sys	0m0.014s
00:06:37.112   18:51:08 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:37.112   18:51:08 env.env_memory -- common/autotest_common.sh@10 -- # set +x
00:06:37.112  ************************************
00:06:37.112  END TEST env_memory
00:06:37.112  ************************************
00:06:37.371   18:51:08 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys
00:06:37.371   18:51:08 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:37.371   18:51:08 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:37.371   18:51:08 env -- common/autotest_common.sh@10 -- # set +x
00:06:37.371  ************************************
00:06:37.371  START TEST env_vtophys
00:06:37.371  ************************************
00:06:37.371   18:51:08 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys
00:06:37.371  EAL: lib.eal log level changed from notice to debug
00:06:37.371  EAL: Detected lcore 0 as core 0 on socket 0
00:06:37.371  EAL: Detected lcore 1 as core 0 on socket 0
00:06:37.371  EAL: Detected lcore 2 as core 0 on socket 0
00:06:37.371  EAL: Detected lcore 3 as core 0 on socket 0
00:06:37.371  EAL: Detected lcore 4 as core 0 on socket 0
00:06:37.371  EAL: Detected lcore 5 as core 0 on socket 0
00:06:37.371  EAL: Detected lcore 6 as core 0 on socket 0
00:06:37.371  EAL: Detected lcore 7 as core 0 on socket 0
00:06:37.371  EAL: Detected lcore 8 as core 0 on socket 0
00:06:37.371  EAL: Detected lcore 9 as core 0 on socket 0
00:06:37.371  EAL: Maximum logical cores by configuration: 128
00:06:37.371  EAL: Detected CPU lcores: 10
00:06:37.371  EAL: Detected NUMA nodes: 1
00:06:37.371  EAL: Checking presence of .so 'librte_eal.so.24.0'
00:06:37.371  EAL: Detected shared linkage of DPDK
00:06:37.371  EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0
00:06:37.371  EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0
00:06:37.371  EAL: Registered [vdev] bus.
00:06:37.371  EAL: bus.vdev log level changed from disabled to notice
00:06:37.371  EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0
00:06:37.371  EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0
00:06:37.371  EAL: pmd.net.i40e.init log level changed from disabled to notice
00:06:37.371  EAL: pmd.net.i40e.driver log level changed from disabled to notice
00:06:37.371  EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so
00:06:37.371  EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so
00:06:37.371  EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so
00:06:37.371  EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so
00:06:37.371  EAL: No shared files mode enabled, IPC will be disabled
00:06:37.371  EAL: No shared files mode enabled, IPC is disabled
00:06:37.371  EAL: Selected IOVA mode 'PA'
00:06:37.371  EAL: Probing VFIO support...
00:06:37.371  EAL: Module /sys/module/vfio not found! error 2 (No such file or directory)
00:06:37.371  EAL: VFIO modules not loaded, skipping VFIO support...
00:06:37.371  EAL: Ask a virtual area of 0x2e000 bytes
00:06:37.371  EAL: Virtual area found at 0x200000000000 (size = 0x2e000)
00:06:37.371  EAL: Setting up physically contiguous memory...
00:06:37.371  EAL: Setting maximum number of open files to 524288
00:06:37.371  EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:06:37.371  EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:06:37.371  EAL: Ask a virtual area of 0x61000 bytes
00:06:37.371  EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:06:37.371  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:06:37.371  EAL: Ask a virtual area of 0x400000000 bytes
00:06:37.371  EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:06:37.371  EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:06:37.371  EAL: Ask a virtual area of 0x61000 bytes
00:06:37.371  EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:06:37.371  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:06:37.371  EAL: Ask a virtual area of 0x400000000 bytes
00:06:37.371  EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:06:37.371  EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:06:37.371  EAL: Ask a virtual area of 0x61000 bytes
00:06:37.371  EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:06:37.371  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:06:37.371  EAL: Ask a virtual area of 0x400000000 bytes
00:06:37.371  EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:06:37.371  EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:06:37.371  EAL: Ask a virtual area of 0x61000 bytes
00:06:37.371  EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:06:37.371  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:06:37.371  EAL: Ask a virtual area of 0x400000000 bytes
00:06:37.371  EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:06:37.371  EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:06:37.371  EAL: Hugepages will be freed exactly as allocated.
00:06:37.371  EAL: No shared files mode enabled, IPC is disabled
00:06:37.371  EAL: No shared files mode enabled, IPC is disabled
00:06:37.371  EAL: TSC frequency is ~2200000 KHz
00:06:37.371  EAL: Main lcore 0 is ready (tid=7f28a2647a00;cpuset=[0])
00:06:37.371  EAL: Trying to obtain current memory policy.
00:06:37.371  EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:37.371  EAL: Restoring previous memory policy: 0
00:06:37.371  EAL: request: mp_malloc_sync
00:06:37.371  EAL: No shared files mode enabled, IPC is disabled
00:06:37.371  EAL: Heap on socket 0 was expanded by 2MB
00:06:37.371  EAL: Module /sys/module/vfio not found! error 2 (No such file or directory)
00:06:37.371  EAL: No shared files mode enabled, IPC is disabled
00:06:37.371  EAL: No PCI address specified using 'addr=<id>' in: bus=pci
00:06:37.371  EAL: Mem event callback 'spdk:(nil)' registered
00:06:37.371  EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory)
00:06:37.371  
00:06:37.371  
00:06:37.371       CUnit - A unit testing framework for C - Version 2.1-3
00:06:37.371       http://cunit.sourceforge.net/
00:06:37.371  
00:06:37.371  
00:06:37.371  Suite: components_suite
00:06:37.371    Test: vtophys_malloc_test ...passed
00:06:37.371    Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:06:37.371  EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:37.372  EAL: Restoring previous memory policy: 4
00:06:37.372  EAL: Calling mem event callback 'spdk:(nil)'
00:06:37.372  EAL: request: mp_malloc_sync
00:06:37.372  EAL: No shared files mode enabled, IPC is disabled
00:06:37.372  EAL: Heap on socket 0 was expanded by 4MB
00:06:37.372  EAL: Calling mem event callback 'spdk:(nil)'
00:06:37.372  EAL: request: mp_malloc_sync
00:06:37.372  EAL: No shared files mode enabled, IPC is disabled
00:06:37.372  EAL: Heap on socket 0 was shrunk by 4MB
00:06:37.372  EAL: Trying to obtain current memory policy.
00:06:37.372  EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:37.372  EAL: Restoring previous memory policy: 4
00:06:37.372  EAL: Calling mem event callback 'spdk:(nil)'
00:06:37.372  EAL: request: mp_malloc_sync
00:06:37.372  EAL: No shared files mode enabled, IPC is disabled
00:06:37.372  EAL: Heap on socket 0 was expanded by 6MB
00:06:37.372  EAL: Calling mem event callback 'spdk:(nil)'
00:06:37.372  EAL: request: mp_malloc_sync
00:06:37.372  EAL: No shared files mode enabled, IPC is disabled
00:06:37.372  EAL: Heap on socket 0 was shrunk by 6MB
00:06:37.372  EAL: Trying to obtain current memory policy.
00:06:37.372  EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:37.372  EAL: Restoring previous memory policy: 4
00:06:37.372  EAL: Calling mem event callback 'spdk:(nil)'
00:06:37.372  EAL: request: mp_malloc_sync
00:06:37.372  EAL: No shared files mode enabled, IPC is disabled
00:06:37.372  EAL: Heap on socket 0 was expanded by 10MB
00:06:37.372  EAL: Calling mem event callback 'spdk:(nil)'
00:06:37.372  EAL: request: mp_malloc_sync
00:06:37.372  EAL: No shared files mode enabled, IPC is disabled
00:06:37.372  EAL: Heap on socket 0 was shrunk by 10MB
00:06:37.372  EAL: Trying to obtain current memory policy.
00:06:37.372  EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:37.372  EAL: Restoring previous memory policy: 4
00:06:37.372  EAL: Calling mem event callback 'spdk:(nil)'
00:06:37.372  EAL: request: mp_malloc_sync
00:06:37.372  EAL: No shared files mode enabled, IPC is disabled
00:06:37.372  EAL: Heap on socket 0 was expanded by 18MB
00:06:37.372  EAL: Calling mem event callback 'spdk:(nil)'
00:06:37.372  EAL: request: mp_malloc_sync
00:06:37.372  EAL: No shared files mode enabled, IPC is disabled
00:06:37.372  EAL: Heap on socket 0 was shrunk by 18MB
00:06:37.372  EAL: Trying to obtain current memory policy.
00:06:37.372  EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:37.372  EAL: Restoring previous memory policy: 4
00:06:37.372  EAL: Calling mem event callback 'spdk:(nil)'
00:06:37.372  EAL: request: mp_malloc_sync
00:06:37.372  EAL: No shared files mode enabled, IPC is disabled
00:06:37.372  EAL: Heap on socket 0 was expanded by 34MB
00:06:37.372  EAL: Calling mem event callback 'spdk:(nil)'
00:06:37.372  EAL: request: mp_malloc_sync
00:06:37.372  EAL: No shared files mode enabled, IPC is disabled
00:06:37.372  EAL: Heap on socket 0 was shrunk by 34MB
00:06:37.372  EAL: Trying to obtain current memory policy.
00:06:37.372  EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:37.372  EAL: Restoring previous memory policy: 4
00:06:37.372  EAL: Calling mem event callback 'spdk:(nil)'
00:06:37.372  EAL: request: mp_malloc_sync
00:06:37.372  EAL: No shared files mode enabled, IPC is disabled
00:06:37.372  EAL: Heap on socket 0 was expanded by 66MB
00:06:37.372  EAL: Calling mem event callback 'spdk:(nil)'
00:06:37.372  EAL: request: mp_malloc_sync
00:06:37.372  EAL: No shared files mode enabled, IPC is disabled
00:06:37.372  EAL: Heap on socket 0 was shrunk by 66MB
00:06:37.372  EAL: Trying to obtain current memory policy.
00:06:37.372  EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:37.630  EAL: Restoring previous memory policy: 4
00:06:37.630  EAL: Calling mem event callback 'spdk:(nil)'
00:06:37.630  EAL: request: mp_malloc_sync
00:06:37.630  EAL: No shared files mode enabled, IPC is disabled
00:06:37.630  EAL: Heap on socket 0 was expanded by 130MB
00:06:37.630  EAL: Calling mem event callback 'spdk:(nil)'
00:06:37.630  EAL: request: mp_malloc_sync
00:06:37.630  EAL: No shared files mode enabled, IPC is disabled
00:06:37.630  EAL: Heap on socket 0 was shrunk by 130MB
00:06:37.630  EAL: Trying to obtain current memory policy.
00:06:37.630  EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:37.630  EAL: Restoring previous memory policy: 4
00:06:37.630  EAL: Calling mem event callback 'spdk:(nil)'
00:06:37.630  EAL: request: mp_malloc_sync
00:06:37.630  EAL: No shared files mode enabled, IPC is disabled
00:06:37.630  EAL: Heap on socket 0 was expanded by 258MB
00:06:37.630  EAL: Calling mem event callback 'spdk:(nil)'
00:06:37.630  EAL: request: mp_malloc_sync
00:06:37.630  EAL: No shared files mode enabled, IPC is disabled
00:06:37.630  EAL: Heap on socket 0 was shrunk by 258MB
00:06:37.630  EAL: Trying to obtain current memory policy.
00:06:37.631  EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:37.889  EAL: Restoring previous memory policy: 4
00:06:37.889  EAL: Calling mem event callback 'spdk:(nil)'
00:06:37.889  EAL: request: mp_malloc_sync
00:06:37.889  EAL: No shared files mode enabled, IPC is disabled
00:06:37.889  EAL: Heap on socket 0 was expanded by 514MB
00:06:37.889  EAL: Calling mem event callback 'spdk:(nil)'
00:06:38.147  EAL: request: mp_malloc_sync
00:06:38.147  EAL: No shared files mode enabled, IPC is disabled
00:06:38.147  EAL: Heap on socket 0 was shrunk by 514MB
00:06:38.147  EAL: Trying to obtain current memory policy.
00:06:38.147  EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:38.406  EAL: Restoring previous memory policy: 4
00:06:38.406  EAL: Calling mem event callback 'spdk:(nil)'
00:06:38.406  EAL: request: mp_malloc_sync
00:06:38.406  EAL: No shared files mode enabled, IPC is disabled
00:06:38.406  EAL: Heap on socket 0 was expanded by 1026MB
00:06:38.406  EAL: Calling mem event callback 'spdk:(nil)'
00:06:38.664  passed
00:06:38.665  
00:06:38.665  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:38.665                suites      1      1    n/a      0        0
00:06:38.665                 tests      2      2      2      0        0
00:06:38.665               asserts   5183   5183   5183      0      n/a
00:06:38.665  
00:06:38.665  Elapsed time =    1.209 seconds
00:06:38.665  EAL: request: mp_malloc_sync
00:06:38.665  EAL: No shared files mode enabled, IPC is disabled
00:06:38.665  EAL: Heap on socket 0 was shrunk by 1026MB
00:06:38.665  EAL: Calling mem event callback 'spdk:(nil)'
00:06:38.665  EAL: request: mp_malloc_sync
00:06:38.665  EAL: No shared files mode enabled, IPC is disabled
00:06:38.665  EAL: Heap on socket 0 was shrunk by 2MB
00:06:38.665  EAL: No shared files mode enabled, IPC is disabled
00:06:38.665  EAL: No shared files mode enabled, IPC is disabled
00:06:38.665  EAL: No shared files mode enabled, IPC is disabled
00:06:38.665  
00:06:38.665  real	0m1.420s
00:06:38.665  user	0m0.786s
00:06:38.665  sys	0m0.499s
00:06:38.665   18:51:10 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:38.665   18:51:10 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:06:38.665  ************************************
00:06:38.665  END TEST env_vtophys
00:06:38.665  ************************************
00:06:38.665   18:51:10 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:06:38.665   18:51:10 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:38.665   18:51:10 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:38.665   18:51:10 env -- common/autotest_common.sh@10 -- # set +x
00:06:38.665  ************************************
00:06:38.665  START TEST env_pci
00:06:38.665  ************************************
00:06:38.665   18:51:10 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:06:38.665  
00:06:38.665  
00:06:38.665       CUnit - A unit testing framework for C - Version 2.1-3
00:06:38.665       http://cunit.sourceforge.net/
00:06:38.665  
00:06:38.665  
00:06:38.665  Suite: pci
00:06:38.665    Test: pci_hook ...[2024-12-13 18:51:10.437056] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 73232 has claimed it
00:06:38.665  passed
00:06:38.665  
00:06:38.665  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:38.665                suites      1      1    n/a      0        0
00:06:38.665                 tests      1      1      1      0        0
00:06:38.665               asserts     25     25     25      0      n/a
00:06:38.665  
00:06:38.665  Elapsed time =    0.002 seconds
00:06:38.665  EAL: Cannot find device (10000:00:01.0)
00:06:38.665  EAL: Failed to attach device on primary process
00:06:38.665  
00:06:38.665  real	0m0.020s
00:06:38.665  user	0m0.008s
00:06:38.665  sys	0m0.011s
00:06:38.665   18:51:10 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:38.665   18:51:10 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:06:38.665  ************************************
00:06:38.665  END TEST env_pci
00:06:38.665  ************************************
00:06:38.665   18:51:10 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:06:38.923    18:51:10 env -- env/env.sh@15 -- # uname
00:06:38.923   18:51:10 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:06:38.923   18:51:10 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:06:38.923   18:51:10 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:06:38.923   18:51:10 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:06:38.923   18:51:10 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:38.923   18:51:10 env -- common/autotest_common.sh@10 -- # set +x
00:06:38.923  ************************************
00:06:38.923  START TEST env_dpdk_post_init
00:06:38.923  ************************************
00:06:38.923   18:51:10 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:06:38.923  EAL: Detected CPU lcores: 10
00:06:38.923  EAL: Detected NUMA nodes: 1
00:06:38.923  EAL: Detected shared linkage of DPDK
00:06:38.923  EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:06:38.923  EAL: Selected IOVA mode 'PA'
00:06:38.923  TELEMETRY: No legacy callbacks, legacy socket not created
00:06:38.923  EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1)
00:06:38.923  EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1)
00:06:38.923  Starting DPDK initialization...
00:06:38.923  Starting SPDK post initialization...
00:06:38.923  SPDK NVMe probe
00:06:38.923  Attaching to 0000:00:10.0
00:06:38.923  Attaching to 0000:00:11.0
00:06:38.923  Attached to 0000:00:10.0
00:06:38.923  Attached to 0000:00:11.0
00:06:38.923  Cleaning up...
00:06:38.923  
00:06:38.923  real	0m0.183s
00:06:38.923  user	0m0.052s
00:06:38.923  sys	0m0.032s
00:06:38.923   18:51:10 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:38.923  ************************************
00:06:38.923  END TEST env_dpdk_post_init
00:06:38.923  ************************************
00:06:38.923   18:51:10 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:06:38.923    18:51:10 env -- env/env.sh@26 -- # uname
00:06:38.923   18:51:10 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:06:38.924   18:51:10 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:06:38.924   18:51:10 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:38.924   18:51:10 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:38.924   18:51:10 env -- common/autotest_common.sh@10 -- # set +x
00:06:38.924  ************************************
00:06:38.924  START TEST env_mem_callbacks
00:06:38.924  ************************************
00:06:38.924   18:51:10 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:06:39.182  EAL: Detected CPU lcores: 10
00:06:39.182  EAL: Detected NUMA nodes: 1
00:06:39.182  EAL: Detected shared linkage of DPDK
00:06:39.182  EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:06:39.182  EAL: Selected IOVA mode 'PA'
00:06:39.182  TELEMETRY: No legacy callbacks, legacy socket not created
00:06:39.182  
00:06:39.182  
00:06:39.182       CUnit - A unit testing framework for C - Version 2.1-3
00:06:39.182       http://cunit.sourceforge.net/
00:06:39.182  
00:06:39.182  
00:06:39.182  Suite: memory
00:06:39.182    Test: test ...
00:06:39.182  register 0x200000200000 2097152
00:06:39.182  malloc 3145728
00:06:39.182  register 0x200000400000 4194304
00:06:39.182  buf 0x200000500000 len 3145728 PASSED
00:06:39.182  malloc 64
00:06:39.182  buf 0x2000004fff40 len 64 PASSED
00:06:39.182  malloc 4194304
00:06:39.182  register 0x200000800000 6291456
00:06:39.182  buf 0x200000a00000 len 4194304 PASSED
00:06:39.182  free 0x200000500000 3145728
00:06:39.182  free 0x2000004fff40 64
00:06:39.182  unregister 0x200000400000 4194304 PASSED
00:06:39.182  free 0x200000a00000 4194304
00:06:39.182  unregister 0x200000800000 6291456 PASSED
00:06:39.182  malloc 8388608
00:06:39.182  register 0x200000400000 10485760
00:06:39.182  buf 0x200000600000 len 8388608 PASSED
00:06:39.182  free 0x200000600000 8388608
00:06:39.182  unregister 0x200000400000 10485760 PASSED
00:06:39.182  passed
00:06:39.182  
00:06:39.182  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:39.182                suites      1      1    n/a      0        0
00:06:39.182                 tests      1      1      1      0        0
00:06:39.182               asserts     15     15     15      0      n/a
00:06:39.182  
00:06:39.182  Elapsed time =    0.008 seconds
00:06:39.182  
00:06:39.182  real	0m0.143s
00:06:39.182  user	0m0.018s
00:06:39.182  sys	0m0.024s
00:06:39.182   18:51:10 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:39.182   18:51:10 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:06:39.182  ************************************
00:06:39.182  END TEST env_mem_callbacks
00:06:39.182  ************************************
00:06:39.182  
00:06:39.182  real	0m2.456s
00:06:39.182  user	0m1.279s
00:06:39.182  sys	0m0.827s
00:06:39.182   18:51:10 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:39.182  ************************************
00:06:39.182  END TEST env
00:06:39.182  ************************************
00:06:39.182   18:51:10 env -- common/autotest_common.sh@10 -- # set +x
00:06:39.182   18:51:10  -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:06:39.182   18:51:10  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:39.182   18:51:10  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:39.182   18:51:10  -- common/autotest_common.sh@10 -- # set +x
00:06:39.182  ************************************
00:06:39.182  START TEST rpc
00:06:39.182  ************************************
00:06:39.182   18:51:10 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:06:39.441  * Looking for test storage...
00:06:39.441  * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc
00:06:39.441    18:51:11 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:06:39.441     18:51:11 rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:06:39.441     18:51:11 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:06:39.441    18:51:11 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:06:39.441    18:51:11 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:39.441    18:51:11 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:39.441    18:51:11 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:39.441    18:51:11 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:06:39.441    18:51:11 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:06:39.441    18:51:11 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:06:39.441    18:51:11 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:06:39.441    18:51:11 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:06:39.441    18:51:11 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:06:39.441    18:51:11 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:06:39.441    18:51:11 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:39.441    18:51:11 rpc -- scripts/common.sh@344 -- # case "$op" in
00:06:39.441    18:51:11 rpc -- scripts/common.sh@345 -- # : 1
00:06:39.441    18:51:11 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:39.441    18:51:11 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:39.441     18:51:11 rpc -- scripts/common.sh@365 -- # decimal 1
00:06:39.441     18:51:11 rpc -- scripts/common.sh@353 -- # local d=1
00:06:39.441     18:51:11 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:39.441     18:51:11 rpc -- scripts/common.sh@355 -- # echo 1
00:06:39.441    18:51:11 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:06:39.441     18:51:11 rpc -- scripts/common.sh@366 -- # decimal 2
00:06:39.441     18:51:11 rpc -- scripts/common.sh@353 -- # local d=2
00:06:39.441     18:51:11 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:39.441     18:51:11 rpc -- scripts/common.sh@355 -- # echo 2
00:06:39.441    18:51:11 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:06:39.441    18:51:11 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:39.441    18:51:11 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:39.441    18:51:11 rpc -- scripts/common.sh@368 -- # return 0
00:06:39.441    18:51:11 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:39.441    18:51:11 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:06:39.441  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:39.441  		--rc genhtml_branch_coverage=1
00:06:39.441  		--rc genhtml_function_coverage=1
00:06:39.441  		--rc genhtml_legend=1
00:06:39.441  		--rc geninfo_all_blocks=1
00:06:39.441  		--rc geninfo_unexecuted_blocks=1
00:06:39.441  		
00:06:39.441  		'
00:06:39.441    18:51:11 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:06:39.441  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:39.441  		--rc genhtml_branch_coverage=1
00:06:39.441  		--rc genhtml_function_coverage=1
00:06:39.441  		--rc genhtml_legend=1
00:06:39.441  		--rc geninfo_all_blocks=1
00:06:39.441  		--rc geninfo_unexecuted_blocks=1
00:06:39.441  		
00:06:39.441  		'
00:06:39.441    18:51:11 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:06:39.441  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:39.441  		--rc genhtml_branch_coverage=1
00:06:39.441  		--rc genhtml_function_coverage=1
00:06:39.441  		--rc genhtml_legend=1
00:06:39.441  		--rc geninfo_all_blocks=1
00:06:39.441  		--rc geninfo_unexecuted_blocks=1
00:06:39.441  		
00:06:39.441  		'
00:06:39.441    18:51:11 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:06:39.441  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:39.441  		--rc genhtml_branch_coverage=1
00:06:39.441  		--rc genhtml_function_coverage=1
00:06:39.441  		--rc genhtml_legend=1
00:06:39.441  		--rc geninfo_all_blocks=1
00:06:39.441  		--rc geninfo_unexecuted_blocks=1
00:06:39.441  		
00:06:39.441  		'
00:06:39.441   18:51:11 rpc -- rpc/rpc.sh@65 -- # spdk_pid=73355
00:06:39.441   18:51:11 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:06:39.441   18:51:11 rpc -- rpc/rpc.sh@67 -- # waitforlisten 73355
00:06:39.441   18:51:11 rpc -- common/autotest_common.sh@835 -- # '[' -z 73355 ']'
00:06:39.441   18:51:11 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev
00:06:39.441   18:51:11 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:39.441   18:51:11 rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:39.441  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:39.441   18:51:11 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:39.441   18:51:11 rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:39.441   18:51:11 rpc -- common/autotest_common.sh@10 -- # set +x
00:06:39.441  [2024-12-13 18:51:11.229870] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:06:39.441  [2024-12-13 18:51:11.230004] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73355 ]
00:06:39.700  [2024-12-13 18:51:11.368194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:39.700  [2024-12-13 18:51:11.398932] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:06:39.700  [2024-12-13 18:51:11.399007] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 73355' to capture a snapshot of events at runtime.
00:06:39.700  [2024-12-13 18:51:11.399033] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:06:39.700  [2024-12-13 18:51:11.399041] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:06:39.700  [2024-12-13 18:51:11.399047] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid73355 for offline analysis/debug.
00:06:39.700  [2024-12-13 18:51:11.399433] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:06:39.959   18:51:11 rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:39.959   18:51:11 rpc -- common/autotest_common.sh@868 -- # return 0
00:06:39.959   18:51:11 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc
00:06:39.959   18:51:11 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc
00:06:39.959   18:51:11 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:06:39.959   18:51:11 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:06:39.959   18:51:11 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:39.959   18:51:11 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:39.959   18:51:11 rpc -- common/autotest_common.sh@10 -- # set +x
00:06:39.959  ************************************
00:06:39.959  START TEST rpc_integrity
00:06:39.959  ************************************
00:06:39.959   18:51:11 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:06:39.959    18:51:11 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:06:39.959    18:51:11 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:39.959    18:51:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:39.959    18:51:11 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:39.959   18:51:11 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:06:39.959    18:51:11 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:06:39.959   18:51:11 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:06:39.959    18:51:11 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:06:39.959    18:51:11 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:39.959    18:51:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:39.959    18:51:11 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:39.959   18:51:11 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:06:39.959    18:51:11 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:06:39.959    18:51:11 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:39.959    18:51:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:39.959    18:51:11 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:39.959   18:51:11 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:06:39.959  {
00:06:39.959  "aliases": [
00:06:39.959  "f3cb86c2-610c-4b60-a0b2-3b89167683f3"
00:06:39.959  ],
00:06:39.959  "assigned_rate_limits": {
00:06:39.959  "r_mbytes_per_sec": 0,
00:06:39.959  "rw_ios_per_sec": 0,
00:06:39.959  "rw_mbytes_per_sec": 0,
00:06:39.959  "w_mbytes_per_sec": 0
00:06:39.959  },
00:06:39.959  "block_size": 512,
00:06:39.959  "claimed": false,
00:06:39.959  "driver_specific": {},
00:06:39.959  "memory_domains": [
00:06:39.959  {
00:06:39.959  "dma_device_id": "system",
00:06:39.959  "dma_device_type": 1
00:06:39.959  },
00:06:39.959  {
00:06:39.959  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:39.959  "dma_device_type": 2
00:06:39.959  }
00:06:39.959  ],
00:06:39.959  "name": "Malloc0",
00:06:39.959  "num_blocks": 16384,
00:06:39.959  "product_name": "Malloc disk",
00:06:39.959  "supported_io_types": {
00:06:39.959  "abort": true,
00:06:39.959  "compare": false,
00:06:39.959  "compare_and_write": false,
00:06:39.959  "copy": true,
00:06:39.959  "flush": true,
00:06:39.959  "get_zone_info": false,
00:06:39.959  "nvme_admin": false,
00:06:39.959  "nvme_io": false,
00:06:39.959  "nvme_io_md": false,
00:06:39.959  "nvme_iov_md": false,
00:06:39.959  "read": true,
00:06:39.959  "reset": true,
00:06:39.959  "seek_data": false,
00:06:39.959  "seek_hole": false,
00:06:39.959  "unmap": true,
00:06:39.959  "write": true,
00:06:39.959  "write_zeroes": true,
00:06:39.959  "zcopy": true,
00:06:39.959  "zone_append": false,
00:06:39.959  "zone_management": false
00:06:39.959  },
00:06:39.959  "uuid": "f3cb86c2-610c-4b60-a0b2-3b89167683f3",
00:06:39.959  "zoned": false
00:06:39.959  }
00:06:39.959  ]'
00:06:39.959    18:51:11 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:06:40.218   18:51:11 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:06:40.218   18:51:11 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:06:40.218   18:51:11 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:40.218   18:51:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:40.218  [2024-12-13 18:51:11.825990] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:06:40.218  [2024-12-13 18:51:11.826063] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:06:40.218  [2024-12-13 18:51:11.826088] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e89ea0
00:06:40.218  [2024-12-13 18:51:11.826099] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:06:40.218  [2024-12-13 18:51:11.827691] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:06:40.218  [2024-12-13 18:51:11.827736] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:06:40.218  Passthru0
00:06:40.218   18:51:11 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:40.218    18:51:11 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:06:40.218    18:51:11 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:40.218    18:51:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:40.218    18:51:11 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:40.218   18:51:11 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:06:40.218  {
00:06:40.218  "aliases": [
00:06:40.218  "f3cb86c2-610c-4b60-a0b2-3b89167683f3"
00:06:40.218  ],
00:06:40.218  "assigned_rate_limits": {
00:06:40.218  "r_mbytes_per_sec": 0,
00:06:40.218  "rw_ios_per_sec": 0,
00:06:40.218  "rw_mbytes_per_sec": 0,
00:06:40.218  "w_mbytes_per_sec": 0
00:06:40.218  },
00:06:40.218  "block_size": 512,
00:06:40.218  "claim_type": "exclusive_write",
00:06:40.218  "claimed": true,
00:06:40.218  "driver_specific": {},
00:06:40.218  "memory_domains": [
00:06:40.218  {
00:06:40.218  "dma_device_id": "system",
00:06:40.218  "dma_device_type": 1
00:06:40.218  },
00:06:40.218  {
00:06:40.218  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:40.218  "dma_device_type": 2
00:06:40.218  }
00:06:40.218  ],
00:06:40.218  "name": "Malloc0",
00:06:40.218  "num_blocks": 16384,
00:06:40.218  "product_name": "Malloc disk",
00:06:40.218  "supported_io_types": {
00:06:40.218  "abort": true,
00:06:40.218  "compare": false,
00:06:40.218  "compare_and_write": false,
00:06:40.218  "copy": true,
00:06:40.218  "flush": true,
00:06:40.218  "get_zone_info": false,
00:06:40.218  "nvme_admin": false,
00:06:40.218  "nvme_io": false,
00:06:40.218  "nvme_io_md": false,
00:06:40.218  "nvme_iov_md": false,
00:06:40.218  "read": true,
00:06:40.218  "reset": true,
00:06:40.218  "seek_data": false,
00:06:40.218  "seek_hole": false,
00:06:40.218  "unmap": true,
00:06:40.218  "write": true,
00:06:40.218  "write_zeroes": true,
00:06:40.218  "zcopy": true,
00:06:40.218  "zone_append": false,
00:06:40.218  "zone_management": false
00:06:40.218  },
00:06:40.218  "uuid": "f3cb86c2-610c-4b60-a0b2-3b89167683f3",
00:06:40.218  "zoned": false
00:06:40.218  },
00:06:40.218  {
00:06:40.218  "aliases": [
00:06:40.218  "fa1850f1-9d6d-5d3c-9c35-348a792785e0"
00:06:40.218  ],
00:06:40.218  "assigned_rate_limits": {
00:06:40.218  "r_mbytes_per_sec": 0,
00:06:40.218  "rw_ios_per_sec": 0,
00:06:40.218  "rw_mbytes_per_sec": 0,
00:06:40.218  "w_mbytes_per_sec": 0
00:06:40.218  },
00:06:40.218  "block_size": 512,
00:06:40.218  "claimed": false,
00:06:40.218  "driver_specific": {
00:06:40.218  "passthru": {
00:06:40.218  "base_bdev_name": "Malloc0",
00:06:40.218  "name": "Passthru0"
00:06:40.218  }
00:06:40.218  },
00:06:40.218  "memory_domains": [
00:06:40.218  {
00:06:40.218  "dma_device_id": "system",
00:06:40.218  "dma_device_type": 1
00:06:40.218  },
00:06:40.218  {
00:06:40.218  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:40.218  "dma_device_type": 2
00:06:40.218  }
00:06:40.218  ],
00:06:40.218  "name": "Passthru0",
00:06:40.218  "num_blocks": 16384,
00:06:40.218  "product_name": "passthru",
00:06:40.218  "supported_io_types": {
00:06:40.218  "abort": true,
00:06:40.218  "compare": false,
00:06:40.218  "compare_and_write": false,
00:06:40.218  "copy": true,
00:06:40.218  "flush": true,
00:06:40.218  "get_zone_info": false,
00:06:40.218  "nvme_admin": false,
00:06:40.218  "nvme_io": false,
00:06:40.218  "nvme_io_md": false,
00:06:40.218  "nvme_iov_md": false,
00:06:40.218  "read": true,
00:06:40.218  "reset": true,
00:06:40.218  "seek_data": false,
00:06:40.218  "seek_hole": false,
00:06:40.218  "unmap": true,
00:06:40.218  "write": true,
00:06:40.218  "write_zeroes": true,
00:06:40.218  "zcopy": true,
00:06:40.218  "zone_append": false,
00:06:40.218  "zone_management": false
00:06:40.218  },
00:06:40.218  "uuid": "fa1850f1-9d6d-5d3c-9c35-348a792785e0",
00:06:40.218  "zoned": false
00:06:40.218  }
00:06:40.218  ]'
00:06:40.218    18:51:11 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length
00:06:40.218   18:51:11 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:06:40.218   18:51:11 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:06:40.218   18:51:11 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:40.218   18:51:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:40.218   18:51:11 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:40.218   18:51:11 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:06:40.218   18:51:11 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:40.218   18:51:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:40.218   18:51:11 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:40.218    18:51:11 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:06:40.218    18:51:11 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:40.218    18:51:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:40.218    18:51:11 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:40.218   18:51:11 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:06:40.218    18:51:11 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length
00:06:40.218   18:51:11 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:06:40.218  
00:06:40.218  real	0m0.329s
00:06:40.218  user	0m0.217s
00:06:40.218  sys	0m0.038s
00:06:40.218  ************************************
00:06:40.218  END TEST rpc_integrity
00:06:40.218  ************************************
00:06:40.218   18:51:11 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:40.218   18:51:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
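(Note: the rpc_integrity test above exercises the malloc/passthru bdev lifecycle over the JSON-RPC socket. A minimal stand-alone sketch of the same sequence, assuming a running spdk_tgt on the default /var/tmp/spdk.sock and the stock scripts/rpc.py client; the $MALLOC variable is just illustrative:

  # create an 8 MB malloc bdev with 512-byte blocks; rpc.py prints the assigned name (Malloc0 in this run)
  MALLOC=$(./scripts/rpc.py bdev_malloc_create 8 512)
  # layer a passthru bdev on top of it
  ./scripts/rpc.py bdev_passthru_create -b "$MALLOC" -p Passthru0
  ./scripts/rpc.py bdev_get_bdevs | jq length      # expect 2
  # tear down in reverse order and confirm the list is empty again
  ./scripts/rpc.py bdev_passthru_delete Passthru0
  ./scripts/rpc.py bdev_malloc_delete "$MALLOC"
  ./scripts/rpc.py bdev_get_bdevs | jq length      # expect 0
)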
00:06:40.218   18:51:12 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:06:40.218   18:51:12 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:40.218   18:51:12 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:40.477   18:51:12 rpc -- common/autotest_common.sh@10 -- # set +x
00:06:40.477  ************************************
00:06:40.477  START TEST rpc_plugins
00:06:40.477  ************************************
00:06:40.477   18:51:12 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins
00:06:40.477    18:51:12 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc
00:06:40.477    18:51:12 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:40.477    18:51:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:06:40.477    18:51:12 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:40.477   18:51:12 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1
00:06:40.477    18:51:12 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
00:06:40.477    18:51:12 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:40.477    18:51:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:06:40.477    18:51:12 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:40.477   18:51:12 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[
00:06:40.477  {
00:06:40.477  "aliases": [
00:06:40.477  "3069cb36-8876-44e2-ab37-c207f1a7a0d0"
00:06:40.477  ],
00:06:40.477  "assigned_rate_limits": {
00:06:40.477  "r_mbytes_per_sec": 0,
00:06:40.477  "rw_ios_per_sec": 0,
00:06:40.477  "rw_mbytes_per_sec": 0,
00:06:40.477  "w_mbytes_per_sec": 0
00:06:40.477  },
00:06:40.477  "block_size": 4096,
00:06:40.477  "claimed": false,
00:06:40.477  "driver_specific": {},
00:06:40.477  "memory_domains": [
00:06:40.477  {
00:06:40.477  "dma_device_id": "system",
00:06:40.477  "dma_device_type": 1
00:06:40.477  },
00:06:40.477  {
00:06:40.477  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:40.477  "dma_device_type": 2
00:06:40.477  }
00:06:40.477  ],
00:06:40.477  "name": "Malloc1",
00:06:40.477  "num_blocks": 256,
00:06:40.477  "product_name": "Malloc disk",
00:06:40.477  "supported_io_types": {
00:06:40.477  "abort": true,
00:06:40.477  "compare": false,
00:06:40.477  "compare_and_write": false,
00:06:40.477  "copy": true,
00:06:40.477  "flush": true,
00:06:40.477  "get_zone_info": false,
00:06:40.477  "nvme_admin": false,
00:06:40.477  "nvme_io": false,
00:06:40.477  "nvme_io_md": false,
00:06:40.477  "nvme_iov_md": false,
00:06:40.477  "read": true,
00:06:40.477  "reset": true,
00:06:40.477  "seek_data": false,
00:06:40.477  "seek_hole": false,
00:06:40.477  "unmap": true,
00:06:40.477  "write": true,
00:06:40.477  "write_zeroes": true,
00:06:40.477  "zcopy": true,
00:06:40.477  "zone_append": false,
00:06:40.477  "zone_management": false
00:06:40.477  },
00:06:40.477  "uuid": "3069cb36-8876-44e2-ab37-c207f1a7a0d0",
00:06:40.477  "zoned": false
00:06:40.477  }
00:06:40.478  ]'
00:06:40.478    18:51:12 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length
00:06:40.478   18:51:12 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']'
00:06:40.478   18:51:12 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1
00:06:40.478   18:51:12 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:40.478   18:51:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:06:40.478   18:51:12 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:40.478    18:51:12 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs
00:06:40.478    18:51:12 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:40.478    18:51:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:06:40.478    18:51:12 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:40.478   18:51:12 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]'
00:06:40.478    18:51:12 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length
00:06:40.478   18:51:12 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']'
00:06:40.478  
00:06:40.478  real	0m0.166s
00:06:40.478  user	0m0.109s
00:06:40.478  sys	0m0.019s
00:06:40.478   18:51:12 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:40.478  ************************************
00:06:40.478  END TEST rpc_plugins
00:06:40.478   18:51:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:06:40.478  ************************************
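(Note: rpc_plugins drives the same bdev lifecycle through an out-of-tree rpc.py plugin; the test's rpc_plugin module exposes create_malloc/delete_malloc commands. A rough sketch of invoking a plugin this way, assuming the plugin module is importable, e.g. by putting its directory on PYTHONPATH (the path below is a placeholder):

  # load the plugin module and call the commands it registers
  PYTHONPATH=/path/to/plugin/dir ./scripts/rpc.py --plugin rpc_plugin create_malloc
  ./scripts/rpc.py bdev_get_bdevs | jq length                          # expect 1
  PYTHONPATH=/path/to/plugin/dir ./scripts/rpc.py --plugin rpc_plugin delete_malloc Malloc1
)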
00:06:40.478   18:51:12 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test
00:06:40.478   18:51:12 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:40.478   18:51:12 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:40.478   18:51:12 rpc -- common/autotest_common.sh@10 -- # set +x
00:06:40.478  ************************************
00:06:40.478  START TEST rpc_trace_cmd_test
00:06:40.478  ************************************
00:06:40.478   18:51:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test
00:06:40.478   18:51:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info
00:06:40.478    18:51:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info
00:06:40.478    18:51:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:40.478    18:51:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:06:40.478    18:51:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:40.478   18:51:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{
00:06:40.478  "bdev": {
00:06:40.478  "mask": "0x8",
00:06:40.478  "tpoint_mask": "0xffffffffffffffff"
00:06:40.478  },
00:06:40.478  "bdev_nvme": {
00:06:40.478  "mask": "0x4000",
00:06:40.478  "tpoint_mask": "0x0"
00:06:40.478  },
00:06:40.478  "bdev_raid": {
00:06:40.478  "mask": "0x20000",
00:06:40.478  "tpoint_mask": "0x0"
00:06:40.478  },
00:06:40.478  "blob": {
00:06:40.478  "mask": "0x10000",
00:06:40.478  "tpoint_mask": "0x0"
00:06:40.478  },
00:06:40.478  "blobfs": {
00:06:40.478  "mask": "0x80",
00:06:40.478  "tpoint_mask": "0x0"
00:06:40.478  },
00:06:40.478  "dsa": {
00:06:40.478  "mask": "0x200",
00:06:40.478  "tpoint_mask": "0x0"
00:06:40.478  },
00:06:40.478  "ftl": {
00:06:40.478  "mask": "0x40",
00:06:40.478  "tpoint_mask": "0x0"
00:06:40.478  },
00:06:40.478  "iaa": {
00:06:40.478  "mask": "0x1000",
00:06:40.478  "tpoint_mask": "0x0"
00:06:40.478  },
00:06:40.478  "iscsi_conn": {
00:06:40.478  "mask": "0x2",
00:06:40.478  "tpoint_mask": "0x0"
00:06:40.478  },
00:06:40.478  "nvme_pcie": {
00:06:40.478  "mask": "0x800",
00:06:40.478  "tpoint_mask": "0x0"
00:06:40.478  },
00:06:40.478  "nvme_tcp": {
00:06:40.478  "mask": "0x2000",
00:06:40.478  "tpoint_mask": "0x0"
00:06:40.478  },
00:06:40.478  "nvmf_rdma": {
00:06:40.478  "mask": "0x10",
00:06:40.478  "tpoint_mask": "0x0"
00:06:40.478  },
00:06:40.478  "nvmf_tcp": {
00:06:40.478  "mask": "0x20",
00:06:40.478  "tpoint_mask": "0x0"
00:06:40.478  },
00:06:40.478  "scheduler": {
00:06:40.478  "mask": "0x40000",
00:06:40.478  "tpoint_mask": "0x0"
00:06:40.478  },
00:06:40.478  "scsi": {
00:06:40.478  "mask": "0x4",
00:06:40.478  "tpoint_mask": "0x0"
00:06:40.478  },
00:06:40.478  "sock": {
00:06:40.478  "mask": "0x8000",
00:06:40.478  "tpoint_mask": "0x0"
00:06:40.478  },
00:06:40.478  "thread": {
00:06:40.478  "mask": "0x400",
00:06:40.478  "tpoint_mask": "0x0"
00:06:40.478  },
00:06:40.478  "tpoint_group_mask": "0x8",
00:06:40.478  "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid73355"
00:06:40.478  }'
00:06:40.478    18:51:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length
00:06:40.737   18:51:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']'
00:06:40.737    18:51:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")'
00:06:40.737   18:51:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']'
00:06:40.737    18:51:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")'
00:06:40.737   18:51:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']'
00:06:40.737    18:51:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")'
00:06:40.737   18:51:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']'
00:06:40.737    18:51:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask
00:06:40.737   18:51:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']'
00:06:40.737  
00:06:40.737  real	0m0.278s
00:06:40.737  user	0m0.246s
00:06:40.737  sys	0m0.024s
00:06:40.737   18:51:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:40.737  ************************************
00:06:40.737  END TEST rpc_trace_cmd_test
00:06:40.737  ************************************
00:06:40.737   18:51:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
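(Note: rpc_trace_cmd_test reads the tracepoint state back with trace_get_info and checks the group mask, the shared-memory path, and that the bdev group's tpoint_mask is non-zero. The same checks can be made by hand against a target started with bdev tracepoints enabled, as above:

  # overall group mask and the shm file backing the trace ring
  ./scripts/rpc.py trace_get_info | jq -r .tpoint_group_mask   # e.g. "0x8" (the bdev group)
  ./scripts/rpc.py trace_get_info | jq -r .tpoint_shm_path     # e.g. /dev/shm/spdk_tgt_trace.pid<pid>
  # per-group mask for the bdev tracepoints must not be 0x0
  ./scripts/rpc.py trace_get_info | jq -r .bdev.tpoint_mask
)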
00:06:40.996   18:51:12 rpc -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]]
00:06:40.996   18:51:12 rpc -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc
00:06:40.996   18:51:12 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:40.996   18:51:12 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:40.996   18:51:12 rpc -- common/autotest_common.sh@10 -- # set +x
00:06:40.996  ************************************
00:06:40.996  START TEST go_rpc
00:06:40.996  ************************************
00:06:40.996   18:51:12 rpc.go_rpc -- common/autotest_common.sh@1129 -- # go_rpc
00:06:40.996    18:51:12 rpc.go_rpc -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc
00:06:40.996   18:51:12 rpc.go_rpc -- rpc/rpc.sh@51 -- # bdevs='[]'
00:06:40.996    18:51:12 rpc.go_rpc -- rpc/rpc.sh@52 -- # jq length
00:06:40.996   18:51:12 rpc.go_rpc -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']'
00:06:40.996    18:51:12 rpc.go_rpc -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512
00:06:40.996    18:51:12 rpc.go_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:40.996    18:51:12 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:40.996    18:51:12 rpc.go_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:40.996   18:51:12 rpc.go_rpc -- rpc/rpc.sh@54 -- # malloc=Malloc2
00:06:40.996    18:51:12 rpc.go_rpc -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc
00:06:40.996   18:51:12 rpc.go_rpc -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["6dbfa130-82a3-4713-b417-2bef74d72f12"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"system","dma_device_type":1},{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"copy":true,"flush":true,"get_zone_info":false,"nvme_admin":false,"nvme_io":false,"nvme_io_md":false,"nvme_iov_md":false,"read":true,"reset":true,"seek_data":false,"seek_hole":false,"unmap":true,"write":true,"write_zeroes":true,"zcopy":true,"zone_append":false,"zone_management":false},"uuid":"6dbfa130-82a3-4713-b417-2bef74d72f12","zoned":false}]'
00:06:40.996    18:51:12 rpc.go_rpc -- rpc/rpc.sh@57 -- # jq length
00:06:40.996   18:51:12 rpc.go_rpc -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']'
00:06:40.996   18:51:12 rpc.go_rpc -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2
00:06:40.996   18:51:12 rpc.go_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:40.996   18:51:12 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:40.996   18:51:12 rpc.go_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:40.996    18:51:12 rpc.go_rpc -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc
00:06:40.996   18:51:12 rpc.go_rpc -- rpc/rpc.sh@60 -- # bdevs='[]'
00:06:40.996    18:51:12 rpc.go_rpc -- rpc/rpc.sh@61 -- # jq length
00:06:41.255   18:51:12 rpc.go_rpc -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']'
00:06:41.255  
00:06:41.255  real	0m0.222s
00:06:41.255  user	0m0.159s
00:06:41.255  sys	0m0.032s
00:06:41.255   18:51:12 rpc.go_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:41.255   18:51:12 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:41.255  ************************************
00:06:41.255  END TEST go_rpc
00:06:41.255  ************************************
00:06:41.255   18:51:12 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd
00:06:41.255   18:51:12 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity
00:06:41.255   18:51:12 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:41.255   18:51:12 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:41.255   18:51:12 rpc -- common/autotest_common.sh@10 -- # set +x
00:06:41.255  ************************************
00:06:41.255  START TEST rpc_daemon_integrity
00:06:41.255  ************************************
00:06:41.255   18:51:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:06:41.255    18:51:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:06:41.255    18:51:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:41.255    18:51:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:41.255    18:51:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:41.255   18:51:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:06:41.255    18:51:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length
00:06:41.255   18:51:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:06:41.255    18:51:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:06:41.255    18:51:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:41.255    18:51:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:41.255    18:51:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:41.255   18:51:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc3
00:06:41.255    18:51:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:06:41.255    18:51:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:41.255    18:51:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:41.255    18:51:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:41.255   18:51:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:06:41.255  {
00:06:41.255  "aliases": [
00:06:41.255  "9c4cc768-fb5d-4f32-8f98-42728e1c98c3"
00:06:41.255  ],
00:06:41.255  "assigned_rate_limits": {
00:06:41.255  "r_mbytes_per_sec": 0,
00:06:41.255  "rw_ios_per_sec": 0,
00:06:41.255  "rw_mbytes_per_sec": 0,
00:06:41.255  "w_mbytes_per_sec": 0
00:06:41.255  },
00:06:41.255  "block_size": 512,
00:06:41.255  "claimed": false,
00:06:41.255  "driver_specific": {},
00:06:41.255  "memory_domains": [
00:06:41.255  {
00:06:41.255  "dma_device_id": "system",
00:06:41.255  "dma_device_type": 1
00:06:41.255  },
00:06:41.255  {
00:06:41.255  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:41.255  "dma_device_type": 2
00:06:41.255  }
00:06:41.255  ],
00:06:41.255  "name": "Malloc3",
00:06:41.255  "num_blocks": 16384,
00:06:41.255  "product_name": "Malloc disk",
00:06:41.255  "supported_io_types": {
00:06:41.255  "abort": true,
00:06:41.255  "compare": false,
00:06:41.255  "compare_and_write": false,
00:06:41.255  "copy": true,
00:06:41.255  "flush": true,
00:06:41.255  "get_zone_info": false,
00:06:41.255  "nvme_admin": false,
00:06:41.255  "nvme_io": false,
00:06:41.255  "nvme_io_md": false,
00:06:41.255  "nvme_iov_md": false,
00:06:41.255  "read": true,
00:06:41.255  "reset": true,
00:06:41.255  "seek_data": false,
00:06:41.255  "seek_hole": false,
00:06:41.255  "unmap": true,
00:06:41.255  "write": true,
00:06:41.255  "write_zeroes": true,
00:06:41.255  "zcopy": true,
00:06:41.255  "zone_append": false,
00:06:41.255  "zone_management": false
00:06:41.255  },
00:06:41.255  "uuid": "9c4cc768-fb5d-4f32-8f98-42728e1c98c3",
00:06:41.255  "zoned": false
00:06:41.255  }
00:06:41.255  ]'
00:06:41.255    18:51:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length
00:06:41.255   18:51:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:06:41.255   18:51:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0
00:06:41.255   18:51:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:41.255   18:51:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:41.255  [2024-12-13 18:51:13.038384] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:06:41.255  [2024-12-13 18:51:13.038459] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:06:41.255  [2024-12-13 18:51:13.038474] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e574e0
00:06:41.255  [2024-12-13 18:51:13.038483] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:06:41.255  [2024-12-13 18:51:13.039807] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:06:41.255  [2024-12-13 18:51:13.039835] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:06:41.255  Passthru0
00:06:41.255   18:51:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:41.255    18:51:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:06:41.256    18:51:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:41.256    18:51:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:41.256    18:51:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:41.256   18:51:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:06:41.256  {
00:06:41.256  "aliases": [
00:06:41.256  "9c4cc768-fb5d-4f32-8f98-42728e1c98c3"
00:06:41.256  ],
00:06:41.256  "assigned_rate_limits": {
00:06:41.256  "r_mbytes_per_sec": 0,
00:06:41.256  "rw_ios_per_sec": 0,
00:06:41.256  "rw_mbytes_per_sec": 0,
00:06:41.256  "w_mbytes_per_sec": 0
00:06:41.256  },
00:06:41.256  "block_size": 512,
00:06:41.256  "claim_type": "exclusive_write",
00:06:41.256  "claimed": true,
00:06:41.256  "driver_specific": {},
00:06:41.256  "memory_domains": [
00:06:41.256  {
00:06:41.256  "dma_device_id": "system",
00:06:41.256  "dma_device_type": 1
00:06:41.256  },
00:06:41.256  {
00:06:41.256  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:41.256  "dma_device_type": 2
00:06:41.256  }
00:06:41.256  ],
00:06:41.256  "name": "Malloc3",
00:06:41.256  "num_blocks": 16384,
00:06:41.256  "product_name": "Malloc disk",
00:06:41.256  "supported_io_types": {
00:06:41.256  "abort": true,
00:06:41.256  "compare": false,
00:06:41.256  "compare_and_write": false,
00:06:41.256  "copy": true,
00:06:41.256  "flush": true,
00:06:41.256  "get_zone_info": false,
00:06:41.256  "nvme_admin": false,
00:06:41.256  "nvme_io": false,
00:06:41.256  "nvme_io_md": false,
00:06:41.256  "nvme_iov_md": false,
00:06:41.256  "read": true,
00:06:41.256  "reset": true,
00:06:41.256  "seek_data": false,
00:06:41.256  "seek_hole": false,
00:06:41.256  "unmap": true,
00:06:41.256  "write": true,
00:06:41.256  "write_zeroes": true,
00:06:41.256  "zcopy": true,
00:06:41.256  "zone_append": false,
00:06:41.256  "zone_management": false
00:06:41.256  },
00:06:41.256  "uuid": "9c4cc768-fb5d-4f32-8f98-42728e1c98c3",
00:06:41.256  "zoned": false
00:06:41.256  },
00:06:41.256  {
00:06:41.256  "aliases": [
00:06:41.256  "de0193d8-1a58-5257-9799-4721227253a4"
00:06:41.256  ],
00:06:41.256  "assigned_rate_limits": {
00:06:41.256  "r_mbytes_per_sec": 0,
00:06:41.256  "rw_ios_per_sec": 0,
00:06:41.256  "rw_mbytes_per_sec": 0,
00:06:41.256  "w_mbytes_per_sec": 0
00:06:41.256  },
00:06:41.256  "block_size": 512,
00:06:41.256  "claimed": false,
00:06:41.256  "driver_specific": {
00:06:41.256  "passthru": {
00:06:41.256  "base_bdev_name": "Malloc3",
00:06:41.256  "name": "Passthru0"
00:06:41.256  }
00:06:41.256  },
00:06:41.256  "memory_domains": [
00:06:41.256  {
00:06:41.256  "dma_device_id": "system",
00:06:41.256  "dma_device_type": 1
00:06:41.256  },
00:06:41.256  {
00:06:41.256  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:41.256  "dma_device_type": 2
00:06:41.256  }
00:06:41.256  ],
00:06:41.256  "name": "Passthru0",
00:06:41.256  "num_blocks": 16384,
00:06:41.256  "product_name": "passthru",
00:06:41.256  "supported_io_types": {
00:06:41.256  "abort": true,
00:06:41.256  "compare": false,
00:06:41.256  "compare_and_write": false,
00:06:41.256  "copy": true,
00:06:41.256  "flush": true,
00:06:41.256  "get_zone_info": false,
00:06:41.256  "nvme_admin": false,
00:06:41.256  "nvme_io": false,
00:06:41.256  "nvme_io_md": false,
00:06:41.256  "nvme_iov_md": false,
00:06:41.256  "read": true,
00:06:41.256  "reset": true,
00:06:41.256  "seek_data": false,
00:06:41.256  "seek_hole": false,
00:06:41.256  "unmap": true,
00:06:41.256  "write": true,
00:06:41.256  "write_zeroes": true,
00:06:41.256  "zcopy": true,
00:06:41.256  "zone_append": false,
00:06:41.256  "zone_management": false
00:06:41.256  },
00:06:41.256  "uuid": "de0193d8-1a58-5257-9799-4721227253a4",
00:06:41.256  "zoned": false
00:06:41.256  }
00:06:41.256  ]'
00:06:41.256    18:51:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length
00:06:41.515   18:51:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:06:41.515   18:51:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:06:41.515   18:51:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:41.515   18:51:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:41.515   18:51:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:41.515   18:51:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3
00:06:41.515   18:51:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:41.515   18:51:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:41.515   18:51:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:41.515    18:51:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:06:41.515    18:51:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:41.515    18:51:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:41.515    18:51:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:41.515   18:51:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:06:41.515    18:51:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length
00:06:41.515   18:51:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:06:41.515  
00:06:41.515  real	0m0.330s
00:06:41.515  user	0m0.222s
00:06:41.515  sys	0m0.038s
00:06:41.515   18:51:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:41.515   18:51:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:41.515  ************************************
00:06:41.515  END TEST rpc_daemon_integrity
00:06:41.515  ************************************
00:06:41.515   18:51:13 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:06:41.515   18:51:13 rpc -- rpc/rpc.sh@84 -- # killprocess 73355
00:06:41.515   18:51:13 rpc -- common/autotest_common.sh@954 -- # '[' -z 73355 ']'
00:06:41.515   18:51:13 rpc -- common/autotest_common.sh@958 -- # kill -0 73355
00:06:41.515    18:51:13 rpc -- common/autotest_common.sh@959 -- # uname
00:06:41.515   18:51:13 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:41.515    18:51:13 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73355
00:06:41.515   18:51:13 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:41.515   18:51:13 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:41.515  killing process with pid 73355
00:06:41.515   18:51:13 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73355'
00:06:41.515   18:51:13 rpc -- common/autotest_common.sh@973 -- # kill 73355
00:06:41.515   18:51:13 rpc -- common/autotest_common.sh@978 -- # wait 73355
00:06:42.083  
00:06:42.083  real	0m2.660s
00:06:42.083  user	0m3.535s
00:06:42.083  sys	0m0.721s
00:06:42.083  ************************************
00:06:42.083  END TEST rpc
00:06:42.083  ************************************
00:06:42.083   18:51:13 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:42.083   18:51:13 rpc -- common/autotest_common.sh@10 -- # set +x
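(Note: after the rpc suite, the target launched at the start of the test is torn down with the killprocess helper: it checks the pid is still alive, confirms it is the expected reactor process rather than something that recycled the pid, then kills and waits on it. A bare-bones equivalent, assuming $spdk_pid holds the pid recorded at startup (73355 in this run):

  if kill -0 "$spdk_pid" 2>/dev/null; then
      # make sure the pid still belongs to the SPDK reactor before killing it
      ps --no-headers -o comm= "$spdk_pid"
      echo "killing process with pid $spdk_pid"
      kill "$spdk_pid"
      wait "$spdk_pid" 2>/dev/null || true
  fi
)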
00:06:42.083   18:51:13  -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh
00:06:42.083   18:51:13  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:42.083   18:51:13  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:42.083   18:51:13  -- common/autotest_common.sh@10 -- # set +x
00:06:42.083  ************************************
00:06:42.083  START TEST skip_rpc
00:06:42.083  ************************************
00:06:42.083   18:51:13 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh
00:06:42.083  * Looking for test storage...
00:06:42.083  * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc
00:06:42.083    18:51:13 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:06:42.083     18:51:13 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:06:42.083     18:51:13 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:06:42.083    18:51:13 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:06:42.083    18:51:13 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:42.083    18:51:13 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:42.083    18:51:13 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:42.083    18:51:13 skip_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:06:42.083    18:51:13 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:06:42.083    18:51:13 skip_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:06:42.083    18:51:13 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:06:42.083    18:51:13 skip_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:06:42.083    18:51:13 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:06:42.083    18:51:13 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:06:42.083    18:51:13 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:42.083    18:51:13 skip_rpc -- scripts/common.sh@344 -- # case "$op" in
00:06:42.083    18:51:13 skip_rpc -- scripts/common.sh@345 -- # : 1
00:06:42.083    18:51:13 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:42.083    18:51:13 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:42.083     18:51:13 skip_rpc -- scripts/common.sh@365 -- # decimal 1
00:06:42.083     18:51:13 skip_rpc -- scripts/common.sh@353 -- # local d=1
00:06:42.083     18:51:13 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:42.083     18:51:13 skip_rpc -- scripts/common.sh@355 -- # echo 1
00:06:42.083    18:51:13 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:06:42.083     18:51:13 skip_rpc -- scripts/common.sh@366 -- # decimal 2
00:06:42.083     18:51:13 skip_rpc -- scripts/common.sh@353 -- # local d=2
00:06:42.083     18:51:13 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:42.083     18:51:13 skip_rpc -- scripts/common.sh@355 -- # echo 2
00:06:42.083    18:51:13 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:06:42.083    18:51:13 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:42.083    18:51:13 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:42.083    18:51:13 skip_rpc -- scripts/common.sh@368 -- # return 0
00:06:42.083    18:51:13 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:42.083    18:51:13 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:06:42.083  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:42.083  		--rc genhtml_branch_coverage=1
00:06:42.083  		--rc genhtml_function_coverage=1
00:06:42.083  		--rc genhtml_legend=1
00:06:42.083  		--rc geninfo_all_blocks=1
00:06:42.083  		--rc geninfo_unexecuted_blocks=1
00:06:42.083  		
00:06:42.083  		'
00:06:42.083    18:51:13 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:06:42.083  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:42.083  		--rc genhtml_branch_coverage=1
00:06:42.083  		--rc genhtml_function_coverage=1
00:06:42.083  		--rc genhtml_legend=1
00:06:42.083  		--rc geninfo_all_blocks=1
00:06:42.083  		--rc geninfo_unexecuted_blocks=1
00:06:42.083  		
00:06:42.083  		'
00:06:42.083    18:51:13 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:06:42.083  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:42.083  		--rc genhtml_branch_coverage=1
00:06:42.083  		--rc genhtml_function_coverage=1
00:06:42.083  		--rc genhtml_legend=1
00:06:42.083  		--rc geninfo_all_blocks=1
00:06:42.083  		--rc geninfo_unexecuted_blocks=1
00:06:42.083  		
00:06:42.083  		'
00:06:42.083    18:51:13 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:06:42.083  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:42.083  		--rc genhtml_branch_coverage=1
00:06:42.083  		--rc genhtml_function_coverage=1
00:06:42.083  		--rc genhtml_legend=1
00:06:42.083  		--rc geninfo_all_blocks=1
00:06:42.083  		--rc geninfo_unexecuted_blocks=1
00:06:42.083  		
00:06:42.083  		'
00:06:42.083   18:51:13 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:06:42.083   18:51:13 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt
00:06:42.083   18:51:13 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc
00:06:42.083   18:51:13 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:42.083   18:51:13 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:42.083   18:51:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:42.083  ************************************
00:06:42.083  START TEST skip_rpc
00:06:42.083  ************************************
00:06:42.083   18:51:13 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc
00:06:42.083   18:51:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=73611
00:06:42.083   18:51:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:06:42.083   18:51:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5
00:06:42.083   18:51:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1
00:06:42.343  [2024-12-13 18:51:13.954870] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:06:42.343  [2024-12-13 18:51:13.954980] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73611 ]
00:06:42.343  [2024-12-13 18:51:14.101695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:42.343  [2024-12-13 18:51:14.140467] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:06:47.608   18:51:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version
00:06:47.608   18:51:18 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0
00:06:47.608   18:51:18 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version
00:06:47.608   18:51:18 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:06:47.608   18:51:18 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:47.608    18:51:18 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:06:47.608   18:51:18 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:47.608   18:51:18 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version
00:06:47.608   18:51:18 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:47.608   18:51:18 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:47.608  2024/12/13 18:51:18 error on client creation, err: error during client creation for Unix socket, err: could not connect to a Unix socket on address /var/tmp/spdk.sock, err: dial unix /var/tmp/spdk.sock: connect: no such file or directory
00:06:47.608   18:51:18 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:06:47.608   18:51:18 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1
00:06:47.608   18:51:18 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:06:47.608   18:51:18 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:06:47.608   18:51:18 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:06:47.608   18:51:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT
00:06:47.608   18:51:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 73611
00:06:47.608   18:51:18 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 73611 ']'
00:06:47.608   18:51:18 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 73611
00:06:47.608    18:51:18 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname
00:06:47.608   18:51:18 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:47.608    18:51:18 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73611
00:06:47.608   18:51:18 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:47.608  killing process with pid 73611
00:06:47.608   18:51:18 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:47.608   18:51:18 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73611'
00:06:47.608   18:51:18 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 73611
00:06:47.608   18:51:18 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 73611
00:06:47.608  
00:06:47.608  real	0m5.388s
00:06:47.608  user	0m5.024s
00:06:47.608  sys	0m0.274s
00:06:47.608  ************************************
00:06:47.608  END TEST skip_rpc
00:06:47.608  ************************************
00:06:47.608   18:51:19 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:47.608   18:51:19 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
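(Note: the skip_rpc case starts the target with --no-rpc-server, so no Unix-domain RPC socket is created and any client call is expected to fail with the "could not connect to a Unix socket" error seen above. A small sketch of that expectation, assuming the repo's build/bin/spdk_tgt and scripts/rpc.py:

  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  spdk_pid=$!
  sleep 5
  # with no RPC server, nothing listens on /var/tmp/spdk.sock, so this must fail
  if ./scripts/rpc.py spdk_get_version; then
      echo "unexpected: RPC succeeded without an RPC server"
  fi
  kill "$spdk_pid"; wait "$spdk_pid" 2>/dev/null || true
)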
00:06:47.608   18:51:19 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json
00:06:47.608   18:51:19 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:47.608   18:51:19 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:47.608   18:51:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:47.608  ************************************
00:06:47.608  START TEST skip_rpc_with_json
00:06:47.608  ************************************
00:06:47.608   18:51:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json
00:06:47.608   18:51:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config
00:06:47.608   18:51:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=73703
00:06:47.608   18:51:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:06:47.608   18:51:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 73703
00:06:47.608   18:51:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:06:47.608   18:51:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 73703 ']'
00:06:47.608   18:51:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:47.608   18:51:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:47.608  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:47.608   18:51:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:47.608   18:51:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:47.608   18:51:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:06:47.608  [2024-12-13 18:51:19.378704] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:06:47.608  [2024-12-13 18:51:19.378814] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73703 ]
00:06:47.867  [2024-12-13 18:51:19.518257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:47.867  [2024-12-13 18:51:19.549594] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:06:48.125   18:51:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:48.125   18:51:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0
00:06:48.125   18:51:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp
00:06:48.125   18:51:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:48.125   18:51:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:06:48.125  [2024-12-13 18:51:19.805519] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist
00:06:48.125  2024/12/13 18:51:19 error on JSON-RPC call, method: nvmf_get_transports, params: map[trtype:tcp], err: error received for nvmf_get_transports method, err: Code=-19 Msg=No such device
00:06:48.125  request:
00:06:48.125  {
00:06:48.125  "method": "nvmf_get_transports",
00:06:48.125  "params": {
00:06:48.125  "trtype": "tcp"
00:06:48.125  }
00:06:48.125  }
00:06:48.125  Got JSON-RPC error response
00:06:48.125  GoRPCClient: error on JSON-RPC call
00:06:48.125   18:51:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:06:48.125   18:51:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp
00:06:48.125   18:51:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:48.125   18:51:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:06:48.125  [2024-12-13 18:51:19.817735] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:06:48.125   18:51:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:48.125   18:51:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config
00:06:48.125   18:51:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:48.126   18:51:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:06:48.384   18:51:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:48.384   18:51:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:06:48.384  {
00:06:48.384  "subsystems": [
00:06:48.384  {
00:06:48.384  "subsystem": "fsdev",
00:06:48.384  "config": [
00:06:48.384  {
00:06:48.384  "method": "fsdev_set_opts",
00:06:48.384  "params": {
00:06:48.384  "fsdev_io_cache_size": 256,
00:06:48.384  "fsdev_io_pool_size": 65535
00:06:48.384  }
00:06:48.384  }
00:06:48.384  ]
00:06:48.384  },
00:06:48.384  {
00:06:48.384  "subsystem": "vfio_user_target",
00:06:48.384  "config": null
00:06:48.384  },
00:06:48.384  {
00:06:48.384  "subsystem": "keyring",
00:06:48.384  "config": []
00:06:48.384  },
00:06:48.384  {
00:06:48.384  "subsystem": "iobuf",
00:06:48.384  "config": [
00:06:48.384  {
00:06:48.384  "method": "iobuf_set_options",
00:06:48.384  "params": {
00:06:48.384  "enable_numa": false,
00:06:48.384  "large_bufsize": 135168,
00:06:48.384  "large_pool_count": 1024,
00:06:48.384  "small_bufsize": 8192,
00:06:48.384  "small_pool_count": 8192
00:06:48.384  }
00:06:48.384  }
00:06:48.384  ]
00:06:48.384  },
00:06:48.384  {
00:06:48.384  "subsystem": "sock",
00:06:48.384  "config": [
00:06:48.384  {
00:06:48.384  "method": "sock_set_default_impl",
00:06:48.384  "params": {
00:06:48.384  "impl_name": "posix"
00:06:48.384  }
00:06:48.384  },
00:06:48.384  {
00:06:48.384  "method": "sock_impl_set_options",
00:06:48.384  "params": {
00:06:48.384  "enable_ktls": false,
00:06:48.384  "enable_placement_id": 0,
00:06:48.384  "enable_quickack": false,
00:06:48.384  "enable_recv_pipe": true,
00:06:48.384  "enable_zerocopy_send_client": false,
00:06:48.384  "enable_zerocopy_send_server": true,
00:06:48.384  "impl_name": "ssl",
00:06:48.384  "recv_buf_size": 4096,
00:06:48.384  "send_buf_size": 4096,
00:06:48.384  "tls_version": 0,
00:06:48.384  "zerocopy_threshold": 0
00:06:48.384  }
00:06:48.384  },
00:06:48.384  {
00:06:48.384  "method": "sock_impl_set_options",
00:06:48.384  "params": {
00:06:48.384  "enable_ktls": false,
00:06:48.384  "enable_placement_id": 0,
00:06:48.384  "enable_quickack": false,
00:06:48.384  "enable_recv_pipe": true,
00:06:48.384  "enable_zerocopy_send_client": false,
00:06:48.384  "enable_zerocopy_send_server": true,
00:06:48.384  "impl_name": "posix",
00:06:48.384  "recv_buf_size": 2097152,
00:06:48.384  "send_buf_size": 2097152,
00:06:48.384  "tls_version": 0,
00:06:48.384  "zerocopy_threshold": 0
00:06:48.384  }
00:06:48.384  }
00:06:48.384  ]
00:06:48.384  },
00:06:48.384  {
00:06:48.384  "subsystem": "vmd",
00:06:48.384  "config": []
00:06:48.384  },
00:06:48.384  {
00:06:48.384  "subsystem": "accel",
00:06:48.384  "config": [
00:06:48.384  {
00:06:48.384  "method": "accel_set_options",
00:06:48.384  "params": {
00:06:48.384  "buf_count": 2048,
00:06:48.384  "large_cache_size": 16,
00:06:48.384  "sequence_count": 2048,
00:06:48.384  "small_cache_size": 128,
00:06:48.384  "task_count": 2048
00:06:48.384  }
00:06:48.384  }
00:06:48.384  ]
00:06:48.384  },
00:06:48.384  {
00:06:48.384  "subsystem": "bdev",
00:06:48.385  "config": [
00:06:48.385  {
00:06:48.385  "method": "bdev_set_options",
00:06:48.385  "params": {
00:06:48.385  "bdev_auto_examine": true,
00:06:48.385  "bdev_io_cache_size": 256,
00:06:48.385  "bdev_io_pool_size": 65535,
00:06:48.385  "iobuf_large_cache_size": 16,
00:06:48.385  "iobuf_small_cache_size": 128
00:06:48.385  }
00:06:48.385  },
00:06:48.385  {
00:06:48.385  "method": "bdev_raid_set_options",
00:06:48.385  "params": {
00:06:48.385  "process_max_bandwidth_mb_sec": 0,
00:06:48.385  "process_window_size_kb": 1024
00:06:48.385  }
00:06:48.385  },
00:06:48.385  {
00:06:48.385  "method": "bdev_iscsi_set_options",
00:06:48.385  "params": {
00:06:48.385  "timeout_sec": 30
00:06:48.385  }
00:06:48.385  },
00:06:48.385  {
00:06:48.385  "method": "bdev_nvme_set_options",
00:06:48.385  "params": {
00:06:48.385  "action_on_timeout": "none",
00:06:48.385  "allow_accel_sequence": false,
00:06:48.385  "arbitration_burst": 0,
00:06:48.385  "bdev_retry_count": 3,
00:06:48.385  "ctrlr_loss_timeout_sec": 0,
00:06:48.385  "delay_cmd_submit": true,
00:06:48.385  "dhchap_dhgroups": [
00:06:48.385  "null",
00:06:48.385  "ffdhe2048",
00:06:48.385  "ffdhe3072",
00:06:48.385  "ffdhe4096",
00:06:48.385  "ffdhe6144",
00:06:48.385  "ffdhe8192"
00:06:48.385  ],
00:06:48.385  "dhchap_digests": [
00:06:48.385  "sha256",
00:06:48.385  "sha384",
00:06:48.385  "sha512"
00:06:48.385  ],
00:06:48.385  "disable_auto_failback": false,
00:06:48.385  "fast_io_fail_timeout_sec": 0,
00:06:48.385  "generate_uuids": false,
00:06:48.385  "high_priority_weight": 0,
00:06:48.385  "io_path_stat": false,
00:06:48.385  "io_queue_requests": 0,
00:06:48.385  "keep_alive_timeout_ms": 10000,
00:06:48.385  "low_priority_weight": 0,
00:06:48.385  "medium_priority_weight": 0,
00:06:48.385  "nvme_adminq_poll_period_us": 10000,
00:06:48.385  "nvme_error_stat": false,
00:06:48.385  "nvme_ioq_poll_period_us": 0,
00:06:48.385  "rdma_cm_event_timeout_ms": 0,
00:06:48.385  "rdma_max_cq_size": 0,
00:06:48.385  "rdma_srq_size": 0,
00:06:48.385  "rdma_umr_per_io": false,
00:06:48.385  "reconnect_delay_sec": 0,
00:06:48.385  "timeout_admin_us": 0,
00:06:48.385  "timeout_us": 0,
00:06:48.385  "transport_ack_timeout": 0,
00:06:48.385  "transport_retry_count": 4,
00:06:48.385  "transport_tos": 0
00:06:48.385  }
00:06:48.385  },
00:06:48.385  {
00:06:48.385  "method": "bdev_nvme_set_hotplug",
00:06:48.385  "params": {
00:06:48.385  "enable": false,
00:06:48.385  "period_us": 100000
00:06:48.385  }
00:06:48.385  },
00:06:48.385  {
00:06:48.385  "method": "bdev_wait_for_examine"
00:06:48.385  }
00:06:48.385  ]
00:06:48.385  },
00:06:48.385  {
00:06:48.385  "subsystem": "scsi",
00:06:48.385  "config": null
00:06:48.385  },
00:06:48.385  {
00:06:48.385  "subsystem": "scheduler",
00:06:48.385  "config": [
00:06:48.385  {
00:06:48.385  "method": "framework_set_scheduler",
00:06:48.385  "params": {
00:06:48.385  "name": "static"
00:06:48.385  }
00:06:48.385  }
00:06:48.385  ]
00:06:48.385  },
00:06:48.385  {
00:06:48.385  "subsystem": "vhost_scsi",
00:06:48.385  "config": []
00:06:48.385  },
00:06:48.385  {
00:06:48.385  "subsystem": "vhost_blk",
00:06:48.385  "config": []
00:06:48.385  },
00:06:48.385  {
00:06:48.385  "subsystem": "ublk",
00:06:48.385  "config": []
00:06:48.385  },
00:06:48.385  {
00:06:48.385  "subsystem": "nbd",
00:06:48.385  "config": []
00:06:48.385  },
00:06:48.385  {
00:06:48.385  "subsystem": "nvmf",
00:06:48.385  "config": [
00:06:48.385  {
00:06:48.385  "method": "nvmf_set_config",
00:06:48.385  "params": {
00:06:48.385  "admin_cmd_passthru": {
00:06:48.385  "identify_ctrlr": false
00:06:48.385  },
00:06:48.385  "dhchap_dhgroups": [
00:06:48.385  "null",
00:06:48.385  "ffdhe2048",
00:06:48.385  "ffdhe3072",
00:06:48.385  "ffdhe4096",
00:06:48.385  "ffdhe6144",
00:06:48.385  "ffdhe8192"
00:06:48.385  ],
00:06:48.385  "dhchap_digests": [
00:06:48.385  "sha256",
00:06:48.385  "sha384",
00:06:48.385  "sha512"
00:06:48.385  ],
00:06:48.385  "discovery_filter": "match_any"
00:06:48.385  }
00:06:48.385  },
00:06:48.385  {
00:06:48.385  "method": "nvmf_set_max_subsystems",
00:06:48.385  "params": {
00:06:48.385  "max_subsystems": 1024
00:06:48.385  }
00:06:48.385  },
00:06:48.385  {
00:06:48.385  "method": "nvmf_set_crdt",
00:06:48.385  "params": {
00:06:48.385  "crdt1": 0,
00:06:48.385  "crdt2": 0,
00:06:48.385  "crdt3": 0
00:06:48.385  }
00:06:48.385  },
00:06:48.385  {
00:06:48.385  "method": "nvmf_create_transport",
00:06:48.385  "params": {
00:06:48.385  "abort_timeout_sec": 1,
00:06:48.385  "ack_timeout": 0,
00:06:48.385  "buf_cache_size": 4294967295,
00:06:48.385  "c2h_success": true,
00:06:48.385  "data_wr_pool_size": 0,
00:06:48.385  "dif_insert_or_strip": false,
00:06:48.385  "in_capsule_data_size": 4096,
00:06:48.385  "io_unit_size": 131072,
00:06:48.385  "max_aq_depth": 128,
00:06:48.385  "max_io_qpairs_per_ctrlr": 127,
00:06:48.385  "max_io_size": 131072,
00:06:48.385  "max_queue_depth": 128,
00:06:48.385  "num_shared_buffers": 511,
00:06:48.385  "sock_priority": 0,
00:06:48.385  "trtype": "TCP",
00:06:48.385  "zcopy": false
00:06:48.385  }
00:06:48.385  }
00:06:48.385  ]
00:06:48.385  },
00:06:48.385  {
00:06:48.385  "subsystem": "iscsi",
00:06:48.385  "config": [
00:06:48.385  {
00:06:48.385  "method": "iscsi_set_options",
00:06:48.385  "params": {
00:06:48.385  "allow_duplicated_isid": false,
00:06:48.385  "chap_group": 0,
00:06:48.385  "data_out_pool_size": 2048,
00:06:48.385  "default_time2retain": 20,
00:06:48.385  "default_time2wait": 2,
00:06:48.385  "disable_chap": false,
00:06:48.385  "error_recovery_level": 0,
00:06:48.385  "first_burst_length": 8192,
00:06:48.385  "immediate_data": true,
00:06:48.385  "immediate_data_pool_size": 16384,
00:06:48.385  "max_connections_per_session": 2,
00:06:48.385  "max_large_datain_per_connection": 64,
00:06:48.385  "max_queue_depth": 64,
00:06:48.385  "max_r2t_per_connection": 4,
00:06:48.385  "max_sessions": 128,
00:06:48.385  "mutual_chap": false,
00:06:48.385  "node_base": "iqn.2016-06.io.spdk",
00:06:48.385  "nop_in_interval": 30,
00:06:48.385  "nop_timeout": 60,
00:06:48.385  "pdu_pool_size": 36864,
00:06:48.385  "require_chap": false
00:06:48.385  }
00:06:48.385  }
00:06:48.385  ]
00:06:48.385  }
00:06:48.385  ]
00:06:48.385  }
00:06:48.385   18:51:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:06:48.385   18:51:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 73703
00:06:48.385   18:51:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 73703 ']'
00:06:48.385   18:51:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 73703
00:06:48.385    18:51:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname
00:06:48.385   18:51:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:48.385    18:51:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73703
00:06:48.385   18:51:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:48.385  killing process with pid 73703
00:06:48.385   18:51:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:48.385   18:51:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73703'
00:06:48.385   18:51:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 73703
00:06:48.385   18:51:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 73703
00:06:48.649   18:51:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=73729
00:06:48.649   18:51:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:06:48.649   18:51:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5
00:06:53.921   18:51:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 73729
00:06:53.921   18:51:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 73729 ']'
00:06:53.921   18:51:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 73729
00:06:53.921    18:51:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname
00:06:53.921   18:51:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:53.921    18:51:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73729
00:06:53.921   18:51:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:53.921  killing process with pid 73729
00:06:53.921   18:51:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:53.921   18:51:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73729'
00:06:53.921   18:51:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 73729
00:06:53.921   18:51:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 73729
00:06:54.180   18:51:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt
00:06:54.180   18:51:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt
00:06:54.180  
00:06:54.180  real	0m6.470s
00:06:54.180  user	0m6.007s
00:06:54.180  sys	0m0.617s
00:06:54.180   18:51:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:54.180   18:51:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:06:54.180  ************************************
00:06:54.180  END TEST skip_rpc_with_json
00:06:54.180  ************************************
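(Note: skip_rpc_with_json is a save/restore round trip: the first target gets a TCP transport created over RPC, its configuration is captured with save_config, and a second target is then booted non-interactively from that JSON with --json and no RPC server, after which the log is grepped for the "TCP Transport Init" notice. A condensed sketch of the same round trip, using the config.json/log.txt paths this test uses:

  # first target: configure over RPC, then snapshot the running configuration
  ./scripts/rpc.py nvmf_create_transport -t tcp
  ./scripts/rpc.py save_config > test/rpc/config.json

  # second target: replay the saved config with the RPC server disabled
  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json > test/rpc/log.txt 2>&1 &
  sleep 5
  grep -q 'TCP Transport Init' test/rpc/log.txt   # transport was recreated from the JSON
)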
00:06:54.180   18:51:25 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay
00:06:54.180   18:51:25 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:54.180   18:51:25 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:54.180   18:51:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:54.180  ************************************
00:06:54.180  START TEST skip_rpc_with_delay
00:06:54.180  ************************************
00:06:54.180   18:51:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay
00:06:54.180   18:51:25 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:06:54.180   18:51:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0
00:06:54.180   18:51:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:06:54.180   18:51:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:06:54.180   18:51:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:54.180    18:51:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:06:54.180   18:51:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:54.180    18:51:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:06:54.180   18:51:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:54.180   18:51:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:06:54.180   18:51:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]]
00:06:54.180   18:51:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:06:54.180  [2024-12-13 18:51:25.925671] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started.
00:06:54.180   18:51:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1
00:06:54.180   18:51:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:06:54.180   18:51:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:06:54.180   18:51:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:06:54.180  
00:06:54.180  real	0m0.105s
00:06:54.180  user	0m0.065s
00:06:54.180  sys	0m0.039s
00:06:54.180   18:51:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:54.180  ************************************
00:06:54.180  END TEST skip_rpc_with_delay
00:06:54.180  ************************************
00:06:54.180   18:51:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x
00:06:54.180    18:51:25 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname
00:06:54.180   18:51:25 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']'
00:06:54.180   18:51:25 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init
00:06:54.180   18:51:25 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:54.180   18:51:25 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:54.180   18:51:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:54.438  ************************************
00:06:54.438  START TEST exit_on_failed_rpc_init
00:06:54.438  ************************************
00:06:54.438   18:51:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init
00:06:54.438   18:51:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=73833
00:06:54.439   18:51:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 73833
00:06:54.439   18:51:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:06:54.439   18:51:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 73833 ']'
00:06:54.439   18:51:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:54.439   18:51:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:54.439  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:54.439   18:51:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:54.439   18:51:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:54.439   18:51:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:06:54.439  [2024-12-13 18:51:26.062164] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:06:54.439  [2024-12-13 18:51:26.062295] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73833 ]
00:06:54.439  [2024-12-13 18:51:26.201186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:54.439  [2024-12-13 18:51:26.232805] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:06:54.697   18:51:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:54.697   18:51:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0
00:06:54.697   18:51:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:06:54.697   18:51:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2
00:06:54.697   18:51:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0
00:06:54.697   18:51:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2
00:06:54.697   18:51:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:06:54.697   18:51:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:54.697    18:51:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:06:54.697   18:51:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:54.697    18:51:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:06:54.697   18:51:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:54.697   18:51:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:06:54.697   18:51:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]]
00:06:54.697   18:51:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2
00:06:54.955  [2024-12-13 18:51:26.566345] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:06:54.956  [2024-12-13 18:51:26.566465] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73850 ]
00:06:54.956  [2024-12-13 18:51:26.721001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:54.956  [2024-12-13 18:51:26.757832] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:06:54.956  [2024-12-13 18:51:26.757955] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.
00:06:54.956  [2024-12-13 18:51:26.757972] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock
00:06:54.956  [2024-12-13 18:51:26.757983] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:06:55.214   18:51:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234
00:06:55.214   18:51:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:06:55.214   18:51:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106
00:06:55.214   18:51:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in
00:06:55.215   18:51:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1
00:06:55.215   18:51:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:06:55.215   18:51:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:06:55.215   18:51:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 73833
00:06:55.215   18:51:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 73833 ']'
00:06:55.215   18:51:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 73833
00:06:55.215    18:51:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname
00:06:55.215   18:51:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:55.215    18:51:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73833
00:06:55.215   18:51:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:55.215  killing process with pid 73833
00:06:55.215   18:51:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:55.215   18:51:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73833'
00:06:55.215   18:51:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 73833
00:06:55.215   18:51:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 73833
00:06:55.473  
00:06:55.473  real	0m1.187s
00:06:55.473  user	0m1.241s
00:06:55.473  sys	0m0.388s
00:06:55.473  ************************************
00:06:55.473  END TEST exit_on_failed_rpc_init
00:06:55.473  ************************************
00:06:55.473   18:51:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:55.473   18:51:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:06:55.473   18:51:27 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:06:55.473  
00:06:55.473  real	0m13.556s
00:06:55.473  user	0m12.519s
00:06:55.473  sys	0m1.525s
00:06:55.473   18:51:27 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:55.473  ************************************
00:06:55.473  END TEST skip_rpc
00:06:55.473  ************************************
00:06:55.473   18:51:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:55.473   18:51:27  -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh
00:06:55.473   18:51:27  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:55.473   18:51:27  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:55.473   18:51:27  -- common/autotest_common.sh@10 -- # set +x
00:06:55.473  ************************************
00:06:55.473  START TEST rpc_client
00:06:55.473  ************************************
00:06:55.473   18:51:27 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh
00:06:55.732  * Looking for test storage...
00:06:55.732  * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client
00:06:55.732    18:51:27 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:06:55.732     18:51:27 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version
00:06:55.732     18:51:27 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:06:55.732    18:51:27 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:06:55.732    18:51:27 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:55.732    18:51:27 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:55.732    18:51:27 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:55.732    18:51:27 rpc_client -- scripts/common.sh@336 -- # IFS=.-:
00:06:55.732    18:51:27 rpc_client -- scripts/common.sh@336 -- # read -ra ver1
00:06:55.732    18:51:27 rpc_client -- scripts/common.sh@337 -- # IFS=.-:
00:06:55.732    18:51:27 rpc_client -- scripts/common.sh@337 -- # read -ra ver2
00:06:55.732    18:51:27 rpc_client -- scripts/common.sh@338 -- # local 'op=<'
00:06:55.732    18:51:27 rpc_client -- scripts/common.sh@340 -- # ver1_l=2
00:06:55.732    18:51:27 rpc_client -- scripts/common.sh@341 -- # ver2_l=1
00:06:55.732    18:51:27 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:55.732    18:51:27 rpc_client -- scripts/common.sh@344 -- # case "$op" in
00:06:55.732    18:51:27 rpc_client -- scripts/common.sh@345 -- # : 1
00:06:55.732    18:51:27 rpc_client -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:55.732    18:51:27 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:55.732     18:51:27 rpc_client -- scripts/common.sh@365 -- # decimal 1
00:06:55.732     18:51:27 rpc_client -- scripts/common.sh@353 -- # local d=1
00:06:55.732     18:51:27 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:55.732     18:51:27 rpc_client -- scripts/common.sh@355 -- # echo 1
00:06:55.732    18:51:27 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1
00:06:55.732     18:51:27 rpc_client -- scripts/common.sh@366 -- # decimal 2
00:06:55.732     18:51:27 rpc_client -- scripts/common.sh@353 -- # local d=2
00:06:55.732     18:51:27 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:55.732     18:51:27 rpc_client -- scripts/common.sh@355 -- # echo 2
00:06:55.732    18:51:27 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2
00:06:55.732    18:51:27 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:55.732    18:51:27 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:55.732    18:51:27 rpc_client -- scripts/common.sh@368 -- # return 0
00:06:55.732    18:51:27 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:55.732    18:51:27 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:06:55.732  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:55.732  		--rc genhtml_branch_coverage=1
00:06:55.732  		--rc genhtml_function_coverage=1
00:06:55.732  		--rc genhtml_legend=1
00:06:55.732  		--rc geninfo_all_blocks=1
00:06:55.732  		--rc geninfo_unexecuted_blocks=1
00:06:55.732  		
00:06:55.732  		'
00:06:55.732    18:51:27 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:06:55.732  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:55.732  		--rc genhtml_branch_coverage=1
00:06:55.732  		--rc genhtml_function_coverage=1
00:06:55.732  		--rc genhtml_legend=1
00:06:55.732  		--rc geninfo_all_blocks=1
00:06:55.732  		--rc geninfo_unexecuted_blocks=1
00:06:55.732  		
00:06:55.732  		'
00:06:55.732    18:51:27 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:06:55.732  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:55.732  		--rc genhtml_branch_coverage=1
00:06:55.732  		--rc genhtml_function_coverage=1
00:06:55.732  		--rc genhtml_legend=1
00:06:55.732  		--rc geninfo_all_blocks=1
00:06:55.732  		--rc geninfo_unexecuted_blocks=1
00:06:55.732  		
00:06:55.732  		'
00:06:55.732    18:51:27 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:06:55.732  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:55.732  		--rc genhtml_branch_coverage=1
00:06:55.732  		--rc genhtml_function_coverage=1
00:06:55.732  		--rc genhtml_legend=1
00:06:55.732  		--rc geninfo_all_blocks=1
00:06:55.732  		--rc geninfo_unexecuted_blocks=1
00:06:55.732  		
00:06:55.732  		'
00:06:55.732   18:51:27 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test
00:06:55.732  OK
00:06:55.732   18:51:27 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT
00:06:55.732  
00:06:55.732  real	0m0.206s
00:06:55.732  user	0m0.124s
00:06:55.732  sys	0m0.093s
00:06:55.732   18:51:27 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:55.732  ************************************
00:06:55.732  END TEST rpc_client
00:06:55.732  ************************************
00:06:55.732   18:51:27 rpc_client -- common/autotest_common.sh@10 -- # set +x
00:06:55.732   18:51:27  -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh
00:06:55.732   18:51:27  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:55.732   18:51:27  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:55.732   18:51:27  -- common/autotest_common.sh@10 -- # set +x
00:06:55.732  ************************************
00:06:55.732  START TEST json_config
00:06:55.732  ************************************
00:06:55.732   18:51:27 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh
00:06:55.992    18:51:27 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:06:55.992     18:51:27 json_config -- common/autotest_common.sh@1711 -- # lcov --version
00:06:55.992     18:51:27 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:06:55.992    18:51:27 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:06:55.992    18:51:27 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:55.992    18:51:27 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:55.992    18:51:27 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:55.992    18:51:27 json_config -- scripts/common.sh@336 -- # IFS=.-:
00:06:55.992    18:51:27 json_config -- scripts/common.sh@336 -- # read -ra ver1
00:06:55.992    18:51:27 json_config -- scripts/common.sh@337 -- # IFS=.-:
00:06:55.992    18:51:27 json_config -- scripts/common.sh@337 -- # read -ra ver2
00:06:55.992    18:51:27 json_config -- scripts/common.sh@338 -- # local 'op=<'
00:06:55.992    18:51:27 json_config -- scripts/common.sh@340 -- # ver1_l=2
00:06:55.992    18:51:27 json_config -- scripts/common.sh@341 -- # ver2_l=1
00:06:55.992    18:51:27 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:55.992    18:51:27 json_config -- scripts/common.sh@344 -- # case "$op" in
00:06:55.992    18:51:27 json_config -- scripts/common.sh@345 -- # : 1
00:06:55.992    18:51:27 json_config -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:55.992    18:51:27 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:55.992     18:51:27 json_config -- scripts/common.sh@365 -- # decimal 1
00:06:55.992     18:51:27 json_config -- scripts/common.sh@353 -- # local d=1
00:06:55.992     18:51:27 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:55.992     18:51:27 json_config -- scripts/common.sh@355 -- # echo 1
00:06:55.992    18:51:27 json_config -- scripts/common.sh@365 -- # ver1[v]=1
00:06:55.992     18:51:27 json_config -- scripts/common.sh@366 -- # decimal 2
00:06:55.992     18:51:27 json_config -- scripts/common.sh@353 -- # local d=2
00:06:55.992     18:51:27 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:55.992     18:51:27 json_config -- scripts/common.sh@355 -- # echo 2
00:06:55.992    18:51:27 json_config -- scripts/common.sh@366 -- # ver2[v]=2
00:06:55.992    18:51:27 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:55.992    18:51:27 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:55.992    18:51:27 json_config -- scripts/common.sh@368 -- # return 0
00:06:55.992    18:51:27 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:55.992    18:51:27 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:06:55.992  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:55.992  		--rc genhtml_branch_coverage=1
00:06:55.992  		--rc genhtml_function_coverage=1
00:06:55.992  		--rc genhtml_legend=1
00:06:55.992  		--rc geninfo_all_blocks=1
00:06:55.992  		--rc geninfo_unexecuted_blocks=1
00:06:55.992  		
00:06:55.992  		'
00:06:55.992    18:51:27 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:06:55.992  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:55.992  		--rc genhtml_branch_coverage=1
00:06:55.992  		--rc genhtml_function_coverage=1
00:06:55.992  		--rc genhtml_legend=1
00:06:55.992  		--rc geninfo_all_blocks=1
00:06:55.992  		--rc geninfo_unexecuted_blocks=1
00:06:55.992  		
00:06:55.992  		'
00:06:55.992    18:51:27 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:06:55.992  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:55.992  		--rc genhtml_branch_coverage=1
00:06:55.992  		--rc genhtml_function_coverage=1
00:06:55.992  		--rc genhtml_legend=1
00:06:55.992  		--rc geninfo_all_blocks=1
00:06:55.992  		--rc geninfo_unexecuted_blocks=1
00:06:55.992  		
00:06:55.992  		'
00:06:55.992    18:51:27 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:06:55.992  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:55.992  		--rc genhtml_branch_coverage=1
00:06:55.992  		--rc genhtml_function_coverage=1
00:06:55.992  		--rc genhtml_legend=1
00:06:55.992  		--rc geninfo_all_blocks=1
00:06:55.992  		--rc geninfo_unexecuted_blocks=1
00:06:55.992  		
00:06:55.992  		'
00:06:55.992   18:51:27 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:06:55.992     18:51:27 json_config -- nvmf/common.sh@7 -- # uname -s
00:06:55.992    18:51:27 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:06:55.992    18:51:27 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:06:55.992    18:51:27 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:06:55.992    18:51:27 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:06:55.992    18:51:27 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:06:55.992    18:51:27 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:06:55.992    18:51:27 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:06:55.992    18:51:27 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:06:55.992    18:51:27 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:06:55.992     18:51:27 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:06:55.992    18:51:27 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:06:55.992    18:51:27 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:06:55.992    18:51:27 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:06:55.992    18:51:27 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:06:55.992    18:51:27 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:06:55.992    18:51:27 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:06:55.992    18:51:27 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:06:55.992     18:51:27 json_config -- scripts/common.sh@15 -- # shopt -s extglob
00:06:55.992     18:51:27 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:06:55.992     18:51:27 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:06:55.992     18:51:27 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:06:55.992      18:51:27 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:55.992      18:51:27 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:55.992      18:51:27 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:55.992      18:51:27 json_config -- paths/export.sh@5 -- # export PATH
00:06:55.992      18:51:27 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:55.992    18:51:27 json_config -- nvmf/common.sh@51 -- # : 0
00:06:55.992    18:51:27 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:06:55.992    18:51:27 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:06:55.992    18:51:27 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:06:55.992    18:51:27 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:06:55.992    18:51:27 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:06:55.992    18:51:27 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:06:55.992  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:06:55.992    18:51:27 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:06:55.992    18:51:27 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:06:55.992    18:51:27 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0
00:06:55.992   18:51:27 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh
00:06:55.992   18:51:27 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]]
00:06:55.993   18:51:27 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]]
00:06:55.993   18:51:27 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]]
00:06:55.993   18:51:27 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + 	SPDK_TEST_ISCSI + 	SPDK_TEST_NVMF + 	SPDK_TEST_VHOST + 	SPDK_TEST_VHOST_INIT + 	SPDK_TEST_RBD == 0 ))
00:06:55.993   18:51:27 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='')
00:06:55.993   18:51:27 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid
00:06:55.993   18:51:27 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock')
00:06:55.993   18:51:27 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket
00:06:55.993   18:51:27 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024')
00:06:55.993   18:51:27 json_config -- json_config/json_config.sh@33 -- # declare -A app_params
00:06:55.993   18:51:27 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json')
00:06:55.993   18:51:27 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path
00:06:55.993   18:51:27 json_config -- json_config/json_config.sh@40 -- # last_event_id=0
00:06:55.993   18:51:27 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:06:55.993  INFO: JSON configuration test init
00:06:55.993   18:51:27 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init'
00:06:55.993   18:51:27 json_config -- json_config/json_config.sh@364 -- # json_config_test_init
00:06:55.993   18:51:27 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init
00:06:55.993   18:51:27 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:06:55.993   18:51:27 json_config -- common/autotest_common.sh@10 -- # set +x
00:06:55.993   18:51:27 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target
00:06:55.993   18:51:27 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:06:55.993   18:51:27 json_config -- common/autotest_common.sh@10 -- # set +x
00:06:55.993   18:51:27 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc
00:06:55.993   18:51:27 json_config -- json_config/common.sh@9 -- # local app=target
00:06:55.993   18:51:27 json_config -- json_config/common.sh@10 -- # shift
00:06:55.993   18:51:27 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:06:55.993   18:51:27 json_config -- json_config/common.sh@13 -- # [[ -z '' ]]
00:06:55.993   18:51:27 json_config -- json_config/common.sh@15 -- # local app_extra_params=
00:06:55.993   18:51:27 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:06:55.993   18:51:27 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:06:55.993   18:51:27 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=73989
00:06:55.993  Waiting for target to run...
00:06:55.993   18:51:27 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:06:55.993   18:51:27 json_config -- json_config/common.sh@25 -- # waitforlisten 73989 /var/tmp/spdk_tgt.sock
00:06:55.993   18:51:27 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc
00:06:55.993   18:51:27 json_config -- common/autotest_common.sh@835 -- # '[' -z 73989 ']'
00:06:55.993   18:51:27 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:06:55.993   18:51:27 json_config -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:55.993  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:06:55.993   18:51:27 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:06:55.993   18:51:27 json_config -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:55.993   18:51:27 json_config -- common/autotest_common.sh@10 -- # set +x
00:06:56.252  [2024-12-13 18:51:27.824406] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:06:56.252  [2024-12-13 18:51:27.824970] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73989 ]
00:06:56.510  [2024-12-13 18:51:28.272946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:56.510  [2024-12-13 18:51:28.299859] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:06:57.077   18:51:28 json_config -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:57.077  
00:06:57.077   18:51:28 json_config -- common/autotest_common.sh@868 -- # return 0
00:06:57.077   18:51:28 json_config -- json_config/common.sh@26 -- # echo ''
00:06:57.078   18:51:28 json_config -- json_config/json_config.sh@276 -- # create_accel_config
00:06:57.078   18:51:28 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config
00:06:57.078   18:51:28 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:06:57.078   18:51:28 json_config -- common/autotest_common.sh@10 -- # set +x
00:06:57.078   18:51:28 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]]
00:06:57.078   18:51:28 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config
00:06:57.078   18:51:28 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:06:57.078   18:51:28 json_config -- common/autotest_common.sh@10 -- # set +x
00:06:57.336   18:51:28 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems
00:06:57.336   18:51:28 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config
00:06:57.336   18:51:28 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config
00:06:57.903   18:51:29 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types
00:06:57.903   18:51:29 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types
00:06:57.903   18:51:29 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:06:57.903   18:51:29 json_config -- common/autotest_common.sh@10 -- # set +x
00:06:57.903   18:51:29 json_config -- json_config/json_config.sh@45 -- # local ret=0
00:06:57.903   18:51:29 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister')
00:06:57.904   18:51:29 json_config -- json_config/json_config.sh@46 -- # local enabled_types
00:06:57.904   18:51:29 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]]
00:06:57.904   18:51:29 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister")
00:06:57.904    18:51:29 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types
00:06:57.904    18:51:29 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]'
00:06:57.904    18:51:29 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types
00:06:58.162   18:51:29 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister')
00:06:58.162   18:51:29 json_config -- json_config/json_config.sh@51 -- # local get_types
00:06:58.162   18:51:29 json_config -- json_config/json_config.sh@53 -- # local type_diff
00:06:58.162    18:51:29 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister
00:06:58.162    18:51:29 json_config -- json_config/json_config.sh@54 -- # sort
00:06:58.162    18:51:29 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n'
00:06:58.162    18:51:29 json_config -- json_config/json_config.sh@54 -- # uniq -u
00:06:58.162   18:51:29 json_config -- json_config/json_config.sh@54 -- # type_diff=
00:06:58.162   18:51:29 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]]
00:06:58.162   18:51:29 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types
00:06:58.162   18:51:29 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:06:58.162   18:51:29 json_config -- common/autotest_common.sh@10 -- # set +x
00:06:58.162   18:51:29 json_config -- json_config/json_config.sh@62 -- # return 0
00:06:58.162   18:51:29 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]]
00:06:58.162   18:51:29 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]]
00:06:58.162   18:51:29 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]]
00:06:58.162   18:51:29 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]]
00:06:58.162   18:51:29 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config
00:06:58.162   18:51:29 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config
00:06:58.162   18:51:29 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:06:58.162   18:51:29 json_config -- common/autotest_common.sh@10 -- # set +x
00:06:58.162   18:51:29 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1
00:06:58.162   18:51:29 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]]
00:06:58.162   18:51:29 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]]
00:06:58.162   18:51:29 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0
00:06:58.162   18:51:29 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
00:06:58.420  MallocForNvmf0
00:06:58.420   18:51:30 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
00:06:58.420   18:51:30 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
00:06:58.677  MallocForNvmf1
00:06:58.677   18:51:30 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0
00:06:58.677   18:51:30 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0
00:06:58.935  [2024-12-13 18:51:30.580121] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:06:58.935   18:51:30 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:06:58.935   18:51:30 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:06:59.194   18:51:30 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
00:06:59.194   18:51:30 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
00:06:59.453   18:51:31 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
00:06:59.453   18:51:31 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
00:06:59.453   18:51:31 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
00:06:59.453   18:51:31 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
00:07:00.021  [2024-12-13 18:51:31.540718] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:07:00.021   18:51:31 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config
00:07:00.021   18:51:31 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:07:00.021   18:51:31 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:00.021   18:51:31 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target
00:07:00.021   18:51:31 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:07:00.021   18:51:31 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:00.021   18:51:31 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]]
00:07:00.021   18:51:31 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:07:00.021   18:51:31 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:07:00.021  MallocBdevForConfigChangeCheck
00:07:00.280   18:51:31 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init
00:07:00.280   18:51:31 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:07:00.280   18:51:31 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:00.280   18:51:31 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config
00:07:00.280   18:51:31 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:07:00.539  INFO: shutting down applications...
00:07:00.539   18:51:32 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...'
00:07:00.539   18:51:32 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]]
00:07:00.539   18:51:32 json_config -- json_config/json_config.sh@375 -- # json_config_clear target
00:07:00.539   18:51:32 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]]
00:07:00.539   18:51:32 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
00:07:01.106  Calling clear_iscsi_subsystem
00:07:01.106  Calling clear_nvmf_subsystem
00:07:01.106  Calling clear_nbd_subsystem
00:07:01.106  Calling clear_ublk_subsystem
00:07:01.106  Calling clear_vhost_blk_subsystem
00:07:01.106  Calling clear_vhost_scsi_subsystem
00:07:01.106  Calling clear_bdev_subsystem
00:07:01.106   18:51:32 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
00:07:01.106   18:51:32 json_config -- json_config/json_config.sh@350 -- # count=100
00:07:01.106   18:51:32 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']'
00:07:01.106   18:51:32 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:07:01.106   18:51:32 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters
00:07:01.106   18:51:32 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty
00:07:01.366   18:51:33 json_config -- json_config/json_config.sh@352 -- # break
00:07:01.366   18:51:33 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']'
00:07:01.366   18:51:33 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target
00:07:01.366   18:51:33 json_config -- json_config/common.sh@31 -- # local app=target
00:07:01.366   18:51:33 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:07:01.366   18:51:33 json_config -- json_config/common.sh@35 -- # [[ -n 73989 ]]
00:07:01.366   18:51:33 json_config -- json_config/common.sh@38 -- # kill -SIGINT 73989
00:07:01.366   18:51:33 json_config -- json_config/common.sh@40 -- # (( i = 0 ))
00:07:01.366   18:51:33 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:07:01.366   18:51:33 json_config -- json_config/common.sh@41 -- # kill -0 73989
00:07:01.366   18:51:33 json_config -- json_config/common.sh@45 -- # sleep 0.5
00:07:01.938   18:51:33 json_config -- json_config/common.sh@40 -- # (( i++ ))
00:07:01.938   18:51:33 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:07:01.938   18:51:33 json_config -- json_config/common.sh@41 -- # kill -0 73989
00:07:01.938   18:51:33 json_config -- json_config/common.sh@42 -- # app_pid["$app"]=
00:07:01.938   18:51:33 json_config -- json_config/common.sh@43 -- # break
00:07:01.938  SPDK target shutdown done
00:07:01.938   18:51:33 json_config -- json_config/common.sh@48 -- # [[ -n '' ]]
00:07:01.938   18:51:33 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:07:01.938  INFO: relaunching applications...
00:07:01.938   18:51:33 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...'
00:07:01.938   18:51:33 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
00:07:01.938   18:51:33 json_config -- json_config/common.sh@9 -- # local app=target
00:07:01.938   18:51:33 json_config -- json_config/common.sh@10 -- # shift
00:07:01.938   18:51:33 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:07:01.938   18:51:33 json_config -- json_config/common.sh@13 -- # [[ -z '' ]]
00:07:01.938   18:51:33 json_config -- json_config/common.sh@15 -- # local app_extra_params=
00:07:01.938   18:51:33 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:07:01.938   18:51:33 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:07:01.938   18:51:33 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=74269
00:07:01.938  Waiting for target to run...
00:07:01.938   18:51:33 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
00:07:01.938   18:51:33 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:07:01.938   18:51:33 json_config -- json_config/common.sh@25 -- # waitforlisten 74269 /var/tmp/spdk_tgt.sock
00:07:01.938   18:51:33 json_config -- common/autotest_common.sh@835 -- # '[' -z 74269 ']'
00:07:01.938   18:51:33 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:07:01.938   18:51:33 json_config -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:01.938  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:07:01.938   18:51:33 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:07:01.938   18:51:33 json_config -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:01.938   18:51:33 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:01.938  [2024-12-13 18:51:33.675532] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:07:01.938  [2024-12-13 18:51:33.675636] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74269 ]
00:07:02.505  [2024-12-13 18:51:34.125305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:02.505  [2024-12-13 18:51:34.158896] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:07:02.763  [2024-12-13 18:51:34.494477] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:02.763  [2024-12-13 18:51:34.526546] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:07:03.022  
00:07:03.022  INFO: Checking if target configuration is the same...
00:07:03.022   18:51:34 json_config -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:03.022   18:51:34 json_config -- common/autotest_common.sh@868 -- # return 0
00:07:03.022   18:51:34 json_config -- json_config/common.sh@26 -- # echo ''
00:07:03.022   18:51:34 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]]
00:07:03.022   18:51:34 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...'
00:07:03.022   18:51:34 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
00:07:03.022    18:51:34 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config
00:07:03.022    18:51:34 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:07:03.022  + '[' 2 -ne 2 ']'
00:07:03.022  +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh
00:07:03.022  ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../..
00:07:03.022  + rootdir=/home/vagrant/spdk_repo/spdk
00:07:03.022  +++ basename /dev/fd/62
00:07:03.022  ++ mktemp /tmp/62.XXX
00:07:03.022  + tmp_file_1=/tmp/62.7wj
00:07:03.022  +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
00:07:03.022  ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:07:03.022  + tmp_file_2=/tmp/spdk_tgt_config.json.Hwk
00:07:03.022  + ret=0
00:07:03.022  + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort
00:07:03.281  + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort
00:07:03.538  + diff -u /tmp/62.7wj /tmp/spdk_tgt_config.json.Hwk
00:07:03.538  INFO: JSON config files are the same
00:07:03.538  + echo 'INFO: JSON config files are the same'
00:07:03.538  + rm /tmp/62.7wj /tmp/spdk_tgt_config.json.Hwk
00:07:03.538  + exit 0
00:07:03.538  INFO: changing configuration and checking if this can be detected...
00:07:03.538   18:51:35 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]]
00:07:03.538   18:51:35 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...'
00:07:03.538   18:51:35 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck
00:07:03.538   18:51:35 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
00:07:03.796    18:51:35 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config
00:07:03.796   18:51:35 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
00:07:03.796    18:51:35 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:07:03.796  + '[' 2 -ne 2 ']'
00:07:03.796  +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh
00:07:03.796  ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../..
00:07:03.796  + rootdir=/home/vagrant/spdk_repo/spdk
00:07:03.796  +++ basename /dev/fd/62
00:07:03.796  ++ mktemp /tmp/62.XXX
00:07:03.796  + tmp_file_1=/tmp/62.ZiK
00:07:03.796  +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
00:07:03.796  ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:07:03.796  + tmp_file_2=/tmp/spdk_tgt_config.json.soe
00:07:03.796  + ret=0
00:07:03.796  + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort
00:07:04.055  + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort
00:07:04.313  + diff -u /tmp/62.ZiK /tmp/spdk_tgt_config.json.soe
00:07:04.313  + ret=1
00:07:04.313  + echo '=== Start of file: /tmp/62.ZiK ==='
00:07:04.313  + cat /tmp/62.ZiK
00:07:04.313  + echo '=== End of file: /tmp/62.ZiK ==='
00:07:04.313  + echo ''
00:07:04.313  + echo '=== Start of file: /tmp/spdk_tgt_config.json.soe ==='
00:07:04.313  + cat /tmp/spdk_tgt_config.json.soe
00:07:04.313  + echo '=== End of file: /tmp/spdk_tgt_config.json.soe ==='
00:07:04.313  + echo ''
00:07:04.313  + rm /tmp/62.ZiK /tmp/spdk_tgt_config.json.soe
00:07:04.313  + exit 1
00:07:04.313  INFO: configuration change detected.
00:07:04.313   18:51:35 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.'
00:07:04.313   18:51:35 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini
00:07:04.313   18:51:35 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini
00:07:04.313   18:51:35 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:07:04.313   18:51:35 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:04.313   18:51:35 json_config -- json_config/json_config.sh@314 -- # local ret=0
00:07:04.313   18:51:35 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]]
00:07:04.313   18:51:35 json_config -- json_config/json_config.sh@324 -- # [[ -n 74269 ]]
00:07:04.313   18:51:35 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config
00:07:04.313   18:51:35 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config
00:07:04.313   18:51:35 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:07:04.313   18:51:35 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:04.313   18:51:35 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]]
00:07:04.313    18:51:35 json_config -- json_config/json_config.sh@200 -- # uname -s
00:07:04.313   18:51:35 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]]
00:07:04.313   18:51:35 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio
00:07:04.313   18:51:35 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]]
00:07:04.313   18:51:35 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config
00:07:04.313   18:51:35 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:07:04.313   18:51:35 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:04.313   18:51:35 json_config -- json_config/json_config.sh@330 -- # killprocess 74269
00:07:04.313   18:51:35 json_config -- common/autotest_common.sh@954 -- # '[' -z 74269 ']'
00:07:04.313   18:51:35 json_config -- common/autotest_common.sh@958 -- # kill -0 74269
00:07:04.313    18:51:35 json_config -- common/autotest_common.sh@959 -- # uname
00:07:04.313   18:51:35 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:04.313    18:51:35 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74269
00:07:04.313  killing process with pid 74269
00:07:04.313   18:51:35 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:04.313   18:51:35 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:04.313   18:51:35 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74269'
00:07:04.313   18:51:35 json_config -- common/autotest_common.sh@973 -- # kill 74269
00:07:04.313   18:51:35 json_config -- common/autotest_common.sh@978 -- # wait 74269
00:07:04.572   18:51:36 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
00:07:04.572   18:51:36 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini
00:07:04.572   18:51:36 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:07:04.572   18:51:36 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:04.572   18:51:36 json_config -- json_config/json_config.sh@335 -- # return 0
00:07:04.572  INFO: Success
00:07:04.572   18:51:36 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success'
00:07:04.572  ************************************
00:07:04.572  END TEST json_config
00:07:04.572  ************************************
00:07:04.572  
00:07:04.572  real	0m8.692s
00:07:04.572  user	0m12.389s
00:07:04.572  sys	0m1.914s
00:07:04.572   18:51:36 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:04.572   18:51:36 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:04.572   18:51:36  -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh
00:07:04.572   18:51:36  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:04.572   18:51:36  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:04.572   18:51:36  -- common/autotest_common.sh@10 -- # set +x
00:07:04.572  ************************************
00:07:04.572  START TEST json_config_extra_key
00:07:04.572  ************************************
00:07:04.572   18:51:36 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh
00:07:04.572    18:51:36 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:07:04.572     18:51:36 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version
00:07:04.572     18:51:36 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:07:04.832    18:51:36 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:07:04.832    18:51:36 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:04.832    18:51:36 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:04.832    18:51:36 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:04.832    18:51:36 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-:
00:07:04.832    18:51:36 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1
00:07:04.832    18:51:36 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-:
00:07:04.832    18:51:36 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2
00:07:04.832    18:51:36 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<'
00:07:04.832    18:51:36 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2
00:07:04.832    18:51:36 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1
00:07:04.832    18:51:36 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:04.832    18:51:36 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in
00:07:04.832    18:51:36 json_config_extra_key -- scripts/common.sh@345 -- # : 1
00:07:04.832    18:51:36 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:04.832    18:51:36 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:04.832     18:51:36 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1
00:07:04.832     18:51:36 json_config_extra_key -- scripts/common.sh@353 -- # local d=1
00:07:04.832     18:51:36 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:04.832     18:51:36 json_config_extra_key -- scripts/common.sh@355 -- # echo 1
00:07:04.832    18:51:36 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1
00:07:04.832     18:51:36 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2
00:07:04.832     18:51:36 json_config_extra_key -- scripts/common.sh@353 -- # local d=2
00:07:04.832     18:51:36 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:04.832     18:51:36 json_config_extra_key -- scripts/common.sh@355 -- # echo 2
00:07:04.832    18:51:36 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2
00:07:04.832    18:51:36 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:04.832    18:51:36 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:04.832    18:51:36 json_config_extra_key -- scripts/common.sh@368 -- # return 0
00:07:04.832    18:51:36 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:04.832    18:51:36 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:07:04.832  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:04.832  		--rc genhtml_branch_coverage=1
00:07:04.832  		--rc genhtml_function_coverage=1
00:07:04.832  		--rc genhtml_legend=1
00:07:04.832  		--rc geninfo_all_blocks=1
00:07:04.832  		--rc geninfo_unexecuted_blocks=1
00:07:04.832  		
00:07:04.832  		'
00:07:04.832    18:51:36 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:07:04.832  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:04.832  		--rc genhtml_branch_coverage=1
00:07:04.832  		--rc genhtml_function_coverage=1
00:07:04.832  		--rc genhtml_legend=1
00:07:04.832  		--rc geninfo_all_blocks=1
00:07:04.832  		--rc geninfo_unexecuted_blocks=1
00:07:04.832  		
00:07:04.832  		'
00:07:04.832    18:51:36 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:07:04.832  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:04.832  		--rc genhtml_branch_coverage=1
00:07:04.832  		--rc genhtml_function_coverage=1
00:07:04.832  		--rc genhtml_legend=1
00:07:04.832  		--rc geninfo_all_blocks=1
00:07:04.832  		--rc geninfo_unexecuted_blocks=1
00:07:04.832  		
00:07:04.832  		'
00:07:04.832    18:51:36 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:07:04.832  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:04.832  		--rc genhtml_branch_coverage=1
00:07:04.832  		--rc genhtml_function_coverage=1
00:07:04.832  		--rc genhtml_legend=1
00:07:04.832  		--rc geninfo_all_blocks=1
00:07:04.832  		--rc geninfo_unexecuted_blocks=1
00:07:04.832  		
00:07:04.832  		'
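Editor's note: the cmp_versions trace above splits "1.15" and "2" on ".-:" and compares them field by field to decide whether the installed lcov predates version 2 (and therefore needs the extra --rc coverage options). A minimal sketch of that field-by-field comparison, where ver_lt is a hypothetical helper rather than the actual scripts/common.sh code:

    # Return 0 if version $1 is numerically older than version $2.
    ver_lt() {
        local IFS=.-: i
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
            local x=${a[i]:-0} y=${b[i]:-0}
            (( x < y )) && return 0      # first differing field decides
            (( x > y )) && return 1
        done
        return 1                          # equal versions are not "less than"
    }
    # Mirrors the trace above, where "lt 1.15 2" succeeds:
    ver_lt 1.15 2 && echo "lcov older than 2: enable branch/function coverage opts"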
00:07:04.832   18:51:36 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:07:04.832     18:51:36 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s
00:07:04.832    18:51:36 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:07:04.832    18:51:36 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:07:04.832    18:51:36 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:07:04.832    18:51:36 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:07:04.832    18:51:36 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:07:04.832    18:51:36 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:07:04.832    18:51:36 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:07:04.832    18:51:36 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:07:04.832    18:51:36 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:07:04.832     18:51:36 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:07:04.832    18:51:36 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:07:04.832    18:51:36 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:07:04.832    18:51:36 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:07:04.832    18:51:36 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:07:04.832    18:51:36 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:07:04.832    18:51:36 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:07:04.832    18:51:36 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:07:04.832     18:51:36 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob
00:07:04.832     18:51:36 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:07:04.832     18:51:36 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:07:04.833     18:51:36 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:07:04.833      18:51:36 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:04.833      18:51:36 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:04.833      18:51:36 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:04.833      18:51:36 json_config_extra_key -- paths/export.sh@5 -- # export PATH
00:07:04.833      18:51:36 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:04.833    18:51:36 json_config_extra_key -- nvmf/common.sh@51 -- # : 0
00:07:04.833    18:51:36 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:07:04.833    18:51:36 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:07:04.833    18:51:36 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:07:04.833    18:51:36 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:07:04.833    18:51:36 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:07:04.833    18:51:36 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:07:04.833  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:07:04.833    18:51:36 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:07:04.833    18:51:36 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:07:04.833    18:51:36 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0
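Editor's note: the "[: : integer expression expected" line above is emitted because nvmf/common.sh line 33 runs '[' '' -eq 1 ']' with an empty value; the test still takes the intended false branch, so the run is unaffected. A hedged sketch of the usual way to make such a numeric check robust (maybe_flag is purely illustrative, not an SPDK variable):

    # Illustrative only: default empty/unset values so [ always sees an integer.
    maybe_flag=""
    if [ "${maybe_flag:-0}" -eq 1 ]; then
        echo "feature enabled"
    else
        echo "feature disabled"
    fi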
00:07:04.833   18:51:36 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh
00:07:04.833   18:51:36 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='')
00:07:04.833   18:51:36 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid
00:07:04.833   18:51:36 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock')
00:07:04.833   18:51:36 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket
00:07:04.833   18:51:36 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024')
00:07:04.833   18:51:36 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params
00:07:04.833   18:51:36 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json')
00:07:04.833   18:51:36 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path
00:07:04.833   18:51:36 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:07:04.833  INFO: launching applications...
00:07:04.833   18:51:36 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...'
00:07:04.833   18:51:36 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json
00:07:04.833   18:51:36 json_config_extra_key -- json_config/common.sh@9 -- # local app=target
00:07:04.833   18:51:36 json_config_extra_key -- json_config/common.sh@10 -- # shift
00:07:04.833   18:51:36 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:07:04.833   18:51:36 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]]
00:07:04.833   18:51:36 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params=
00:07:04.833   18:51:36 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:07:04.833   18:51:36 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:07:04.833   18:51:36 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=74453
00:07:04.833  Waiting for target to run...
00:07:04.833   18:51:36 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:07:04.833   18:51:36 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json
00:07:04.833   18:51:36 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 74453 /var/tmp/spdk_tgt.sock
00:07:04.833   18:51:36 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 74453 ']'
00:07:04.833   18:51:36 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:07:04.833   18:51:36 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:04.833  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:07:04.833   18:51:36 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:07:04.833   18:51:36 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:04.833   18:51:36 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:07:04.833  [2024-12-13 18:51:36.557879] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:07:04.833  [2024-12-13 18:51:36.558000] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74453 ]
00:07:05.400  [2024-12-13 18:51:36.995468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:05.400  [2024-12-13 18:51:37.020701] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:07:05.967   18:51:37 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:05.967   18:51:37 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0
00:07:05.967  
00:07:05.967   18:51:37 json_config_extra_key -- json_config/common.sh@26 -- # echo ''
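Editor's note: waitforlisten above blocks until the freshly launched spdk_tgt (pid 74453) is reachable on /var/tmp/spdk_tgt.sock, bounded by max_retries=100. A rough sketch of that readiness-polling pattern, assuming only a pid and a socket path; wait_for_sock is a hypothetical stand-in for the real helper:

    # Poll until the UNIX socket exists while the target process is still alive.
    wait_for_sock() {
        local pid=$1 sock=$2 retries=${3:-100}
        while (( retries-- > 0 )); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died before listening
            [ -S "$sock" ] && return 0               # socket present: ready for RPC
            sleep 0.1
        done
        return 1                                      # timed out
    }
    # Usage mirroring the log: wait_for_sock "$app_pid" /var/tmp/spdk_tgt.sock 100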
00:07:05.967  INFO: shutting down applications...
00:07:05.967   18:51:37 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...'
00:07:05.967   18:51:37 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target
00:07:05.967   18:51:37 json_config_extra_key -- json_config/common.sh@31 -- # local app=target
00:07:05.967   18:51:37 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:07:05.967   18:51:37 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 74453 ]]
00:07:05.967   18:51:37 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 74453
00:07:05.967   18:51:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 ))
00:07:05.967   18:51:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:07:05.967   18:51:37 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 74453
00:07:05.967   18:51:37 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:07:06.535   18:51:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:07:06.535   18:51:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:07:06.535   18:51:38 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 74453
00:07:06.535   18:51:38 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]=
00:07:06.535   18:51:38 json_config_extra_key -- json_config/common.sh@43 -- # break
00:07:06.535   18:51:38 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]]
00:07:06.535  SPDK target shutdown done
00:07:06.535   18:51:38 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:07:06.535  Success
00:07:06.535   18:51:38 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success
00:07:06.535  ************************************
00:07:06.535  END TEST json_config_extra_key
00:07:06.535  ************************************
00:07:06.535  
00:07:06.535  real	0m1.782s
00:07:06.535  user	0m1.656s
00:07:06.535  sys	0m0.476s
00:07:06.535   18:51:38 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:06.535   18:51:38 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
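Editor's note: the shutdown above sends SIGINT to pid 74453, then polls kill -0 in half-second steps (up to 30 iterations) before printing "SPDK target shutdown done". A condensed sketch of that polling loop, with shutdown_app as an illustrative name rather than the json_config/common.sh function itself:

    # Graceful shutdown: SIGINT first, then wait up to ~15 s before escalating.
    shutdown_app() {
        local pid=$1 i
        kill -SIGINT "$pid" 2>/dev/null || return 0   # already gone
        for (( i = 0; i < 30; i++ )); do
            kill -0 "$pid" 2>/dev/null || return 0    # process exited cleanly
            sleep 0.5
        done
        echo "app $pid did not stop; escalating" >&2
        kill -SIGKILL "$pid" 2>/dev/null
    }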
00:07:06.535   18:51:38  -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:07:06.535   18:51:38  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:06.535   18:51:38  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:06.535   18:51:38  -- common/autotest_common.sh@10 -- # set +x
00:07:06.535  ************************************
00:07:06.535  START TEST alias_rpc
00:07:06.535  ************************************
00:07:06.535   18:51:38 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:07:06.535  * Looking for test storage...
00:07:06.535  * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc
00:07:06.535    18:51:38 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:07:06.535     18:51:38 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:07:06.535     18:51:38 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:07:06.535    18:51:38 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:07:06.535    18:51:38 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:06.535    18:51:38 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:06.535    18:51:38 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:06.535    18:51:38 alias_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:07:06.535    18:51:38 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:07:06.535    18:51:38 alias_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:07:06.535    18:51:38 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:07:06.535    18:51:38 alias_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:07:06.535    18:51:38 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:07:06.535    18:51:38 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:07:06.535    18:51:38 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:06.535    18:51:38 alias_rpc -- scripts/common.sh@344 -- # case "$op" in
00:07:06.535    18:51:38 alias_rpc -- scripts/common.sh@345 -- # : 1
00:07:06.535    18:51:38 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:06.535    18:51:38 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:06.535     18:51:38 alias_rpc -- scripts/common.sh@365 -- # decimal 1
00:07:06.535     18:51:38 alias_rpc -- scripts/common.sh@353 -- # local d=1
00:07:06.535     18:51:38 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:06.535     18:51:38 alias_rpc -- scripts/common.sh@355 -- # echo 1
00:07:06.535    18:51:38 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:07:06.535     18:51:38 alias_rpc -- scripts/common.sh@366 -- # decimal 2
00:07:06.535     18:51:38 alias_rpc -- scripts/common.sh@353 -- # local d=2
00:07:06.535     18:51:38 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:06.535     18:51:38 alias_rpc -- scripts/common.sh@355 -- # echo 2
00:07:06.535    18:51:38 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:07:06.535    18:51:38 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:06.535    18:51:38 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:06.535    18:51:38 alias_rpc -- scripts/common.sh@368 -- # return 0
00:07:06.535    18:51:38 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:06.535    18:51:38 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:07:06.535  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:06.535  		--rc genhtml_branch_coverage=1
00:07:06.535  		--rc genhtml_function_coverage=1
00:07:06.535  		--rc genhtml_legend=1
00:07:06.535  		--rc geninfo_all_blocks=1
00:07:06.535  		--rc geninfo_unexecuted_blocks=1
00:07:06.535  		
00:07:06.535  		'
00:07:06.535    18:51:38 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:07:06.535  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:06.535  		--rc genhtml_branch_coverage=1
00:07:06.535  		--rc genhtml_function_coverage=1
00:07:06.535  		--rc genhtml_legend=1
00:07:06.535  		--rc geninfo_all_blocks=1
00:07:06.535  		--rc geninfo_unexecuted_blocks=1
00:07:06.535  		
00:07:06.535  		'
00:07:06.535    18:51:38 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:07:06.535  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:06.535  		--rc genhtml_branch_coverage=1
00:07:06.535  		--rc genhtml_function_coverage=1
00:07:06.535  		--rc genhtml_legend=1
00:07:06.535  		--rc geninfo_all_blocks=1
00:07:06.535  		--rc geninfo_unexecuted_blocks=1
00:07:06.535  		
00:07:06.535  		'
00:07:06.535    18:51:38 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:07:06.535  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:06.535  		--rc genhtml_branch_coverage=1
00:07:06.535  		--rc genhtml_function_coverage=1
00:07:06.535  		--rc genhtml_legend=1
00:07:06.535  		--rc geninfo_all_blocks=1
00:07:06.535  		--rc geninfo_unexecuted_blocks=1
00:07:06.535  		
00:07:06.535  		'
00:07:06.535   18:51:38 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:07:06.535   18:51:38 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=74537
00:07:06.535   18:51:38 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 74537
00:07:06.535   18:51:38 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 74537 ']'
00:07:06.535   18:51:38 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:06.535   18:51:38 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:06.535  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:06.535   18:51:38 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:06.535   18:51:38 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:06.535   18:51:38 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:06.535   18:51:38 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:07:06.794  [2024-12-13 18:51:38.402047] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:07:06.795  [2024-12-13 18:51:38.402158] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74537 ]
00:07:06.795  [2024-12-13 18:51:38.549467] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:06.795  [2024-12-13 18:51:38.581829] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:07:07.053   18:51:38 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:07.053   18:51:38 alias_rpc -- common/autotest_common.sh@868 -- # return 0
00:07:07.053   18:51:38 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i
00:07:07.312   18:51:39 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 74537
00:07:07.312   18:51:39 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 74537 ']'
00:07:07.312   18:51:39 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 74537
00:07:07.312    18:51:39 alias_rpc -- common/autotest_common.sh@959 -- # uname
00:07:07.312   18:51:39 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:07.312    18:51:39 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74537
00:07:07.312   18:51:39 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:07.312   18:51:39 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:07.312   18:51:39 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74537'
00:07:07.312  killing process with pid 74537
00:07:07.312   18:51:39 alias_rpc -- common/autotest_common.sh@973 -- # kill 74537
00:07:07.312   18:51:39 alias_rpc -- common/autotest_common.sh@978 -- # wait 74537
00:07:07.880  
00:07:07.880  real	0m1.345s
00:07:07.880  user	0m1.361s
00:07:07.880  sys	0m0.432s
00:07:07.880   18:51:39 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:07.880   18:51:39 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:07.880  ************************************
00:07:07.880  END TEST alias_rpc
00:07:07.880  ************************************
00:07:07.880   18:51:39  -- spdk/autotest.sh@163 -- # [[ 1 -eq 0 ]]
00:07:07.880   18:51:39  -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:07:07.880   18:51:39  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:07.880   18:51:39  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:07.880   18:51:39  -- common/autotest_common.sh@10 -- # set +x
00:07:07.880  ************************************
00:07:07.880  START TEST dpdk_mem_utility
00:07:07.880  ************************************
00:07:07.880   18:51:39 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:07:07.880  * Looking for test storage...
00:07:07.880  * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility
00:07:07.880    18:51:39 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:07:07.880     18:51:39 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version
00:07:07.880     18:51:39 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:07:07.880    18:51:39 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:07:07.880    18:51:39 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:07.880    18:51:39 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:07.880    18:51:39 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:07.880    18:51:39 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-:
00:07:07.880    18:51:39 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1
00:07:07.880    18:51:39 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-:
00:07:07.880    18:51:39 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2
00:07:07.880    18:51:39 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<'
00:07:07.880    18:51:39 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2
00:07:07.880    18:51:39 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1
00:07:07.880    18:51:39 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:07.880    18:51:39 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in
00:07:07.880    18:51:39 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1
00:07:07.880    18:51:39 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:07.880    18:51:39 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:07.880     18:51:39 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1
00:07:07.880     18:51:39 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1
00:07:07.880     18:51:39 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:07.880     18:51:39 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1
00:07:07.880    18:51:39 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1
00:07:07.880     18:51:39 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2
00:07:07.880     18:51:39 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2
00:07:07.880     18:51:39 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:07.880     18:51:39 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2
00:07:07.880    18:51:39 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2
00:07:07.880    18:51:39 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:07.880    18:51:39 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:07.880    18:51:39 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0
00:07:07.880    18:51:39 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:07.880    18:51:39 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:07:07.880  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:07.880  		--rc genhtml_branch_coverage=1
00:07:07.880  		--rc genhtml_function_coverage=1
00:07:07.880  		--rc genhtml_legend=1
00:07:07.880  		--rc geninfo_all_blocks=1
00:07:07.880  		--rc geninfo_unexecuted_blocks=1
00:07:07.880  		
00:07:07.880  		'
00:07:07.880    18:51:39 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:07:07.880  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:07.880  		--rc genhtml_branch_coverage=1
00:07:07.880  		--rc genhtml_function_coverage=1
00:07:07.880  		--rc genhtml_legend=1
00:07:07.880  		--rc geninfo_all_blocks=1
00:07:07.880  		--rc geninfo_unexecuted_blocks=1
00:07:07.880  		
00:07:07.880  		'
00:07:07.881    18:51:39 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:07:07.881  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:07.881  		--rc genhtml_branch_coverage=1
00:07:07.881  		--rc genhtml_function_coverage=1
00:07:07.881  		--rc genhtml_legend=1
00:07:07.881  		--rc geninfo_all_blocks=1
00:07:07.881  		--rc geninfo_unexecuted_blocks=1
00:07:07.881  		
00:07:07.881  		'
00:07:07.881    18:51:39 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:07:07.881  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:07.881  		--rc genhtml_branch_coverage=1
00:07:07.881  		--rc genhtml_function_coverage=1
00:07:07.881  		--rc genhtml_legend=1
00:07:07.881  		--rc geninfo_all_blocks=1
00:07:07.881  		--rc geninfo_unexecuted_blocks=1
00:07:07.881  		
00:07:07.881  		'
00:07:07.881   18:51:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
00:07:07.881   18:51:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=74624
00:07:07.881   18:51:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 74624
00:07:07.881   18:51:39 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 74624 ']'
00:07:07.881   18:51:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:07:07.881   18:51:39 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:07.881   18:51:39 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:07.881  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:07.881   18:51:39 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:07.881   18:51:39 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:07.881   18:51:39 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:07:08.139  [2024-12-13 18:51:39.760884] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:07:08.139  [2024-12-13 18:51:39.760990] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74624 ]
00:07:08.139  [2024-12-13 18:51:39.899700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:08.139  [2024-12-13 18:51:39.931514] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:07:08.396   18:51:40 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:08.396   18:51:40 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0
00:07:08.396   18:51:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT
00:07:08.396   18:51:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats
00:07:08.396   18:51:40 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:08.396   18:51:40 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:07:08.396  {
00:07:08.396  "filename": "/tmp/spdk_mem_dump.txt"
00:07:08.396  }
00:07:08.396   18:51:40 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:08.396   18:51:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
00:07:08.656  DPDK memory size 818.000000 MiB in 1 heap(s)
00:07:08.656  1 heaps totaling size 818.000000 MiB
00:07:08.656    size:  818.000000 MiB heap id: 0
00:07:08.656  end heaps----------
00:07:08.656  9 mempools totaling size 603.782043 MiB
00:07:08.656    size:  212.674988 MiB name: PDU_immediate_data_Pool
00:07:08.656    size:  158.602051 MiB name: PDU_data_out_Pool
00:07:08.656    size:  100.555481 MiB name: bdev_io_74624
00:07:08.656    size:   50.003479 MiB name: msgpool_74624
00:07:08.656    size:   36.509338 MiB name: fsdev_io_74624
00:07:08.656    size:   21.763794 MiB name: PDU_Pool
00:07:08.656    size:   19.513306 MiB name: SCSI_TASK_Pool
00:07:08.656    size:    4.133484 MiB name: evtpool_74624
00:07:08.656    size:    0.026123 MiB name: Session_Pool
00:07:08.656  end mempools-------
00:07:08.656  6 memzones totaling size 4.142822 MiB
00:07:08.656    size:    1.000366 MiB name: RG_ring_0_74624
00:07:08.656    size:    1.000366 MiB name: RG_ring_1_74624
00:07:08.656    size:    1.000366 MiB name: RG_ring_4_74624
00:07:08.656    size:    1.000366 MiB name: RG_ring_5_74624
00:07:08.656    size:    0.125366 MiB name: RG_ring_2_74624
00:07:08.656    size:    0.015991 MiB name: RG_ring_3_74624
00:07:08.656  end memzones-------
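Editor's note: the heap/mempool/memzone summary above is produced in two steps: the env_dpdk_get_mem_stats RPC asks the running spdk_tgt to write /tmp/spdk_mem_dump.txt, and dpdk_mem_info.py then summarizes that dump (the -m 0 per-heap view follows below). A sketch of that flow using the repo paths from this log, assuming the target listens on rpc.py's default /var/tmp/spdk.sock:

    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/scripts/rpc.py" env_dpdk_get_mem_stats   # writes /tmp/spdk_mem_dump.txt
    "$SPDK/scripts/dpdk_mem_info.py"                # overall summary, as printed above
    "$SPDK/scripts/dpdk_mem_info.py" -m 0           # per-heap detail for heap id 0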
00:07:08.656   18:51:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0
00:07:08.656  heap id: 0 total size: 818.000000 MiB number of busy elements: 232 number of free elements: 15
00:07:08.656    list of free elements. size: 10.818054 MiB
00:07:08.656      element at address: 0x200019200000 with size:    0.999878 MiB
00:07:08.656      element at address: 0x200019400000 with size:    0.999878 MiB
00:07:08.656      element at address: 0x200000400000 with size:    0.996338 MiB
00:07:08.656      element at address: 0x200032000000 with size:    0.994446 MiB
00:07:08.656      element at address: 0x200006400000 with size:    0.959839 MiB
00:07:08.656      element at address: 0x200012c00000 with size:    0.944275 MiB
00:07:08.656      element at address: 0x200019600000 with size:    0.936584 MiB
00:07:08.656      element at address: 0x200000200000 with size:    0.717346 MiB
00:07:08.656      element at address: 0x20001ae00000 with size:    0.571533 MiB
00:07:08.656      element at address: 0x200000c00000 with size:    0.490845 MiB
00:07:08.656      element at address: 0x20000a600000 with size:    0.489441 MiB
00:07:08.656      element at address: 0x200019800000 with size:    0.485657 MiB
00:07:08.656      element at address: 0x200003e00000 with size:    0.481018 MiB
00:07:08.656      element at address: 0x200028200000 with size:    0.397583 MiB
00:07:08.656      element at address: 0x200000800000 with size:    0.353394 MiB
00:07:08.656    list of standard malloc elements. size: 199.253052 MiB
00:07:08.656      element at address: 0x20000a7fff80 with size:  132.000122 MiB
00:07:08.656      element at address: 0x2000065fff80 with size:   64.000122 MiB
00:07:08.656      element at address: 0x2000192fff80 with size:    1.000122 MiB
00:07:08.656      element at address: 0x2000194fff80 with size:    1.000122 MiB
00:07:08.656      element at address: 0x2000196fff80 with size:    1.000122 MiB
00:07:08.656      element at address: 0x2000003d9f00 with size:    0.140747 MiB
00:07:08.656      element at address: 0x2000196eff00 with size:    0.062622 MiB
00:07:08.656      element at address: 0x2000003fdf80 with size:    0.007935 MiB
00:07:08.656      element at address: 0x2000196efdc0 with size:    0.000305 MiB
00:07:08.656      element at address: 0x2000002d7c40 with size:    0.000183 MiB
00:07:08.656      element at address: 0x2000003d9e40 with size:    0.000183 MiB
00:07:08.656      element at address: 0x2000004ff100 with size:    0.000183 MiB
00:07:08.656      element at address: 0x2000004ff1c0 with size:    0.000183 MiB
00:07:08.656      element at address: 0x2000004ff280 with size:    0.000183 MiB
00:07:08.656      element at address: 0x2000004ff340 with size:    0.000183 MiB
00:07:08.656      element at address: 0x2000004ff400 with size:    0.000183 MiB
00:07:08.656      element at address: 0x2000004ff4c0 with size:    0.000183 MiB
00:07:08.656      element at address: 0x2000004ff580 with size:    0.000183 MiB
00:07:08.656      element at address: 0x2000004ff640 with size:    0.000183 MiB
00:07:08.656      element at address: 0x2000004ff700 with size:    0.000183 MiB
00:07:08.656      element at address: 0x2000004ff7c0 with size:    0.000183 MiB
00:07:08.656      element at address: 0x2000004ff880 with size:    0.000183 MiB
00:07:08.656      element at address: 0x2000004ff940 with size:    0.000183 MiB
00:07:08.656      element at address: 0x2000004ffa00 with size:    0.000183 MiB
00:07:08.656      element at address: 0x2000004ffac0 with size:    0.000183 MiB
00:07:08.656      element at address: 0x2000004ffb80 with size:    0.000183 MiB
00:07:08.656      element at address: 0x2000004ffd80 with size:    0.000183 MiB
00:07:08.656      element at address: 0x2000004ffe40 with size:    0.000183 MiB
00:07:08.656      element at address: 0x20000085a780 with size:    0.000183 MiB
00:07:08.656      element at address: 0x20000085a980 with size:    0.000183 MiB
00:07:08.656      element at address: 0x20000085ec40 with size:    0.000183 MiB
00:07:08.656      element at address: 0x20000087ef00 with size:    0.000183 MiB
00:07:08.656      element at address: 0x20000087efc0 with size:    0.000183 MiB
00:07:08.656      element at address: 0x20000087f080 with size:    0.000183 MiB
00:07:08.656      element at address: 0x20000087f140 with size:    0.000183 MiB
00:07:08.656      element at address: 0x20000087f200 with size:    0.000183 MiB
00:07:08.656      element at address: 0x20000087f2c0 with size:    0.000183 MiB
00:07:08.656      element at address: 0x20000087f380 with size:    0.000183 MiB
00:07:08.656      element at address: 0x20000087f440 with size:    0.000183 MiB
00:07:08.656      element at address: 0x20000087f500 with size:    0.000183 MiB
00:07:08.656      element at address: 0x20000087f5c0 with size:    0.000183 MiB
00:07:08.656      element at address: 0x20000087f680 with size:    0.000183 MiB
00:07:08.656      element at address: 0x2000008ff940 with size:    0.000183 MiB
00:07:08.656      element at address: 0x2000008ffb40 with size:    0.000183 MiB
00:07:08.656      element at address: 0x200000c7da80 with size:    0.000183 MiB
00:07:08.656      element at address: 0x200000c7db40 with size:    0.000183 MiB
00:07:08.656      element at address: 0x200000c7dc00 with size:    0.000183 MiB
00:07:08.656      element at address: 0x200000c7dcc0 with size:    0.000183 MiB
00:07:08.656      element at address: 0x200000c7dd80 with size:    0.000183 MiB
00:07:08.656      element at address: 0x200000c7de40 with size:    0.000183 MiB
00:07:08.656      element at address: 0x200000c7df00 with size:    0.000183 MiB
00:07:08.656      element at address: 0x200000c7dfc0 with size:    0.000183 MiB
00:07:08.656      element at address: 0x200000c7e080 with size:    0.000183 MiB
00:07:08.656      element at address: 0x200000c7e140 with size:    0.000183 MiB
00:07:08.656      element at address: 0x200000c7e200 with size:    0.000183 MiB
00:07:08.656      element at address: 0x200000c7e2c0 with size:    0.000183 MiB
00:07:08.656      element at address: 0x200000c7e380 with size:    0.000183 MiB
00:07:08.656      element at address: 0x200000c7e440 with size:    0.000183 MiB
00:07:08.656      element at address: 0x200000c7e500 with size:    0.000183 MiB
00:07:08.656      element at address: 0x200000c7e5c0 with size:    0.000183 MiB
00:07:08.656      element at address: 0x200000c7e680 with size:    0.000183 MiB
00:07:08.656      element at address: 0x200000c7e740 with size:    0.000183 MiB
00:07:08.656      element at address: 0x200000c7e800 with size:    0.000183 MiB
00:07:08.656      element at address: 0x200000c7e8c0 with size:    0.000183 MiB
00:07:08.656      element at address: 0x200000c7e980 with size:    0.000183 MiB
00:07:08.656      element at address: 0x200000c7ea40 with size:    0.000183 MiB
00:07:08.656      element at address: 0x200000c7eb00 with size:    0.000183 MiB
00:07:08.656      element at address: 0x200000c7ebc0 with size:    0.000183 MiB
00:07:08.656      element at address: 0x200000c7ec80 with size:    0.000183 MiB
00:07:08.656      element at address: 0x200000c7ed40 with size:    0.000183 MiB
00:07:08.656      element at address: 0x200000cff000 with size:    0.000183 MiB
00:07:08.656      element at address: 0x200000cff0c0 with size:    0.000183 MiB
00:07:08.656      element at address: 0x200003e7b240 with size:    0.000183 MiB
00:07:08.656      element at address: 0x200003e7b300 with size:    0.000183 MiB
00:07:08.656      element at address: 0x200003e7b3c0 with size:    0.000183 MiB
00:07:08.656      element at address: 0x200003e7b480 with size:    0.000183 MiB
00:07:08.656      element at address: 0x200003e7b540 with size:    0.000183 MiB
00:07:08.656      element at address: 0x200003e7b600 with size:    0.000183 MiB
00:07:08.656      element at address: 0x200003e7b6c0 with size:    0.000183 MiB
00:07:08.656      element at address: 0x200003efb980 with size:    0.000183 MiB
00:07:08.656      element at address: 0x2000064fdd80 with size:    0.000183 MiB
00:07:08.656      element at address: 0x20000a67d4c0 with size:    0.000183 MiB
00:07:08.656      element at address: 0x20000a67d580 with size:    0.000183 MiB
00:07:08.656      element at address: 0x20000a67d640 with size:    0.000183 MiB
00:07:08.656      element at address: 0x20000a67d700 with size:    0.000183 MiB
00:07:08.656      element at address: 0x20000a67d7c0 with size:    0.000183 MiB
00:07:08.656      element at address: 0x20000a67d880 with size:    0.000183 MiB
00:07:08.656      element at address: 0x20000a67d940 with size:    0.000183 MiB
00:07:08.656      element at address: 0x20000a67da00 with size:    0.000183 MiB
00:07:08.656      element at address: 0x20000a67dac0 with size:    0.000183 MiB
00:07:08.656      element at address: 0x20000a6fdd80 with size:    0.000183 MiB
00:07:08.656      element at address: 0x200012cf1bc0 with size:    0.000183 MiB
00:07:08.656      element at address: 0x2000196efc40 with size:    0.000183 MiB
00:07:08.656      element at address: 0x2000196efd00 with size:    0.000183 MiB
00:07:08.656      element at address: 0x2000198bc740 with size:    0.000183 MiB
00:07:08.656      element at address: 0x20001ae92500 with size:    0.000183 MiB
00:07:08.656      element at address: 0x20001ae925c0 with size:    0.000183 MiB
00:07:08.656      element at address: 0x20001ae92680 with size:    0.000183 MiB
00:07:08.656      element at address: 0x20001ae92740 with size:    0.000183 MiB
00:07:08.656      element at address: 0x20001ae92800 with size:    0.000183 MiB
00:07:08.656      element at address: 0x20001ae928c0 with size:    0.000183 MiB
00:07:08.656      element at address: 0x20001ae92980 with size:    0.000183 MiB
00:07:08.656      element at address: 0x20001ae92a40 with size:    0.000183 MiB
00:07:08.656      element at address: 0x20001ae92b00 with size:    0.000183 MiB
00:07:08.656      element at address: 0x20001ae92bc0 with size:    0.000183 MiB
00:07:08.656      element at address: 0x20001ae92c80 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20001ae92d40 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20001ae92e00 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20001ae92ec0 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20001ae92f80 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20001ae93040 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20001ae93100 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20001ae931c0 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20001ae93280 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20001ae93340 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20001ae93400 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20001ae934c0 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20001ae93580 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20001ae93640 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20001ae93700 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20001ae937c0 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20001ae93880 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20001ae93940 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20001ae93a00 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20001ae93ac0 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20001ae93b80 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20001ae93c40 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20001ae93d00 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20001ae93dc0 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20001ae93e80 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20001ae93f40 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20001ae94000 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20001ae940c0 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20001ae94180 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20001ae94240 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20001ae94300 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20001ae943c0 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20001ae94480 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20001ae94540 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20001ae94600 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20001ae946c0 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20001ae94780 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20001ae94840 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20001ae94900 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20001ae949c0 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20001ae94a80 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20001ae94b40 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20001ae94c00 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20001ae94cc0 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20001ae94d80 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20001ae94e40 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20001ae94f00 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20001ae94fc0 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20001ae95080 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20001ae95140 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20001ae95200 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20001ae952c0 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20001ae95380 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20001ae95440 with size:    0.000183 MiB
00:07:08.657      element at address: 0x200028265c80 with size:    0.000183 MiB
00:07:08.657      element at address: 0x200028265d40 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20002826c940 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20002826cb40 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20002826cc00 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20002826ccc0 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20002826cd80 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20002826ce40 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20002826cf00 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20002826cfc0 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20002826d080 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20002826d140 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20002826d200 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20002826d2c0 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20002826d380 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20002826d440 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20002826d500 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20002826d5c0 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20002826d680 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20002826d740 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20002826d800 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20002826d8c0 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20002826d980 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20002826da40 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20002826db00 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20002826dbc0 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20002826dc80 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20002826dd40 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20002826de00 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20002826dec0 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20002826df80 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20002826e040 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20002826e100 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20002826e1c0 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20002826e280 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20002826e340 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20002826e400 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20002826e4c0 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20002826e580 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20002826e640 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20002826e700 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20002826e7c0 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20002826e880 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20002826e940 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20002826ea00 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20002826eac0 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20002826eb80 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20002826ec40 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20002826ed00 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20002826edc0 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20002826ee80 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20002826ef40 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20002826f000 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20002826f0c0 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20002826f180 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20002826f240 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20002826f300 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20002826f3c0 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20002826f480 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20002826f540 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20002826f600 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20002826f6c0 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20002826f780 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20002826f840 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20002826f900 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20002826f9c0 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20002826fa80 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20002826fb40 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20002826fc00 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20002826fcc0 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20002826fd80 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20002826fe40 with size:    0.000183 MiB
00:07:08.657      element at address: 0x20002826ff00 with size:    0.000183 MiB
00:07:08.657    list of memzone associated elements. size: 607.928894 MiB
00:07:08.657      element at address: 0x20001ae95500 with size:  211.416748 MiB
00:07:08.657        associated memzone info: size:  211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:07:08.657      element at address: 0x20002826ffc0 with size:  157.562561 MiB
00:07:08.657        associated memzone info: size:  157.562439 MiB name: MP_PDU_data_out_Pool_0
00:07:08.657      element at address: 0x200012df1e80 with size:  100.055054 MiB
00:07:08.657        associated memzone info: size:  100.054932 MiB name: MP_bdev_io_74624_0
00:07:08.657      element at address: 0x200000dff380 with size:   48.003052 MiB
00:07:08.657        associated memzone info: size:   48.002930 MiB name: MP_msgpool_74624_0
00:07:08.657      element at address: 0x200003ffdb80 with size:   36.008911 MiB
00:07:08.657        associated memzone info: size:   36.008789 MiB name: MP_fsdev_io_74624_0
00:07:08.657      element at address: 0x2000199be940 with size:   20.255554 MiB
00:07:08.657        associated memzone info: size:   20.255432 MiB name: MP_PDU_Pool_0
00:07:08.657      element at address: 0x2000321feb40 with size:   18.005066 MiB
00:07:08.657        associated memzone info: size:   18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:07:08.657      element at address: 0x2000004fff00 with size:    3.000244 MiB
00:07:08.657        associated memzone info: size:    3.000122 MiB name: MP_evtpool_74624_0
00:07:08.657      element at address: 0x2000009ffe00 with size:    2.000488 MiB
00:07:08.657        associated memzone info: size:    2.000366 MiB name: RG_MP_msgpool_74624
00:07:08.657      element at address: 0x2000002d7d00 with size:    1.008118 MiB
00:07:08.657        associated memzone info: size:    1.007996 MiB name: MP_evtpool_74624
00:07:08.657      element at address: 0x20000a6fde40 with size:    1.008118 MiB
00:07:08.657        associated memzone info: size:    1.007996 MiB name: MP_PDU_Pool
00:07:08.657      element at address: 0x2000198bc800 with size:    1.008118 MiB
00:07:08.657        associated memzone info: size:    1.007996 MiB name: MP_PDU_immediate_data_Pool
00:07:08.657      element at address: 0x2000064fde40 with size:    1.008118 MiB
00:07:08.657        associated memzone info: size:    1.007996 MiB name: MP_PDU_data_out_Pool
00:07:08.657      element at address: 0x200003efba40 with size:    1.008118 MiB
00:07:08.657        associated memzone info: size:    1.007996 MiB name: MP_SCSI_TASK_Pool
00:07:08.657      element at address: 0x200000cff180 with size:    1.000488 MiB
00:07:08.658        associated memzone info: size:    1.000366 MiB name: RG_ring_0_74624
00:07:08.658      element at address: 0x2000008ffc00 with size:    1.000488 MiB
00:07:08.658        associated memzone info: size:    1.000366 MiB name: RG_ring_1_74624
00:07:08.658      element at address: 0x200012cf1c80 with size:    1.000488 MiB
00:07:08.658        associated memzone info: size:    1.000366 MiB name: RG_ring_4_74624
00:07:08.658      element at address: 0x2000320fe940 with size:    1.000488 MiB
00:07:08.658        associated memzone info: size:    1.000366 MiB name: RG_ring_5_74624
00:07:08.658      element at address: 0x20000087f740 with size:    0.500488 MiB
00:07:08.658        associated memzone info: size:    0.500366 MiB name: RG_MP_fsdev_io_74624
00:07:08.658      element at address: 0x200000c7ee00 with size:    0.500488 MiB
00:07:08.658        associated memzone info: size:    0.500366 MiB name: RG_MP_bdev_io_74624
00:07:08.658      element at address: 0x20000a67db80 with size:    0.500488 MiB
00:07:08.658        associated memzone info: size:    0.500366 MiB name: RG_MP_PDU_Pool
00:07:08.658      element at address: 0x200003e7b780 with size:    0.500488 MiB
00:07:08.658        associated memzone info: size:    0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:07:08.658      element at address: 0x20001987c540 with size:    0.250488 MiB
00:07:08.658        associated memzone info: size:    0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:07:08.658      element at address: 0x2000002b7a40 with size:    0.125488 MiB
00:07:08.658        associated memzone info: size:    0.125366 MiB name: RG_MP_evtpool_74624
00:07:08.658      element at address: 0x20000085ed00 with size:    0.125488 MiB
00:07:08.658        associated memzone info: size:    0.125366 MiB name: RG_ring_2_74624
00:07:08.658      element at address: 0x2000064f5b80 with size:    0.031738 MiB
00:07:08.658        associated memzone info: size:    0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:07:08.658      element at address: 0x200028265e00 with size:    0.023743 MiB
00:07:08.658        associated memzone info: size:    0.023621 MiB name: MP_Session_Pool_0
00:07:08.658      element at address: 0x20000085aa40 with size:    0.016113 MiB
00:07:08.658        associated memzone info: size:    0.015991 MiB name: RG_ring_3_74624
00:07:08.658      element at address: 0x20002826bf40 with size:    0.002441 MiB
00:07:08.658        associated memzone info: size:    0.002319 MiB name: RG_MP_Session_Pool
00:07:08.658      element at address: 0x2000004ffc40 with size:    0.000305 MiB
00:07:08.658        associated memzone info: size:    0.000183 MiB name: MP_msgpool_74624
00:07:08.658      element at address: 0x2000008ffa00 with size:    0.000305 MiB
00:07:08.658        associated memzone info: size:    0.000183 MiB name: MP_fsdev_io_74624
00:07:08.658      element at address: 0x20000085a840 with size:    0.000305 MiB
00:07:08.658        associated memzone info: size:    0.000183 MiB name: MP_bdev_io_74624
00:07:08.658      element at address: 0x20002826ca00 with size:    0.000305 MiB
00:07:08.658        associated memzone info: size:    0.000183 MiB name: MP_Session_Pool
00:07:08.658   18:51:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:07:08.658   18:51:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 74624
00:07:08.658   18:51:40 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 74624 ']'
00:07:08.658   18:51:40 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 74624
00:07:08.658    18:51:40 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:07:08.658   18:51:40 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:08.658    18:51:40 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74624
00:07:08.658   18:51:40 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:08.658   18:51:40 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:08.658   18:51:40 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74624'
00:07:08.658  killing process with pid 74624
00:07:08.658   18:51:40 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 74624
00:07:08.658   18:51:40 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 74624
00:07:08.916  
00:07:08.916  real	0m1.210s
00:07:08.916  user	0m1.182s
00:07:08.916  sys	0m0.418s
00:07:08.917   18:51:40 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:08.917   18:51:40 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:07:08.917  ************************************
00:07:08.917  END TEST dpdk_mem_utility
00:07:08.917  ************************************
00:07:09.176   18:51:40  -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:07:09.176   18:51:40  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:09.176   18:51:40  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:09.176   18:51:40  -- common/autotest_common.sh@10 -- # set +x
00:07:09.176  ************************************
00:07:09.176  START TEST event
00:07:09.176  ************************************
00:07:09.176   18:51:40 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:07:09.176  * Looking for test storage...
00:07:09.176  * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:07:09.176    18:51:40 event -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:07:09.176     18:51:40 event -- common/autotest_common.sh@1711 -- # lcov --version
00:07:09.176     18:51:40 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:07:09.176    18:51:40 event -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:07:09.176    18:51:40 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:09.176    18:51:40 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:09.176    18:51:40 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:09.176    18:51:40 event -- scripts/common.sh@336 -- # IFS=.-:
00:07:09.176    18:51:40 event -- scripts/common.sh@336 -- # read -ra ver1
00:07:09.176    18:51:40 event -- scripts/common.sh@337 -- # IFS=.-:
00:07:09.176    18:51:40 event -- scripts/common.sh@337 -- # read -ra ver2
00:07:09.176    18:51:40 event -- scripts/common.sh@338 -- # local 'op=<'
00:07:09.176    18:51:40 event -- scripts/common.sh@340 -- # ver1_l=2
00:07:09.176    18:51:40 event -- scripts/common.sh@341 -- # ver2_l=1
00:07:09.176    18:51:40 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:09.176    18:51:40 event -- scripts/common.sh@344 -- # case "$op" in
00:07:09.176    18:51:40 event -- scripts/common.sh@345 -- # : 1
00:07:09.176    18:51:40 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:09.176    18:51:40 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:09.176     18:51:40 event -- scripts/common.sh@365 -- # decimal 1
00:07:09.176     18:51:40 event -- scripts/common.sh@353 -- # local d=1
00:07:09.176     18:51:40 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:09.176     18:51:40 event -- scripts/common.sh@355 -- # echo 1
00:07:09.176    18:51:40 event -- scripts/common.sh@365 -- # ver1[v]=1
00:07:09.176     18:51:40 event -- scripts/common.sh@366 -- # decimal 2
00:07:09.176     18:51:40 event -- scripts/common.sh@353 -- # local d=2
00:07:09.176     18:51:40 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:09.176     18:51:40 event -- scripts/common.sh@355 -- # echo 2
00:07:09.176    18:51:40 event -- scripts/common.sh@366 -- # ver2[v]=2
00:07:09.176    18:51:40 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:09.176    18:51:40 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:09.176    18:51:40 event -- scripts/common.sh@368 -- # return 0
00:07:09.176    18:51:40 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:09.176    18:51:40 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:07:09.176  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:09.176  		--rc genhtml_branch_coverage=1
00:07:09.176  		--rc genhtml_function_coverage=1
00:07:09.176  		--rc genhtml_legend=1
00:07:09.176  		--rc geninfo_all_blocks=1
00:07:09.176  		--rc geninfo_unexecuted_blocks=1
00:07:09.176  		
00:07:09.176  		'
00:07:09.176    18:51:40 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:07:09.176  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:09.176  		--rc genhtml_branch_coverage=1
00:07:09.176  		--rc genhtml_function_coverage=1
00:07:09.176  		--rc genhtml_legend=1
00:07:09.176  		--rc geninfo_all_blocks=1
00:07:09.176  		--rc geninfo_unexecuted_blocks=1
00:07:09.176  		
00:07:09.176  		'
00:07:09.176    18:51:40 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:07:09.176  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:09.176  		--rc genhtml_branch_coverage=1
00:07:09.176  		--rc genhtml_function_coverage=1
00:07:09.176  		--rc genhtml_legend=1
00:07:09.176  		--rc geninfo_all_blocks=1
00:07:09.176  		--rc geninfo_unexecuted_blocks=1
00:07:09.176  		
00:07:09.176  		'
00:07:09.176    18:51:40 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:07:09.176  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:09.176  		--rc genhtml_branch_coverage=1
00:07:09.176  		--rc genhtml_function_coverage=1
00:07:09.176  		--rc genhtml_legend=1
00:07:09.176  		--rc geninfo_all_blocks=1
00:07:09.176  		--rc geninfo_unexecuted_blocks=1
00:07:09.176  		
00:07:09.176  		'
00:07:09.176   18:51:40 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:07:09.176    18:51:40 event -- bdev/nbd_common.sh@6 -- # set -e
00:07:09.176   18:51:40 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:07:09.176   18:51:40 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:07:09.176   18:51:40 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:09.176   18:51:40 event -- common/autotest_common.sh@10 -- # set +x
00:07:09.176  ************************************
00:07:09.176  START TEST event_perf
00:07:09.176  ************************************
00:07:09.176   18:51:40 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:07:09.435  Running I/O for 1 seconds...[2024-12-13 18:51:41.005096] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:07:09.435  [2024-12-13 18:51:41.005254] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74708 ]
00:07:09.435  [2024-12-13 18:51:41.148593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:07:09.435  [2024-12-13 18:51:41.182346] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:07:09.435  [2024-12-13 18:51:41.182487] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:07:09.435  [2024-12-13 18:51:41.182633] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:07:09.435  [2024-12-13 18:51:41.182635] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:07:10.811  Running I/O for 1 seconds...
00:07:10.811  lcore  0:   210506
00:07:10.811  lcore  1:   210505
00:07:10.811  lcore  2:   210506
00:07:10.811  lcore  3:   210507
00:07:10.811  done.
00:07:10.811  
00:07:10.811  real	0m1.234s
00:07:10.811  user	0m4.062s
00:07:10.811  sys	0m0.054s
00:07:10.811   18:51:42 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:10.811   18:51:42 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:07:10.811  ************************************
00:07:10.811  END TEST event_perf
00:07:10.811  ************************************
00:07:10.811   18:51:42 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:07:10.811   18:51:42 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:07:10.811   18:51:42 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:10.811   18:51:42 event -- common/autotest_common.sh@10 -- # set +x
00:07:10.811  ************************************
00:07:10.811  START TEST event_reactor
00:07:10.811  ************************************
00:07:10.811   18:51:42 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:07:10.811  [2024-12-13 18:51:42.285433] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:07:10.811  [2024-12-13 18:51:42.285552] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74746 ]
00:07:10.811  [2024-12-13 18:51:42.425554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:10.811  [2024-12-13 18:51:42.460322] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:07:11.746  test_start
00:07:11.746  oneshot
00:07:11.746  tick 100
00:07:11.746  tick 100
00:07:11.746  tick 250
00:07:11.746  tick 100
00:07:11.746  tick 100
00:07:11.746  tick 100
00:07:11.746  tick 250
00:07:11.746  tick 500
00:07:11.746  tick 100
00:07:11.746  tick 100
00:07:11.746  tick 250
00:07:11.746  tick 100
00:07:11.746  tick 100
00:07:11.746  test_end
00:07:11.746  
00:07:11.746  real	0m1.227s
00:07:11.746  user	0m1.087s
00:07:11.746  sys	0m0.036s
00:07:11.746   18:51:43 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:11.746   18:51:43 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:07:11.746  ************************************
00:07:11.746  END TEST event_reactor
00:07:11.746  ************************************
00:07:11.746   18:51:43 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:07:11.746   18:51:43 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:07:11.746   18:51:43 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:11.746   18:51:43 event -- common/autotest_common.sh@10 -- # set +x
00:07:11.746  ************************************
00:07:11.746  START TEST event_reactor_perf
00:07:11.746  ************************************
00:07:11.746   18:51:43 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:07:11.746  [2024-12-13 18:51:43.565511] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:07:11.746  [2024-12-13 18:51:43.566097] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74782 ]
00:07:12.005  [2024-12-13 18:51:43.709137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:12.005  [2024-12-13 18:51:43.740317] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:07:13.018  test_start
00:07:13.018  test_end
00:07:13.018  Performance:   478782 events per second
00:07:13.018  
00:07:13.018  real	0m1.224s
00:07:13.018  user	0m1.081s
00:07:13.018  sys	0m0.036s
00:07:13.018   18:51:44 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:13.018   18:51:44 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:07:13.018  ************************************
00:07:13.018  END TEST event_reactor_perf
00:07:13.018  ************************************
00:07:13.018    18:51:44 event -- event/event.sh@49 -- # uname -s
00:07:13.018   18:51:44 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:07:13.018   18:51:44 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:07:13.018   18:51:44 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:13.018   18:51:44 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:13.018   18:51:44 event -- common/autotest_common.sh@10 -- # set +x
00:07:13.018  ************************************
00:07:13.018  START TEST event_scheduler
00:07:13.018  ************************************
00:07:13.018   18:51:44 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:07:13.277  * Looking for test storage...
00:07:13.277  * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler
00:07:13.277    18:51:44 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:07:13.277     18:51:44 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version
00:07:13.277     18:51:44 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:07:13.277    18:51:44 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:07:13.277    18:51:44 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:13.277    18:51:44 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:13.277    18:51:44 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:13.277    18:51:44 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-:
00:07:13.277    18:51:44 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1
00:07:13.277    18:51:44 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-:
00:07:13.277    18:51:44 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2
00:07:13.277    18:51:44 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<'
00:07:13.277    18:51:44 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2
00:07:13.277    18:51:44 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1
00:07:13.277    18:51:44 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:13.277    18:51:44 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in
00:07:13.277    18:51:44 event.event_scheduler -- scripts/common.sh@345 -- # : 1
00:07:13.277    18:51:44 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:13.277    18:51:44 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:13.277     18:51:44 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1
00:07:13.277     18:51:45 event.event_scheduler -- scripts/common.sh@353 -- # local d=1
00:07:13.277     18:51:45 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:13.277     18:51:45 event.event_scheduler -- scripts/common.sh@355 -- # echo 1
00:07:13.277    18:51:45 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1
00:07:13.277     18:51:45 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2
00:07:13.277     18:51:45 event.event_scheduler -- scripts/common.sh@353 -- # local d=2
00:07:13.277     18:51:45 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:13.277     18:51:45 event.event_scheduler -- scripts/common.sh@355 -- # echo 2
00:07:13.277    18:51:45 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2
00:07:13.277    18:51:45 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:13.277    18:51:45 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:13.277    18:51:45 event.event_scheduler -- scripts/common.sh@368 -- # return 0
00:07:13.277    18:51:45 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:13.277    18:51:45 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:07:13.277  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:13.277  		--rc genhtml_branch_coverage=1
00:07:13.277  		--rc genhtml_function_coverage=1
00:07:13.277  		--rc genhtml_legend=1
00:07:13.277  		--rc geninfo_all_blocks=1
00:07:13.277  		--rc geninfo_unexecuted_blocks=1
00:07:13.277  		
00:07:13.277  		'
00:07:13.277    18:51:45 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:07:13.277  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:13.277  		--rc genhtml_branch_coverage=1
00:07:13.277  		--rc genhtml_function_coverage=1
00:07:13.277  		--rc genhtml_legend=1
00:07:13.277  		--rc geninfo_all_blocks=1
00:07:13.277  		--rc geninfo_unexecuted_blocks=1
00:07:13.277  		
00:07:13.277  		'
00:07:13.277    18:51:45 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:07:13.277  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:13.277  		--rc genhtml_branch_coverage=1
00:07:13.277  		--rc genhtml_function_coverage=1
00:07:13.277  		--rc genhtml_legend=1
00:07:13.277  		--rc geninfo_all_blocks=1
00:07:13.277  		--rc geninfo_unexecuted_blocks=1
00:07:13.277  		
00:07:13.277  		'
00:07:13.277    18:51:45 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:07:13.277  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:13.277  		--rc genhtml_branch_coverage=1
00:07:13.277  		--rc genhtml_function_coverage=1
00:07:13.277  		--rc genhtml_legend=1
00:07:13.277  		--rc geninfo_all_blocks=1
00:07:13.277  		--rc geninfo_unexecuted_blocks=1
00:07:13.277  		
00:07:13.277  		'
00:07:13.277   18:51:45 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:07:13.277   18:51:45 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=74846
00:07:13.277   18:51:45 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:07:13.277   18:51:45 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 74846
00:07:13.277   18:51:45 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:07:13.277   18:51:45 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 74846 ']'
00:07:13.277   18:51:45 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:13.277   18:51:45 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:13.277  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:13.277   18:51:45 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:13.277   18:51:45 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:13.277   18:51:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:07:13.277  [2024-12-13 18:51:45.073681] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:07:13.277  [2024-12-13 18:51:45.073809] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74846 ]
00:07:13.536  [2024-12-13 18:51:45.225255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:07:13.536  [2024-12-13 18:51:45.268389] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:07:13.536  [2024-12-13 18:51:45.268482] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:07:13.536  [2024-12-13 18:51:45.268596] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:07:13.536  [2024-12-13 18:51:45.268604] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:07:13.536   18:51:45 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:13.536   18:51:45 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0
00:07:13.536   18:51:45 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:07:13.536   18:51:45 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:13.536   18:51:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:07:13.536  POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:07:13.536  POWER: Cannot set governor of lcore 0 to userspace
00:07:13.536  POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:07:13.536  POWER: Cannot set governor of lcore 0 to performance
00:07:13.536  POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:07:13.536  POWER: Cannot set governor of lcore 0 to userspace
00:07:13.536  POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:07:13.536  POWER: Cannot set governor of lcore 0 to userspace
00:07:13.536  GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory
00:07:13.536  POWER: Unable to set Power Management Environment for lcore 0
00:07:13.536  [2024-12-13 18:51:45.318409] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0
00:07:13.536  [2024-12-13 18:51:45.318424] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0
00:07:13.536  [2024-12-13 18:51:45.318435] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor
00:07:13.536  [2024-12-13 18:51:45.318448] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:07:13.536  [2024-12-13 18:51:45.318458] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:07:13.536  [2024-12-13 18:51:45.318467] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:07:13.536   18:51:45 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:13.536   18:51:45 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:07:13.536   18:51:45 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:13.536   18:51:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:07:13.794  [2024-12-13 18:51:45.418434] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:07:13.794   18:51:45 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:13.794   18:51:45 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:07:13.794   18:51:45 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:13.794   18:51:45 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:13.794   18:51:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:07:13.794  ************************************
00:07:13.794  START TEST scheduler_create_thread
00:07:13.794  ************************************
00:07:13.794   18:51:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread
00:07:13.794   18:51:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:07:13.794   18:51:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:13.794   18:51:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:13.794  2
00:07:13.794   18:51:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:13.794   18:51:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:07:13.794   18:51:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:13.794   18:51:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:13.794  3
00:07:13.794   18:51:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:13.794   18:51:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:07:13.794   18:51:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:13.794   18:51:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:13.794  4
00:07:13.794   18:51:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:13.794   18:51:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:07:13.794   18:51:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:13.794   18:51:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:13.794  5
00:07:13.794   18:51:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:13.794   18:51:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:07:13.794   18:51:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:13.794   18:51:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:13.794  6
00:07:13.794   18:51:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:13.794   18:51:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:07:13.795   18:51:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:13.795   18:51:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:13.795  7
00:07:13.795   18:51:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:13.795   18:51:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:07:13.795   18:51:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:13.795   18:51:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:13.795  8
00:07:13.795   18:51:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:13.795   18:51:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:07:13.795   18:51:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:13.795   18:51:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:13.795  9
00:07:13.795   18:51:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:13.795   18:51:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:07:13.795   18:51:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:13.795   18:51:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:13.795  10
00:07:13.795   18:51:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:13.795    18:51:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:07:13.795    18:51:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:13.795    18:51:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:13.795    18:51:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:13.795   18:51:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:07:13.795   18:51:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:07:13.795   18:51:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:13.795   18:51:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:13.795   18:51:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:13.795    18:51:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:07:13.795    18:51:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:13.795    18:51:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:15.171    18:51:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:15.171   18:51:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:07:15.171   18:51:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:07:15.171   18:51:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:15.171   18:51:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:16.547   18:51:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:16.547  
00:07:16.547  real	0m2.611s
00:07:16.547  user	0m0.019s
00:07:16.547  sys	0m0.003s
00:07:16.547   18:51:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:16.547  ************************************
00:07:16.547  END TEST scheduler_create_thread
00:07:16.547  ************************************
00:07:16.547   18:51:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:16.547   18:51:48 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:07:16.547   18:51:48 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 74846
00:07:16.547   18:51:48 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 74846 ']'
00:07:16.547   18:51:48 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 74846
00:07:16.547    18:51:48 event.event_scheduler -- common/autotest_common.sh@959 -- # uname
00:07:16.547   18:51:48 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:16.547    18:51:48 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74846
00:07:16.547   18:51:48 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:07:16.547   18:51:48 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:07:16.547  killing process with pid 74846
00:07:16.547   18:51:48 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74846'
00:07:16.547   18:51:48 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 74846
00:07:16.547   18:51:48 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 74846
00:07:16.806  [2024-12-13 18:51:48.521560] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:07:17.065  
00:07:17.065  real	0m3.881s
00:07:17.065  user	0m5.673s
00:07:17.065  sys	0m0.373s
00:07:17.065   18:51:48 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:17.065   18:51:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:07:17.065  ************************************
00:07:17.065  END TEST event_scheduler
00:07:17.065  ************************************
00:07:17.065   18:51:48 event -- event/event.sh@51 -- # modprobe -n nbd
00:07:17.065   18:51:48 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:07:17.065   18:51:48 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:17.065   18:51:48 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:17.065   18:51:48 event -- common/autotest_common.sh@10 -- # set +x
00:07:17.065  ************************************
00:07:17.065  START TEST app_repeat
00:07:17.065  ************************************
00:07:17.065   18:51:48 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test
00:07:17.065   18:51:48 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:17.065   18:51:48 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:17.065   18:51:48 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:07:17.066   18:51:48 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:07:17.066   18:51:48 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:07:17.066   18:51:48 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:07:17.066   18:51:48 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:07:17.066   18:51:48 event.app_repeat -- event/event.sh@19 -- # repeat_pid=74951
00:07:17.066   18:51:48 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:07:17.066   18:51:48 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:07:17.066  Process app_repeat pid: 74951
00:07:17.066   18:51:48 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 74951'
00:07:17.066   18:51:48 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:07:17.066  spdk_app_start Round 0
00:07:17.066   18:51:48 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
00:07:17.066   18:51:48 event.app_repeat -- event/event.sh@25 -- # waitforlisten 74951 /var/tmp/spdk-nbd.sock
00:07:17.066   18:51:48 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 74951 ']'
00:07:17.066   18:51:48 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:07:17.066   18:51:48 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:17.066  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:07:17.066   18:51:48 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:07:17.066   18:51:48 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:17.066   18:51:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:07:17.066  [2024-12-13 18:51:48.798632] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:07:17.066  [2024-12-13 18:51:48.798716] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74951 ]
00:07:17.325  [2024-12-13 18:51:48.938172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:17.325  [2024-12-13 18:51:48.974647] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:07:17.325  [2024-12-13 18:51:48.974654] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:07:17.325   18:51:49 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:17.325   18:51:49 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:07:17.325   18:51:49 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:07:17.583  Malloc0
00:07:17.842   18:51:49 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:07:17.842  Malloc1
00:07:18.101   18:51:49 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:07:18.101   18:51:49 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:18.101   18:51:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:07:18.101   18:51:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:07:18.101   18:51:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:18.101   18:51:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:07:18.101   18:51:49 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:07:18.101   18:51:49 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:18.101   18:51:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:07:18.101   18:51:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:07:18.101   18:51:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:18.101   18:51:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:07:18.101   18:51:49 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:07:18.101   18:51:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:07:18.101   18:51:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:18.101   18:51:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:07:18.101  /dev/nbd0
00:07:18.360    18:51:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:07:18.360   18:51:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:07:18.360   18:51:49 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:07:18.360   18:51:49 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:07:18.360   18:51:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:07:18.360   18:51:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:07:18.360   18:51:49 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:07:18.360   18:51:49 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:07:18.360   18:51:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:07:18.360   18:51:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:07:18.360   18:51:49 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:07:18.360  1+0 records in
00:07:18.360  1+0 records out
00:07:18.360  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000304315 s, 13.5 MB/s
00:07:18.360    18:51:49 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:07:18.360   18:51:49 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:07:18.360   18:51:49 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:07:18.360   18:51:49 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:07:18.360   18:51:49 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:07:18.360   18:51:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:18.360   18:51:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:18.360   18:51:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:07:18.618  /dev/nbd1
00:07:18.619    18:51:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:07:18.619   18:51:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:07:18.619   18:51:50 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:07:18.619   18:51:50 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:07:18.619   18:51:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:07:18.619   18:51:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:07:18.619   18:51:50 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:07:18.619   18:51:50 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:07:18.619   18:51:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:07:18.619   18:51:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:07:18.619   18:51:50 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:07:18.619  1+0 records in
00:07:18.619  1+0 records out
00:07:18.619  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000260188 s, 15.7 MB/s
00:07:18.619    18:51:50 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:07:18.619   18:51:50 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:07:18.619   18:51:50 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:07:18.619   18:51:50 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:07:18.619   18:51:50 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:07:18.619   18:51:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:18.619   18:51:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:18.619    18:51:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:07:18.619    18:51:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:18.619     18:51:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:07:18.877    18:51:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:07:18.877    {
00:07:18.877      "bdev_name": "Malloc0",
00:07:18.877      "nbd_device": "/dev/nbd0"
00:07:18.877    },
00:07:18.877    {
00:07:18.877      "bdev_name": "Malloc1",
00:07:18.877      "nbd_device": "/dev/nbd1"
00:07:18.877    }
00:07:18.877  ]'
00:07:18.877     18:51:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:07:18.877    {
00:07:18.877      "bdev_name": "Malloc0",
00:07:18.877      "nbd_device": "/dev/nbd0"
00:07:18.877    },
00:07:18.877    {
00:07:18.877      "bdev_name": "Malloc1",
00:07:18.877      "nbd_device": "/dev/nbd1"
00:07:18.877    }
00:07:18.877  ]'
00:07:18.877     18:51:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:18.877    18:51:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:07:18.877  /dev/nbd1'
00:07:18.877     18:51:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:07:18.877  /dev/nbd1'
00:07:18.877     18:51:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:18.877    18:51:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:07:18.877    18:51:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:07:18.877   18:51:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:07:18.877   18:51:50 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:07:18.877   18:51:50 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:07:18.877   18:51:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:18.877   18:51:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:07:18.877   18:51:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:07:18.878   18:51:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:07:18.878   18:51:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:07:18.878   18:51:50 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:07:18.878  256+0 records in
00:07:18.878  256+0 records out
00:07:18.878  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00749755 s, 140 MB/s
00:07:18.878   18:51:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:07:18.878   18:51:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:07:19.136  256+0 records in
00:07:19.136  256+0 records out
00:07:19.136  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0214202 s, 49.0 MB/s
00:07:19.136   18:51:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:07:19.136   18:51:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:07:19.136  256+0 records in
00:07:19.136  256+0 records out
00:07:19.136  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0267893 s, 39.1 MB/s
00:07:19.136   18:51:50 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:07:19.136   18:51:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:19.136   18:51:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:07:19.136   18:51:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:07:19.136   18:51:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:07:19.136   18:51:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:07:19.136   18:51:50 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:07:19.137   18:51:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:07:19.137   18:51:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:07:19.137   18:51:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:07:19.137   18:51:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:07:19.137   18:51:50 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:07:19.137   18:51:50 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:07:19.137   18:51:50 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:19.137   18:51:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:19.137   18:51:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:07:19.137   18:51:50 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:07:19.137   18:51:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:19.137   18:51:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:07:19.396    18:51:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:07:19.396   18:51:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:07:19.396   18:51:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:07:19.396   18:51:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:19.396   18:51:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:19.396   18:51:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:07:19.396   18:51:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:07:19.396   18:51:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:07:19.396   18:51:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:19.396   18:51:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:07:19.654    18:51:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:07:19.654   18:51:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:07:19.654   18:51:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:07:19.654   18:51:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:19.655   18:51:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:19.655   18:51:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:07:19.655   18:51:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:07:19.655   18:51:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:07:19.655    18:51:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:07:19.655    18:51:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:19.655     18:51:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:07:19.913    18:51:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:07:19.913     18:51:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:07:19.913     18:51:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:20.172    18:51:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:07:20.172     18:51:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:07:20.172     18:51:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:20.172     18:51:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:07:20.172    18:51:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:07:20.172    18:51:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:07:20.172   18:51:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:07:20.172   18:51:51 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:07:20.172   18:51:51 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:07:20.172   18:51:51 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:07:20.431   18:51:52 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:07:20.431  [2024-12-13 18:51:52.143884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:20.431  [2024-12-13 18:51:52.168088] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:07:20.431  [2024-12-13 18:51:52.168096] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:07:20.431  [2024-12-13 18:51:52.220662] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:07:20.431  [2024-12-13 18:51:52.220747] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
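The round that just shut down above ran nbd_rpc_data_verify end to end: random data written through each /dev/nbdX, then byte-compared back. The following is a minimal standalone sketch of that write/verify flow, assuming an illustrative device list and temp-file path rather than the test's own variables; it is not the SPDK helper itself, only a replay of the dd/cmp commands visible in the trace.

  #!/usr/bin/env bash
  # Sketch of the nbd write/verify pattern traced above (assumptions:
  # device list and temp-file path are examples, not the test's values).
  set -euo pipefail

  nbd_list=(/dev/nbd0 /dev/nbd1)
  tmp_file=$(mktemp /tmp/nbdrandtest.XXXXXX)

  # write phase: 256 blocks of 4 KiB random data, pushed with O_DIRECT
  dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
  for dev in "${nbd_list[@]}"; do
      dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
  done

  # verify phase: compare the first 1 MiB of each device with the source file
  for dev in "${nbd_list[@]}"; do
      cmp -b -n 1M "$tmp_file" "$dev"
  done

  rm -f "$tmp_file"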
00:07:23.716   18:51:55 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:07:23.716  spdk_app_start Round 1
00:07:23.716   18:51:55 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
00:07:23.716   18:51:55 event.app_repeat -- event/event.sh@25 -- # waitforlisten 74951 /var/tmp/spdk-nbd.sock
00:07:23.716   18:51:55 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 74951 ']'
00:07:23.716   18:51:55 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:07:23.716   18:51:55 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:23.716  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:07:23.716   18:51:55 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:07:23.716   18:51:55 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:23.716   18:51:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:07:23.716   18:51:55 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:23.716   18:51:55 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:07:23.716   18:51:55 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:07:23.716  Malloc0
00:07:23.716   18:51:55 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:07:23.975  Malloc1
00:07:23.975   18:51:55 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:07:23.975   18:51:55 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:23.975   18:51:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:07:23.975   18:51:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:07:23.975   18:51:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:23.975   18:51:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:07:23.975   18:51:55 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:07:23.975   18:51:55 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:23.975   18:51:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:07:23.975   18:51:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:07:23.975   18:51:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:23.975   18:51:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:07:23.975   18:51:55 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:07:23.975   18:51:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:07:23.975   18:51:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:23.975   18:51:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:07:24.234  /dev/nbd0
00:07:24.234    18:51:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:07:24.234   18:51:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:07:24.234   18:51:56 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:07:24.234   18:51:56 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:07:24.234   18:51:56 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:07:24.234   18:51:56 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:07:24.234   18:51:56 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:07:24.234   18:51:56 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:07:24.234   18:51:56 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:07:24.234   18:51:56 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:07:24.234   18:51:56 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:07:24.234  1+0 records in
00:07:24.234  1+0 records out
00:07:24.234  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000258832 s, 15.8 MB/s
00:07:24.234    18:51:56 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:07:24.234   18:51:56 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:07:24.234   18:51:56 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:07:24.234   18:51:56 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:07:24.234   18:51:56 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:07:24.234   18:51:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:24.234   18:51:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:24.234   18:51:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:07:24.493  /dev/nbd1
00:07:24.751    18:51:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:07:24.751   18:51:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:07:24.751   18:51:56 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:07:24.751   18:51:56 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:07:24.751   18:51:56 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:07:24.751   18:51:56 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:07:24.751   18:51:56 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:07:24.751   18:51:56 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:07:24.751   18:51:56 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:07:24.751   18:51:56 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:07:24.751   18:51:56 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:07:24.751  1+0 records in
00:07:24.751  1+0 records out
00:07:24.751  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000229669 s, 17.8 MB/s
00:07:24.751    18:51:56 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:07:24.751   18:51:56 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:07:24.751   18:51:56 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:07:24.751   18:51:56 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:07:24.751   18:51:56 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:07:24.751   18:51:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:24.751   18:51:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:24.751    18:51:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:07:24.751    18:51:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:24.752     18:51:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:07:25.010    18:51:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:07:25.010    {
00:07:25.010      "bdev_name": "Malloc0",
00:07:25.010      "nbd_device": "/dev/nbd0"
00:07:25.010    },
00:07:25.010    {
00:07:25.010      "bdev_name": "Malloc1",
00:07:25.010      "nbd_device": "/dev/nbd1"
00:07:25.010    }
00:07:25.010  ]'
00:07:25.010     18:51:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:07:25.010    {
00:07:25.010      "bdev_name": "Malloc0",
00:07:25.010      "nbd_device": "/dev/nbd0"
00:07:25.010    },
00:07:25.010    {
00:07:25.010      "bdev_name": "Malloc1",
00:07:25.010      "nbd_device": "/dev/nbd1"
00:07:25.010    }
00:07:25.010  ]'
00:07:25.010     18:51:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:25.010    18:51:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:07:25.010  /dev/nbd1'
00:07:25.010     18:51:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:07:25.010  /dev/nbd1'
00:07:25.010     18:51:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:25.010    18:51:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:07:25.010    18:51:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:07:25.010   18:51:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:07:25.010   18:51:56 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:07:25.010   18:51:56 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:07:25.010   18:51:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:25.010   18:51:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:07:25.010   18:51:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:07:25.010   18:51:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:07:25.010   18:51:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:07:25.010   18:51:56 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:07:25.010  256+0 records in
00:07:25.010  256+0 records out
00:07:25.010  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105663 s, 99.2 MB/s
00:07:25.010   18:51:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:07:25.010   18:51:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:07:25.010  256+0 records in
00:07:25.010  256+0 records out
00:07:25.010  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0223298 s, 47.0 MB/s
00:07:25.010   18:51:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:07:25.010   18:51:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:07:25.010  256+0 records in
00:07:25.010  256+0 records out
00:07:25.010  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0297649 s, 35.2 MB/s
00:07:25.010   18:51:56 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:07:25.011   18:51:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:25.011   18:51:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:07:25.011   18:51:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:07:25.011   18:51:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:07:25.011   18:51:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:07:25.011   18:51:56 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:07:25.011   18:51:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:07:25.011   18:51:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:07:25.011   18:51:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:07:25.011   18:51:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:07:25.011   18:51:56 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:07:25.011   18:51:56 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:07:25.011   18:51:56 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:25.011   18:51:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:25.011   18:51:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:07:25.011   18:51:56 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:07:25.011   18:51:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:25.011   18:51:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:07:25.270    18:51:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:07:25.270   18:51:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:07:25.270   18:51:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:07:25.270   18:51:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:25.270   18:51:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:25.270   18:51:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:07:25.270   18:51:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:07:25.270   18:51:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:07:25.270   18:51:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:25.270   18:51:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:07:25.836    18:51:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:07:25.836   18:51:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:07:25.836   18:51:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:07:25.836   18:51:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:25.836   18:51:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:25.836   18:51:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:07:25.836   18:51:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:07:25.836   18:51:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:07:25.836    18:51:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:07:25.836    18:51:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:25.837     18:51:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:07:26.095    18:51:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:07:26.095     18:51:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:07:26.095     18:51:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:26.096    18:51:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:07:26.096     18:51:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:07:26.096     18:51:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:26.096     18:51:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:07:26.096    18:51:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:07:26.096    18:51:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:07:26.096   18:51:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:07:26.096   18:51:57 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:07:26.096   18:51:57 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:07:26.096   18:51:57 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:07:26.354   18:51:58 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:07:26.613  [2024-12-13 18:51:58.185319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:26.613  [2024-12-13 18:51:58.209369] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:07:26.613  [2024-12-13 18:51:58.209381] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:07:26.613  [2024-12-13 18:51:58.263331] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:07:26.613  [2024-12-13 18:51:58.263417] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
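Every nbd_start_disk and nbd_stop_disk in the round above is gated on the /proc/partitions polling visible at nbd_common.sh@37-41 and autotest_common.sh@875-877. A sketch of that polling, assuming a short sleep between probes (the passing runs here break on the first attempt, so the retry delay never shows up in the log):

  # waitfornbd: poll until the kernel lists the device in /proc/partitions
  waitfornbd() {
      local nbd_name=$1 i
      for ((i = 1; i <= 20; i++)); do
          if grep -q -w "$nbd_name" /proc/partitions; then
              return 0                      # device is visible
          fi
          sleep 0.1                         # assumed back-off between probes
      done
      return 1                              # gave up after 20 attempts
  }

  # waitfornbd_exit: the inverse check, used after nbd_stop_disk
  waitfornbd_exit() {
      local nbd_name=$1 i
      for ((i = 1; i <= 20; i++)); do
          if ! grep -q -w "$nbd_name" /proc/partitions; then
              return 0                      # device is gone
          fi
          sleep 0.1
      done
      return 1
  }

  # e.g.: waitfornbd nbd0 && waitfornbd_exit nbd0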
00:07:29.899   18:52:01 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:07:29.899   18:52:01 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
00:07:29.899  spdk_app_start Round 2
00:07:29.899   18:52:01 event.app_repeat -- event/event.sh@25 -- # waitforlisten 74951 /var/tmp/spdk-nbd.sock
00:07:29.899   18:52:01 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 74951 ']'
00:07:29.899   18:52:01 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:07:29.899   18:52:01 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:29.899  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:07:29.899   18:52:01 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:07:29.899   18:52:01 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:29.899   18:52:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:07:29.899   18:52:01 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:29.899   18:52:01 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:07:29.899   18:52:01 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:07:29.899  Malloc0
00:07:29.899   18:52:01 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:07:30.158  Malloc1
00:07:30.158   18:52:01 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:07:30.158   18:52:01 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:30.158   18:52:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:07:30.158   18:52:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:07:30.158   18:52:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:30.158   18:52:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:07:30.158   18:52:01 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:07:30.158   18:52:01 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:30.158   18:52:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:07:30.158   18:52:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:07:30.158   18:52:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:30.158   18:52:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:07:30.158   18:52:01 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:07:30.158   18:52:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:07:30.158   18:52:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:30.158   18:52:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:07:30.417  /dev/nbd0
00:07:30.417    18:52:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:07:30.417   18:52:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:07:30.417   18:52:02 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:07:30.417   18:52:02 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:07:30.417   18:52:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:07:30.417   18:52:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:07:30.417   18:52:02 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:07:30.417   18:52:02 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:07:30.417   18:52:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:07:30.417   18:52:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:07:30.417   18:52:02 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:07:30.417  1+0 records in
00:07:30.417  1+0 records out
00:07:30.417  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000283244 s, 14.5 MB/s
00:07:30.417    18:52:02 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:07:30.417   18:52:02 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:07:30.417   18:52:02 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:07:30.417   18:52:02 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:07:30.417   18:52:02 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:07:30.417   18:52:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:30.417   18:52:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:30.417   18:52:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:07:30.676  /dev/nbd1
00:07:30.935    18:52:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:07:30.935   18:52:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:07:30.935   18:52:02 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:07:30.935   18:52:02 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:07:30.935   18:52:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:07:30.935   18:52:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:07:30.935   18:52:02 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:07:30.935   18:52:02 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:07:30.935   18:52:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:07:30.935   18:52:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:07:30.935   18:52:02 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:07:30.935  1+0 records in
00:07:30.935  1+0 records out
00:07:30.935  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000277607 s, 14.8 MB/s
00:07:30.935    18:52:02 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:07:30.935   18:52:02 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:07:30.935   18:52:02 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:07:30.935   18:52:02 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:07:30.935   18:52:02 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:07:30.935   18:52:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:30.935   18:52:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:30.935    18:52:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:07:30.935    18:52:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:30.935     18:52:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:07:31.193    18:52:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:07:31.193    {
00:07:31.193      "bdev_name": "Malloc0",
00:07:31.193      "nbd_device": "/dev/nbd0"
00:07:31.193    },
00:07:31.193    {
00:07:31.193      "bdev_name": "Malloc1",
00:07:31.193      "nbd_device": "/dev/nbd1"
00:07:31.193    }
00:07:31.193  ]'
00:07:31.193     18:52:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:07:31.193    {
00:07:31.193      "bdev_name": "Malloc0",
00:07:31.193      "nbd_device": "/dev/nbd0"
00:07:31.193    },
00:07:31.193    {
00:07:31.193      "bdev_name": "Malloc1",
00:07:31.193      "nbd_device": "/dev/nbd1"
00:07:31.193    }
00:07:31.193  ]'
00:07:31.193     18:52:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:31.193    18:52:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:07:31.193  /dev/nbd1'
00:07:31.193     18:52:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:07:31.193  /dev/nbd1'
00:07:31.193     18:52:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:31.193    18:52:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:07:31.194    18:52:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:07:31.194   18:52:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:07:31.194   18:52:02 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:07:31.194   18:52:02 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:07:31.194   18:52:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:31.194   18:52:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:07:31.194   18:52:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:07:31.194   18:52:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:07:31.194   18:52:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:07:31.194   18:52:02 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:07:31.194  256+0 records in
00:07:31.194  256+0 records out
00:07:31.194  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00896858 s, 117 MB/s
00:07:31.194   18:52:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:07:31.194   18:52:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:07:31.194  256+0 records in
00:07:31.194  256+0 records out
00:07:31.194  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0235973 s, 44.4 MB/s
00:07:31.194   18:52:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:07:31.194   18:52:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:07:31.194  256+0 records in
00:07:31.194  256+0 records out
00:07:31.194  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0212278 s, 49.4 MB/s
00:07:31.194   18:52:02 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:07:31.194   18:52:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:31.194   18:52:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:07:31.194   18:52:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:07:31.194   18:52:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:07:31.194   18:52:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:07:31.194   18:52:02 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:07:31.194   18:52:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:07:31.194   18:52:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:07:31.194   18:52:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:07:31.194   18:52:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:07:31.194   18:52:02 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:07:31.194   18:52:02 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:07:31.194   18:52:02 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:31.194   18:52:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:31.194   18:52:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:07:31.194   18:52:02 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:07:31.194   18:52:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:31.194   18:52:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:07:31.452    18:52:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:07:31.453   18:52:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:07:31.453   18:52:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:07:31.453   18:52:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:31.453   18:52:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:31.453   18:52:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:07:31.453   18:52:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:07:31.453   18:52:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:07:31.453   18:52:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:31.453   18:52:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:07:31.711    18:52:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:07:31.711   18:52:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:07:31.711   18:52:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:07:31.711   18:52:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:31.711   18:52:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:31.711   18:52:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:07:31.711   18:52:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:07:31.711   18:52:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:07:31.711    18:52:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:07:31.711    18:52:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:31.711     18:52:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:07:31.969    18:52:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:07:31.969     18:52:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:07:31.969     18:52:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:32.228    18:52:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:07:32.228     18:52:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:07:32.228     18:52:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:32.228     18:52:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:07:32.228    18:52:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:07:32.228    18:52:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:07:32.228   18:52:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:07:32.228   18:52:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:07:32.228   18:52:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:07:32.228   18:52:03 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:07:32.487   18:52:04 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:07:32.746  [2024-12-13 18:52:04.320776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:32.746  [2024-12-13 18:52:04.344885] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:07:32.746  [2024-12-13 18:52:04.344897] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:07:32.746  [2024-12-13 18:52:04.397581] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:07:32.746  [2024-12-13 18:52:04.397679] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
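The teardown check repeated after each round (nbd_common.sh@104-105) counts the devices still exported by the spdk-nbd app. A hedged sketch of that nbd_get_count pattern; the rpc.py path and socket are the ones printed in the log, and the wrapper function itself is illustrative:

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # path as printed in the log
  rpc_server=/var/tmp/spdk-nbd.sock

  nbd_get_count() {
      local disks_json disks_name count
      disks_json=$("$rpc_py" -s "$rpc_server" nbd_get_disks)
      disks_name=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
      # grep -c exits non-zero on a zero count, hence the "|| true" --
      # the same reason nbd_common.sh@65 shows "true" after teardown above
      count=$(echo "$disks_name" | grep -c /dev/nbd || true)
      echo "$count"
  }

  # after stopping both disks the expected result is 0:
  # [ "$(nbd_get_count)" -eq 0 ] || echo "stale NBD exports remain"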
00:07:36.032   18:52:07 event.app_repeat -- event/event.sh@38 -- # waitforlisten 74951 /var/tmp/spdk-nbd.sock
00:07:36.032   18:52:07 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 74951 ']'
00:07:36.032   18:52:07 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:07:36.032   18:52:07 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:36.032  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:07:36.032   18:52:07 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:07:36.032   18:52:07 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:36.032   18:52:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:07:36.033   18:52:07 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:36.033   18:52:07 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:07:36.033   18:52:07 event.app_repeat -- event/event.sh@39 -- # killprocess 74951
00:07:36.033   18:52:07 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 74951 ']'
00:07:36.033   18:52:07 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 74951
00:07:36.033    18:52:07 event.app_repeat -- common/autotest_common.sh@959 -- # uname
00:07:36.033   18:52:07 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:36.033    18:52:07 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74951
00:07:36.033   18:52:07 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:36.033   18:52:07 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:36.033  killing process with pid 74951
00:07:36.033   18:52:07 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74951'
00:07:36.033   18:52:07 event.app_repeat -- common/autotest_common.sh@973 -- # kill 74951
00:07:36.033   18:52:07 event.app_repeat -- common/autotest_common.sh@978 -- # wait 74951
00:07:36.033  spdk_app_start is called in Round 0.
00:07:36.033  Shutdown signal received, stop current app iteration
00:07:36.033  Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 reinitialization...
00:07:36.033  spdk_app_start is called in Round 1.
00:07:36.033  Shutdown signal received, stop current app iteration
00:07:36.033  Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 reinitialization...
00:07:36.033  spdk_app_start is called in Round 2.
00:07:36.033  Shutdown signal received, stop current app iteration
00:07:36.033  Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 reinitialization...
00:07:36.033  spdk_app_start is called in Round 3.
00:07:36.033  Shutdown signal received, stop current app iteration
00:07:36.033   18:52:07 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:07:36.033   18:52:07 event.app_repeat -- event/event.sh@42 -- # return 0
00:07:36.033  
00:07:36.033  real	0m18.886s
00:07:36.033  user	0m43.150s
00:07:36.033  sys	0m3.011s
00:07:36.033   18:52:07 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:36.033   18:52:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:07:36.033  ************************************
00:07:36.033  END TEST app_repeat
00:07:36.033  ************************************
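Before app_repeat ends, the trace walks through the killprocess teardown from autotest_common.sh (pid check, reactor name check, SIGTERM, wait). A simplified reimplementation of that sequence, offered only as an illustration of the pattern, not the autotest helper itself:

  killprocess() {
      local pid=$1 process_name
      kill -0 "$pid" 2>/dev/null || return 0           # already gone
      process_name=$(ps --no-headers -o comm= "$pid")  # e.g. "reactor_0"
      if [ "$process_name" = sudo ]; then
          return 1                                     # never SIGTERM a sudo wrapper
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      while kill -0 "$pid" 2>/dev/null; do             # wait for it to exit
          sleep 0.1
      done
  }

  # killprocess 74951   # pid as reported by waitforlisten earlier in the run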
00:07:36.033   18:52:07 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:07:36.033   18:52:07 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh
00:07:36.033   18:52:07 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:36.033   18:52:07 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:36.033   18:52:07 event -- common/autotest_common.sh@10 -- # set +x
00:07:36.033  ************************************
00:07:36.033  START TEST cpu_locks
00:07:36.033  ************************************
00:07:36.033   18:52:07 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh
00:07:36.033  * Looking for test storage...
00:07:36.033  * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:07:36.033    18:52:07 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:07:36.033     18:52:07 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version
00:07:36.033     18:52:07 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:07:36.291    18:52:07 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:07:36.291    18:52:07 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:36.291    18:52:07 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:36.291    18:52:07 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:36.291    18:52:07 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-:
00:07:36.291    18:52:07 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1
00:07:36.291    18:52:07 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-:
00:07:36.291    18:52:07 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2
00:07:36.291    18:52:07 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<'
00:07:36.291    18:52:07 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2
00:07:36.291    18:52:07 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1
00:07:36.291    18:52:07 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:36.291    18:52:07 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in
00:07:36.291    18:52:07 event.cpu_locks -- scripts/common.sh@345 -- # : 1
00:07:36.291    18:52:07 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:36.291    18:52:07 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:36.291     18:52:07 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1
00:07:36.291     18:52:07 event.cpu_locks -- scripts/common.sh@353 -- # local d=1
00:07:36.291     18:52:07 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:36.291     18:52:07 event.cpu_locks -- scripts/common.sh@355 -- # echo 1
00:07:36.292    18:52:07 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1
00:07:36.292     18:52:07 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2
00:07:36.292     18:52:07 event.cpu_locks -- scripts/common.sh@353 -- # local d=2
00:07:36.292     18:52:07 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:36.292     18:52:07 event.cpu_locks -- scripts/common.sh@355 -- # echo 2
00:07:36.292    18:52:07 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2
00:07:36.292    18:52:07 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:36.292    18:52:07 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:36.292    18:52:07 event.cpu_locks -- scripts/common.sh@368 -- # return 0
00:07:36.292    18:52:07 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:36.292    18:52:07 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:07:36.292  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:36.292  		--rc genhtml_branch_coverage=1
00:07:36.292  		--rc genhtml_function_coverage=1
00:07:36.292  		--rc genhtml_legend=1
00:07:36.292  		--rc geninfo_all_blocks=1
00:07:36.292  		--rc geninfo_unexecuted_blocks=1
00:07:36.292  		
00:07:36.292  		'
00:07:36.292    18:52:07 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:07:36.292  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:36.292  		--rc genhtml_branch_coverage=1
00:07:36.292  		--rc genhtml_function_coverage=1
00:07:36.292  		--rc genhtml_legend=1
00:07:36.292  		--rc geninfo_all_blocks=1
00:07:36.292  		--rc geninfo_unexecuted_blocks=1
00:07:36.292  		
00:07:36.292  		'
00:07:36.292    18:52:07 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:07:36.292  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:36.292  		--rc genhtml_branch_coverage=1
00:07:36.292  		--rc genhtml_function_coverage=1
00:07:36.292  		--rc genhtml_legend=1
00:07:36.292  		--rc geninfo_all_blocks=1
00:07:36.292  		--rc geninfo_unexecuted_blocks=1
00:07:36.292  		
00:07:36.292  		'
00:07:36.292    18:52:07 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:07:36.292  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:36.292  		--rc genhtml_branch_coverage=1
00:07:36.292  		--rc genhtml_function_coverage=1
00:07:36.292  		--rc genhtml_legend=1
00:07:36.292  		--rc geninfo_all_blocks=1
00:07:36.292  		--rc geninfo_unexecuted_blocks=1
00:07:36.292  		
00:07:36.292  		'
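The lcov probe above runs cmp_versions 1.15 '<' 2 from scripts/common.sh: both versions are split on ".-:" and compared field by field. A simplified sketch of that comparison, assuming purely numeric fields (the real helper also normalizes non-numeric components):

  cmp_versions() {
      local ver1 ver1_l ver2 ver2_l op=$2 v
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$3"
      ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
      for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
          if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then
              [[ $op == '>' || $op == '>=' ]]; return
          fi
          if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then
              [[ $op == '<' || $op == '<=' ]]; return
          fi
      done
      [[ $op == '==' || $op == '<=' || $op == '>=' ]]   # all fields equal
  }

  # the check performed in the trace: is the installed lcov older than 2.x?
  cmp_versions 1.15 '<' 2 && echo "lcov predates 2.x"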
00:07:36.292   18:52:07 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:07:36.292   18:52:07 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:07:36.292   18:52:07 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:07:36.292   18:52:07 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:07:36.292   18:52:07 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:36.292   18:52:07 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:36.292   18:52:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:36.292  ************************************
00:07:36.292  START TEST default_locks
00:07:36.292  ************************************
00:07:36.292   18:52:07 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks
00:07:36.292   18:52:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=75572
00:07:36.292   18:52:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 75572
00:07:36.292   18:52:07 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 75572 ']'
00:07:36.292   18:52:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:07:36.292   18:52:07 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:36.292   18:52:07 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:36.292  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:36.292   18:52:07 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:36.292   18:52:07 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:36.292   18:52:07 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:07:36.292  [2024-12-13 18:52:07.991033] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:07:36.292  [2024-12-13 18:52:07.991164] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75572 ]
00:07:36.551  [2024-12-13 18:52:08.139857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:36.551  [2024-12-13 18:52:08.172621] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:07:37.120   18:52:08 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:37.120   18:52:08 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0
00:07:37.120   18:52:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 75572
00:07:37.120   18:52:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 75572
00:07:37.120   18:52:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:07:37.688   18:52:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 75572
00:07:37.688   18:52:09 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 75572 ']'
00:07:37.688   18:52:09 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 75572
00:07:37.688    18:52:09 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname
00:07:37.688   18:52:09 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:37.688    18:52:09 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75572
00:07:37.688   18:52:09 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:37.688   18:52:09 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:37.688  killing process with pid 75572
00:07:37.688   18:52:09 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75572'
00:07:37.688   18:52:09 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 75572
00:07:37.688   18:52:09 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 75572
00:07:37.948   18:52:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 75572
00:07:37.948   18:52:09 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0
00:07:37.948   18:52:09 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 75572
00:07:37.948   18:52:09 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:07:37.948   18:52:09 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:37.948    18:52:09 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:07:37.948   18:52:09 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:37.948   18:52:09 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 75572
00:07:37.948   18:52:09 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 75572 ']'
00:07:37.948   18:52:09 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:37.948   18:52:09 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:37.948  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:37.948   18:52:09 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:37.948   18:52:09 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:37.948   18:52:09 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:07:37.948  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (75572) - No such process
00:07:37.948  ERROR: process (pid: 75572) is no longer running
00:07:37.948   18:52:09 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:37.948   18:52:09 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1
00:07:37.948   18:52:09 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1
00:07:37.948   18:52:09 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:07:37.948   18:52:09 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:07:37.948   18:52:09 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:07:37.948   18:52:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:07:37.948   18:52:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:07:37.948   18:52:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:07:37.948   18:52:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:07:37.948  
00:07:37.948  real	0m1.745s
00:07:37.948  user	0m1.821s
00:07:37.948  sys	0m0.557s
00:07:37.948   18:52:09 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:37.948  ************************************
00:07:37.948  END TEST default_locks
00:07:37.948  ************************************
00:07:37.948   18:52:09 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
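The default_locks test above asserts the lock with locks_exist (cpu_locks.sh@22): lslocks lists the file locks held by the spdk_tgt pid and the test greps for the spdk_cpu_lock name. A minimal sketch of that check, with the usage line as an assumption about how a caller might apply it:

  locks_exist() {
      local pid=$1
      # the target is expected to hold a file lock whose name contains
      # "spdk_cpu_lock" for each core in its mask (-m 0x1 above -> one lock)
      lslocks -p "$pid" | grep -q spdk_cpu_lock
  }

  # locks_exist "$spdk_tgt_pid" || echo "spdk_tgt $spdk_tgt_pid holds no core lock"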
00:07:37.948   18:52:09 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:07:37.948   18:52:09 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:37.948   18:52:09 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:37.948   18:52:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:37.948  ************************************
00:07:37.948  START TEST default_locks_via_rpc
00:07:37.948  ************************************
00:07:37.948   18:52:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc
00:07:37.948   18:52:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=75636
00:07:37.948   18:52:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 75636
00:07:37.948   18:52:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 75636 ']'
00:07:37.948   18:52:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:07:37.948   18:52:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:37.948   18:52:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:37.948  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:37.948   18:52:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:37.948   18:52:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:37.948   18:52:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:38.207  [2024-12-13 18:52:09.771055] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:07:38.207  [2024-12-13 18:52:09.771170] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75636 ]
00:07:38.207  [2024-12-13 18:52:09.912889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:38.207  [2024-12-13 18:52:09.946626] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:07:38.466   18:52:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:38.466   18:52:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:07:38.466   18:52:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:07:38.466   18:52:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:38.466   18:52:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:38.466   18:52:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:38.466   18:52:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:07:38.466   18:52:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:07:38.466   18:52:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:07:38.466   18:52:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:07:38.466   18:52:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:07:38.466   18:52:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:38.466   18:52:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:38.466   18:52:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:38.466   18:52:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 75636
00:07:38.466   18:52:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 75636
00:07:38.466   18:52:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:07:38.725   18:52:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 75636
00:07:38.725   18:52:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 75636 ']'
00:07:38.725   18:52:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 75636
00:07:38.725    18:52:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname
00:07:38.725   18:52:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:38.725    18:52:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75636
00:07:38.725   18:52:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:38.725   18:52:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:38.725  killing process with pid 75636
00:07:38.725   18:52:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75636'
00:07:38.725   18:52:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 75636
00:07:38.725   18:52:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 75636
00:07:39.293  
00:07:39.293  real	0m1.148s
00:07:39.293  user	0m1.084s
00:07:39.293  sys	0m0.450s
00:07:39.293   18:52:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:39.293   18:52:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:39.293  ************************************
00:07:39.294  END TEST default_locks_via_rpc
00:07:39.294  ************************************
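The default_locks_via_rpc run above drives the per-core lock files over the RPC socket instead of at start-up: the target launches with the locks in their default (enabled) state, framework_disable_cpumask_locks is called, the absence of spdk_cpu_lock files is checked, and framework_enable_cpumask_locks restores them before the target is killed. A rough stand-alone sketch of that flow, assuming an SPDK checkout at $SPDK_DIR and the stock scripts/rpc.py client (both placeholders here, not taken from the log):

  # start the target on core 0 with cpumask locks in their default (enabled) state
  $SPDK_DIR/build/bin/spdk_tgt -m 0x1 &
  tgt_pid=$!
  sleep 2   # the harness above polls /var/tmp/spdk.sock with waitforlisten instead of sleeping

  # drop the locks, verify nothing is left locked, then take them again
  $SPDK_DIR/scripts/rpc.py framework_disable_cpumask_locks
  ls /var/tmp/spdk_cpu_lock_* 2>/dev/null || echo "no cpu lock files while locks are disabled"
  $SPDK_DIR/scripts/rpc.py framework_enable_cpumask_locks

  # the target process should once again hold a lock on /var/tmp/spdk_cpu_lock_000
  lslocks -p "$tgt_pid" | grep spdk_cpu_lock

  kill "$tgt_pid"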
00:07:39.294   18:52:10 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:07:39.294   18:52:10 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:39.294   18:52:10 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:39.294   18:52:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:39.294  ************************************
00:07:39.294  START TEST non_locking_app_on_locked_coremask
00:07:39.294  ************************************
00:07:39.294   18:52:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask
00:07:39.294   18:52:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=75687
00:07:39.294   18:52:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:07:39.294   18:52:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 75687 /var/tmp/spdk.sock
00:07:39.294   18:52:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 75687 ']'
00:07:39.294   18:52:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:39.294   18:52:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:39.294  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:39.294   18:52:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:39.294   18:52:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:39.294   18:52:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:39.294  [2024-12-13 18:52:10.961515] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:07:39.294  [2024-12-13 18:52:10.961638] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75687 ]
00:07:39.294  [2024-12-13 18:52:11.094678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:39.553  [2024-12-13 18:52:11.133048] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:07:39.812   18:52:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:39.812   18:52:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:07:39.812   18:52:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=75707
00:07:39.812   18:52:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:07:39.812   18:52:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 75707 /var/tmp/spdk2.sock
00:07:39.812   18:52:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 75707 ']'
00:07:39.812   18:52:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:07:39.812   18:52:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:39.812   18:52:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:07:39.812  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:07:39.812   18:52:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:39.812   18:52:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:39.812  [2024-12-13 18:52:11.441315] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:07:39.812  [2024-12-13 18:52:11.441409] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75707 ]
00:07:39.812  [2024-12-13 18:52:11.590539] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:07:39.812  [2024-12-13 18:52:11.590591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:40.070  [2024-12-13 18:52:11.660942] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:07:40.638   18:52:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:40.638   18:52:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:07:40.638   18:52:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 75687
00:07:40.638   18:52:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 75687
00:07:40.638   18:52:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:07:41.574   18:52:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 75687
00:07:41.574   18:52:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 75687 ']'
00:07:41.574   18:52:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 75687
00:07:41.574    18:52:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:07:41.574   18:52:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:41.574    18:52:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75687
00:07:41.574   18:52:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:41.575   18:52:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:41.575  killing process with pid 75687
00:07:41.575   18:52:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75687'
00:07:41.575   18:52:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 75687
00:07:41.575   18:52:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 75687
00:07:42.142   18:52:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 75707
00:07:42.142   18:52:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 75707 ']'
00:07:42.142   18:52:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 75707
00:07:42.142    18:52:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:07:42.142   18:52:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:42.142    18:52:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75707
00:07:42.142   18:52:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:42.142   18:52:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:42.142  killing process with pid 75707
00:07:42.142   18:52:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75707'
00:07:42.142   18:52:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 75707
00:07:42.142   18:52:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 75707
00:07:42.709  
00:07:42.709  real	0m3.397s
00:07:42.709  user	0m3.673s
00:07:42.709  sys	0m1.076s
00:07:42.709   18:52:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:42.709   18:52:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:42.709  ************************************
00:07:42.709  END TEST non_locking_app_on_locked_coremask
00:07:42.709  ************************************
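non_locking_app_on_locked_coremask starts a second target on the same core, but with --disable-cpumask-locks and its own RPC socket, so both processes coexist and only the first one holds /var/tmp/spdk_cpu_lock_000. A minimal reproduction along the same lines ($SPDK_DIR again being a placeholder):

  # first instance claims core 0 and takes the file lock
  $SPDK_DIR/build/bin/spdk_tgt -m 0x1 &
  pid1=$!
  sleep 2

  # second instance shares core 0 but opts out of the lock and uses a separate socket
  $SPDK_DIR/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
  pid2=$!
  sleep 2

  lslocks -p "$pid1" | grep spdk_cpu_lock   # only pid1 should show up here
  kill "$pid1" "$pid2"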
00:07:42.709   18:52:14 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:07:42.709   18:52:14 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:42.709   18:52:14 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:42.709   18:52:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:42.709  ************************************
00:07:42.709  START TEST locking_app_on_unlocked_coremask
00:07:42.709  ************************************
00:07:42.709   18:52:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask
00:07:42.709   18:52:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=75786
00:07:42.709   18:52:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 75786 /var/tmp/spdk.sock
00:07:42.709   18:52:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:07:42.709   18:52:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 75786 ']'
00:07:42.709   18:52:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:42.709   18:52:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:42.709  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:42.709   18:52:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:42.710   18:52:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:42.710   18:52:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:42.710  [2024-12-13 18:52:14.404675] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:07:42.710  [2024-12-13 18:52:14.404801] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75786 ]
00:07:42.968  [2024-12-13 18:52:14.542259] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:07:42.968  [2024-12-13 18:52:14.542310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:42.968  [2024-12-13 18:52:14.575457] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:07:43.228   18:52:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:43.228   18:52:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:07:43.228   18:52:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=75795
00:07:43.228   18:52:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 75795 /var/tmp/spdk2.sock
00:07:43.228   18:52:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:07:43.228   18:52:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 75795 ']'
00:07:43.228   18:52:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:07:43.228   18:52:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:43.228  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:07:43.228   18:52:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:07:43.228   18:52:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:43.228   18:52:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:43.228  [2024-12-13 18:52:14.908813] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:07:43.228  [2024-12-13 18:52:14.908930] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75795 ]
00:07:43.494  [2024-12-13 18:52:15.062995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:43.494  [2024-12-13 18:52:15.148431] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:07:44.429   18:52:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:44.429   18:52:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:07:44.429   18:52:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 75795
00:07:44.429   18:52:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:07:44.429   18:52:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 75795
00:07:44.997   18:52:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 75786
00:07:44.997   18:52:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 75786 ']'
00:07:44.997   18:52:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 75786
00:07:44.997    18:52:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:07:44.997   18:52:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:44.997    18:52:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75786
00:07:45.256  killing process with pid 75786
00:07:45.256   18:52:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:45.256   18:52:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:45.256   18:52:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75786'
00:07:45.256   18:52:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 75786
00:07:45.256   18:52:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 75786
00:07:45.824   18:52:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 75795
00:07:45.824   18:52:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 75795 ']'
00:07:45.824   18:52:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 75795
00:07:45.824    18:52:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:07:45.824   18:52:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:45.824    18:52:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75795
00:07:45.824  killing process with pid 75795
00:07:45.824   18:52:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:45.824   18:52:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:45.824   18:52:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75795'
00:07:45.824   18:52:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 75795
00:07:45.824   18:52:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 75795
00:07:46.392  ************************************
00:07:46.392  END TEST locking_app_on_unlocked_coremask
00:07:46.392  ************************************
00:07:46.392  
00:07:46.392  real	0m3.563s
00:07:46.392  user	0m3.943s
00:07:46.392  sys	0m1.083s
00:07:46.392   18:52:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:46.392   18:52:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
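locking_app_on_unlocked_coremask is the mirror case: the first target runs with --disable-cpumask-locks, so the second, lock-taking target on /var/tmp/spdk2.sock is the one that ends up owning the lock file (the lslocks check in the trace above runs against the second PID). One way to see which process owns a given lock, assuming the default util-linux lslocks column layout:

  # list every spdk cpumask lock with its owning command, PID and path
  lslocks | awk '/spdk_cpu_lock/ {print $1, $2, $NF}'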
00:07:46.392   18:52:17 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:07:46.392   18:52:17 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:46.392   18:52:17 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:46.392   18:52:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:46.392  ************************************
00:07:46.392  START TEST locking_app_on_locked_coremask
00:07:46.392  ************************************
00:07:46.392   18:52:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask
00:07:46.392   18:52:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=75876
00:07:46.392   18:52:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 75876 /var/tmp/spdk.sock
00:07:46.392   18:52:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:07:46.392   18:52:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 75876 ']'
00:07:46.392   18:52:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:46.392   18:52:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:46.392  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:46.392   18:52:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:46.392   18:52:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:46.392   18:52:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:46.392  [2024-12-13 18:52:18.041677] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:07:46.392  [2024-12-13 18:52:18.041783] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75876 ]
00:07:46.392  [2024-12-13 18:52:18.189145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:46.651  [2024-12-13 18:52:18.220543] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:07:46.920   18:52:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:46.920   18:52:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:07:46.920   18:52:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=75895
00:07:46.920   18:52:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:07:46.920   18:52:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 75895 /var/tmp/spdk2.sock
00:07:46.920   18:52:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0
00:07:46.920   18:52:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 75895 /var/tmp/spdk2.sock
00:07:46.920   18:52:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:07:46.920   18:52:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:46.920    18:52:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:07:46.920  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:07:46.920   18:52:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:46.920   18:52:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 75895 /var/tmp/spdk2.sock
00:07:46.920   18:52:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 75895 ']'
00:07:46.920   18:52:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:07:46.921   18:52:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:46.921   18:52:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:07:46.921   18:52:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:46.921   18:52:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:46.921  [2024-12-13 18:52:18.559149] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:07:46.921  [2024-12-13 18:52:18.559275] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75895 ]
00:07:46.921  [2024-12-13 18:52:18.722796] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 75876 has claimed it.
00:07:46.921  [2024-12-13 18:52:18.722837] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:07:47.503  ERROR: process (pid: 75895) is no longer running
00:07:47.503  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (75895) - No such process
00:07:47.503   18:52:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:47.503   18:52:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1
00:07:47.503   18:52:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1
00:07:47.504   18:52:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:07:47.504   18:52:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:07:47.504   18:52:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:07:47.504   18:52:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 75876
00:07:47.504   18:52:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 75876
00:07:47.504   18:52:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:07:48.071   18:52:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 75876
00:07:48.071   18:52:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 75876 ']'
00:07:48.071   18:52:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 75876
00:07:48.071    18:52:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:07:48.071   18:52:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:48.071    18:52:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75876
00:07:48.071  killing process with pid 75876
00:07:48.071   18:52:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:48.071   18:52:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:48.071   18:52:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75876'
00:07:48.071   18:52:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 75876
00:07:48.071   18:52:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 75876
00:07:48.330  ************************************
00:07:48.330  END TEST locking_app_on_locked_coremask
00:07:48.330  ************************************
00:07:48.330  
00:07:48.330  real	0m2.094s
00:07:48.330  user	0m2.317s
00:07:48.330  sys	0m0.623s
00:07:48.330   18:52:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:48.330   18:52:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
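In locking_app_on_locked_coremask both targets want the lock on core 0, so the second start fails with the claim_cpu_cores error above and exits before waitforlisten can connect (hence the "No such process" lines). A sketch of checking that failure path directly, assuming the refused start returns a non-zero exit status as the trace suggests:

  $SPDK_DIR/build/bin/spdk_tgt -m 0x1 &
  pid1=$!
  sleep 2

  # second target on the same core, without --disable-cpumask-locks: expected to refuse to start
  if ! $SPDK_DIR/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock; then
      echo "second target exited: core 0 already claimed by $pid1"
  fi
  kill "$pid1"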
00:07:48.330   18:52:20 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:07:48.330   18:52:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:48.330   18:52:20 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:48.330   18:52:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:48.330  ************************************
00:07:48.330  START TEST locking_overlapped_coremask
00:07:48.330  ************************************
00:07:48.330   18:52:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask
00:07:48.330   18:52:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=75942
00:07:48.330   18:52:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 75942 /var/tmp/spdk.sock
00:07:48.330   18:52:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7
00:07:48.330   18:52:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 75942 ']'
00:07:48.330   18:52:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:48.330   18:52:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:48.330  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:48.330   18:52:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:48.330   18:52:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:48.330   18:52:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:48.589  [2024-12-13 18:52:20.167996] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:07:48.589  [2024-12-13 18:52:20.168110] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75942 ]
00:07:48.589  [2024-12-13 18:52:20.307030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:07:48.589  [2024-12-13 18:52:20.348803] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:07:48.589  [2024-12-13 18:52:20.348934] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:07:48.589  [2024-12-13 18:52:20.348952] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:07:48.848   18:52:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:48.848   18:52:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0
00:07:48.848   18:52:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=75964
00:07:48.848   18:52:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:07:48.848   18:52:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 75964 /var/tmp/spdk2.sock
00:07:48.848   18:52:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0
00:07:48.848   18:52:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 75964 /var/tmp/spdk2.sock
00:07:48.848   18:52:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:07:48.848   18:52:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:48.848    18:52:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:07:48.848   18:52:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:48.848   18:52:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 75964 /var/tmp/spdk2.sock
00:07:48.849   18:52:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 75964 ']'
00:07:48.849   18:52:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:07:48.849   18:52:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:48.849  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:07:48.849   18:52:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:07:48.849   18:52:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:48.849   18:52:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:49.107  [2024-12-13 18:52:20.682037] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:07:49.107  [2024-12-13 18:52:20.682156] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75964 ]
00:07:49.107  [2024-12-13 18:52:20.840544] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 75942 has claimed it.
00:07:49.107  [2024-12-13 18:52:20.840623] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:07:49.674  ERROR: process (pid: 75964) is no longer running
00:07:49.674  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (75964) - No such process
00:07:49.674   18:52:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:49.674   18:52:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1
00:07:49.674   18:52:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1
00:07:49.674   18:52:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:07:49.674   18:52:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:07:49.674   18:52:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:07:49.674   18:52:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:07:49.674   18:52:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:07:49.674   18:52:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:07:49.674   18:52:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:07:49.674   18:52:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 75942
00:07:49.674   18:52:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 75942 ']'
00:07:49.674   18:52:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 75942
00:07:49.674    18:52:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname
00:07:49.675   18:52:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:49.675    18:52:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75942
00:07:49.675   18:52:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:49.675   18:52:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:49.675  killing process with pid 75942
00:07:49.675   18:52:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75942'
00:07:49.675   18:52:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 75942
00:07:49.675   18:52:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 75942
00:07:50.242  
00:07:50.242  real	0m1.694s
00:07:50.242  user	0m4.697s
00:07:50.242  sys	0m0.407s
00:07:50.242   18:52:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:50.242   18:52:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:50.242  ************************************
00:07:50.242  END TEST locking_overlapped_coremask
00:07:50.242  ************************************
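locking_overlapped_coremask repeats the conflict with multi-core masks: the first target holds 0x7 (cores 0-2), the second asks for 0x1c and is rejected because core 2 overlaps. Afterwards check_remaining_locks compares the lock files on disk with the set expected for mask 0x7, using the same glob/brace comparison visible in the trace:

  # the surviving 0x7 target should hold exactly locks 000..002
  locks=(/var/tmp/spdk_cpu_lock_*)
  expected=(/var/tmp/spdk_cpu_lock_{000..002})
  [[ "${locks[*]}" == "${expected[*]}" ]] && echo "only cores 0-2 are locked"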
00:07:50.242   18:52:21 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:07:50.242   18:52:21 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:50.242   18:52:21 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:50.242   18:52:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:50.242  ************************************
00:07:50.242  START TEST locking_overlapped_coremask_via_rpc
00:07:50.242  ************************************
00:07:50.242   18:52:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc
00:07:50.242   18:52:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=76010
00:07:50.242   18:52:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 76010 /var/tmp/spdk.sock
00:07:50.242   18:52:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:07:50.242   18:52:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 76010 ']'
00:07:50.242   18:52:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:50.242   18:52:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:50.242  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:50.242   18:52:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:50.242   18:52:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:50.242   18:52:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:50.242  [2024-12-13 18:52:21.917382] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:07:50.242  [2024-12-13 18:52:21.917479] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76010 ]
00:07:50.242  [2024-12-13 18:52:22.054665] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:07:50.242  [2024-12-13 18:52:22.054723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:07:50.501  [2024-12-13 18:52:22.090792] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:07:50.501  [2024-12-13 18:52:22.090951] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:07:50.501  [2024-12-13 18:52:22.090957] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:07:50.760   18:52:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:50.760   18:52:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:07:50.760   18:52:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=76021
00:07:50.760   18:52:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks
00:07:50.760   18:52:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 76021 /var/tmp/spdk2.sock
00:07:50.760   18:52:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 76021 ']'
00:07:50.760   18:52:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:07:50.760   18:52:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:50.760  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:07:50.760   18:52:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:07:50.760   18:52:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:50.760   18:52:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:50.760  [2024-12-13 18:52:22.423631] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:07:50.760  [2024-12-13 18:52:22.423754] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76021 ]
00:07:51.019  [2024-12-13 18:52:22.585339] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:07:51.019  [2024-12-13 18:52:22.585374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:07:51.019  [2024-12-13 18:52:22.673596] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:07:51.019  [2024-12-13 18:52:22.676383] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:07:51.019  [2024-12-13 18:52:22.676384] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4
00:07:51.587   18:52:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:51.587   18:52:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:07:51.587   18:52:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks
00:07:51.587   18:52:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:51.587   18:52:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:51.847   18:52:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:51.847   18:52:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:07:51.847   18:52:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0
00:07:51.847   18:52:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:07:51.847   18:52:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:07:51.847   18:52:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:51.847    18:52:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:07:51.847   18:52:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:51.847   18:52:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:07:51.847   18:52:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:51.847   18:52:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:51.847  [2024-12-13 18:52:23.422398] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 76010 has claimed it.
00:07:51.847  2024/12/13 18:52:23 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2
00:07:51.847  request:
00:07:51.847  {
00:07:51.847  "method": "framework_enable_cpumask_locks",
00:07:51.847  "params": {}
00:07:51.847  }
00:07:51.847  Got JSON-RPC error response
00:07:51.847  GoRPCClient: error on JSON-RPC call
00:07:51.847   18:52:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:07:51.847   18:52:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1
00:07:51.847   18:52:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:07:51.847   18:52:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:07:51.847   18:52:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:07:51.847   18:52:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 76010 /var/tmp/spdk.sock
00:07:51.847   18:52:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 76010 ']'
00:07:51.847   18:52:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:51.847   18:52:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:51.847   18:52:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:51.847  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:51.847   18:52:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:51.847   18:52:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:52.106   18:52:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:52.106   18:52:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:07:52.106   18:52:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 76021 /var/tmp/spdk2.sock
00:07:52.106   18:52:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 76021 ']'
00:07:52.106   18:52:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:07:52.106   18:52:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:52.106  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:07:52.106   18:52:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:07:52.106   18:52:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:52.106   18:52:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:52.366   18:52:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:52.366   18:52:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:07:52.366   18:52:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks
00:07:52.366   18:52:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:07:52.366   18:52:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:07:52.366   18:52:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:07:52.366  
00:07:52.366  real	0m2.120s
00:07:52.366  user	0m1.188s
00:07:52.366  sys	0m0.185s
00:07:52.366   18:52:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:52.366   18:52:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:52.366  ************************************
00:07:52.366  END TEST locking_overlapped_coremask_via_rpc
00:07:52.366  ************************************
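The _via_rpc variant above starts both overlapping targets with --disable-cpumask-locks, enables the locks on the first over RPC, and expects the second target's framework_enable_cpumask_locks call to fail with the Code=-32603 "Failed to claim CPU core: 2" response shown earlier. Driving that by hand would look roughly like this, again assuming scripts/rpc.py and its -s socket option:

  # first target (mask 0x7) takes its locks over the default socket
  $SPDK_DIR/scripts/rpc.py framework_enable_cpumask_locks

  # second target (mask 0x1c) overlaps on core 2, so this call should return the -32603 error
  $SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks \
      || echo "lock claim rejected as expected"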
00:07:52.366   18:52:24 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup
00:07:52.366   18:52:24 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 76010 ]]
00:07:52.366   18:52:24 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 76010
00:07:52.366   18:52:24 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 76010 ']'
00:07:52.366   18:52:24 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 76010
00:07:52.366    18:52:24 event.cpu_locks -- common/autotest_common.sh@959 -- # uname
00:07:52.366   18:52:24 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:52.366    18:52:24 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76010
00:07:52.366   18:52:24 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:52.366   18:52:24 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:52.366   18:52:24 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76010'
00:07:52.366  killing process with pid 76010
00:07:52.366   18:52:24 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 76010
00:07:52.366   18:52:24 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 76010
00:07:52.625   18:52:24 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 76021 ]]
00:07:52.625   18:52:24 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 76021
00:07:52.625   18:52:24 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 76021 ']'
00:07:52.625   18:52:24 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 76021
00:07:52.625    18:52:24 event.cpu_locks -- common/autotest_common.sh@959 -- # uname
00:07:52.625   18:52:24 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:52.625    18:52:24 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76021
00:07:52.883   18:52:24 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:07:52.884   18:52:24 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:07:52.884  killing process with pid 76021
00:07:52.884   18:52:24 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76021'
00:07:52.884   18:52:24 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 76021
00:07:52.884   18:52:24 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 76021
00:07:53.143   18:52:24 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f
00:07:53.143   18:52:24 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup
00:07:53.143   18:52:24 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 76010 ]]
00:07:53.143   18:52:24 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 76010
00:07:53.143   18:52:24 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 76010 ']'
00:07:53.143   18:52:24 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 76010
00:07:53.143  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (76010) - No such process
00:07:53.143  Process with pid 76010 is not found
00:07:53.143   18:52:24 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 76010 is not found'
00:07:53.143   18:52:24 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 76021 ]]
00:07:53.143   18:52:24 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 76021
00:07:53.143   18:52:24 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 76021 ']'
00:07:53.143   18:52:24 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 76021
00:07:53.143  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (76021) - No such process
00:07:53.143  Process with pid 76021 is not found
00:07:53.143   18:52:24 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 76021 is not found'
00:07:53.143   18:52:24 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f
00:07:53.143  
00:07:53.143  real	0m17.096s
00:07:53.143  user	0m30.012s
00:07:53.143  sys	0m5.258s
00:07:53.143   18:52:24 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:53.143   18:52:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:53.143  ************************************
00:07:53.143  END TEST cpu_locks
00:07:53.143  ************************************
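The cpu_locks cleanup above runs the killprocess helper from autotest_common.sh twice: it checks that a pid was supplied, probes it with kill -0, looks up the command name with ps on Linux (so reactors and sudo-wrapped processes can be treated differently), then kills and waits; once the reactors are gone, the trap-driven cleanup finds neither pid and only prints the "is not found" messages. A minimal bash sketch of that flow, with the sudo branch reduced to a comment and simplified error handling:

    killprocess() {
        local pid=$1
        # refuse to do anything without a pid
        [ -z "$pid" ] && return 1
        # kill -0 only probes for existence, it does not deliver a signal
        if ! kill -0 "$pid" 2>/dev/null; then
            echo "Process with pid $pid is not found"
            return 0
        fi
        if [ "$(uname)" = Linux ] &&
           [ "$(ps --no-headers -o comm= "$pid")" = sudo ]; then
            # the real helper treats sudo-wrapped targets specially; not shown here
            :
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true
    }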
00:07:53.143  
00:07:53.143  real	0m44.057s
00:07:53.143  user	1m25.279s
00:07:53.143  sys	0m9.039s
00:07:53.143   18:52:24 event -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:53.143   18:52:24 event -- common/autotest_common.sh@10 -- # set +x
00:07:53.143  ************************************
00:07:53.143  END TEST event
00:07:53.143  ************************************
00:07:53.143   18:52:24  -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh
00:07:53.143   18:52:24  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:53.143   18:52:24  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:53.143   18:52:24  -- common/autotest_common.sh@10 -- # set +x
00:07:53.143  ************************************
00:07:53.143  START TEST thread
00:07:53.143  ************************************
00:07:53.143   18:52:24 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh
00:07:53.143  * Looking for test storage...
00:07:53.402  * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread
00:07:53.402    18:52:24 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:07:53.402     18:52:24 thread -- common/autotest_common.sh@1711 -- # lcov --version
00:07:53.402     18:52:24 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:07:53.402    18:52:25 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:07:53.402    18:52:25 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:53.402    18:52:25 thread -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:53.402    18:52:25 thread -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:53.402    18:52:25 thread -- scripts/common.sh@336 -- # IFS=.-:
00:07:53.402    18:52:25 thread -- scripts/common.sh@336 -- # read -ra ver1
00:07:53.402    18:52:25 thread -- scripts/common.sh@337 -- # IFS=.-:
00:07:53.402    18:52:25 thread -- scripts/common.sh@337 -- # read -ra ver2
00:07:53.402    18:52:25 thread -- scripts/common.sh@338 -- # local 'op=<'
00:07:53.402    18:52:25 thread -- scripts/common.sh@340 -- # ver1_l=2
00:07:53.402    18:52:25 thread -- scripts/common.sh@341 -- # ver2_l=1
00:07:53.402    18:52:25 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:53.402    18:52:25 thread -- scripts/common.sh@344 -- # case "$op" in
00:07:53.402    18:52:25 thread -- scripts/common.sh@345 -- # : 1
00:07:53.402    18:52:25 thread -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:53.402    18:52:25 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:53.402     18:52:25 thread -- scripts/common.sh@365 -- # decimal 1
00:07:53.402     18:52:25 thread -- scripts/common.sh@353 -- # local d=1
00:07:53.402     18:52:25 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:53.402     18:52:25 thread -- scripts/common.sh@355 -- # echo 1
00:07:53.402    18:52:25 thread -- scripts/common.sh@365 -- # ver1[v]=1
00:07:53.402     18:52:25 thread -- scripts/common.sh@366 -- # decimal 2
00:07:53.402     18:52:25 thread -- scripts/common.sh@353 -- # local d=2
00:07:53.402     18:52:25 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:53.402     18:52:25 thread -- scripts/common.sh@355 -- # echo 2
00:07:53.402    18:52:25 thread -- scripts/common.sh@366 -- # ver2[v]=2
00:07:53.402    18:52:25 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:53.402    18:52:25 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:53.402    18:52:25 thread -- scripts/common.sh@368 -- # return 0
00:07:53.402    18:52:25 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:53.402    18:52:25 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:07:53.402  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:53.402  		--rc genhtml_branch_coverage=1
00:07:53.402  		--rc genhtml_function_coverage=1
00:07:53.402  		--rc genhtml_legend=1
00:07:53.402  		--rc geninfo_all_blocks=1
00:07:53.402  		--rc geninfo_unexecuted_blocks=1
00:07:53.402  		
00:07:53.402  		'
00:07:53.402    18:52:25 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:07:53.402  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:53.402  		--rc genhtml_branch_coverage=1
00:07:53.402  		--rc genhtml_function_coverage=1
00:07:53.402  		--rc genhtml_legend=1
00:07:53.402  		--rc geninfo_all_blocks=1
00:07:53.402  		--rc geninfo_unexecuted_blocks=1
00:07:53.402  		
00:07:53.402  		'
00:07:53.402    18:52:25 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:07:53.402  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:53.402  		--rc genhtml_branch_coverage=1
00:07:53.402  		--rc genhtml_function_coverage=1
00:07:53.402  		--rc genhtml_legend=1
00:07:53.402  		--rc geninfo_all_blocks=1
00:07:53.402  		--rc geninfo_unexecuted_blocks=1
00:07:53.402  		
00:07:53.402  		'
00:07:53.402    18:52:25 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:07:53.402  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:53.402  		--rc genhtml_branch_coverage=1
00:07:53.402  		--rc genhtml_function_coverage=1
00:07:53.402  		--rc genhtml_legend=1
00:07:53.402  		--rc geninfo_all_blocks=1
00:07:53.402  		--rc geninfo_unexecuted_blocks=1
00:07:53.402  		
00:07:53.402  		'
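The repeated lcov gate above decides which LCOV_OPTS block to export by asking whether the installed lcov (1.15 here) is older than 2. scripts/common.sh splits both version strings on '.', '-' and ':' and compares the numeric fields pairwise; a simplified sketch of that comparison, folding lt and cmp_versions together and covering only the numeric path seen in the trace:

    # lt A B: succeed (return 0) when version A sorts strictly before version B
    lt() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        local v a b
        for (( v = 0; v < len; v++ )); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}
            (( a > b )) && return 1
            (( a < b )) && return 0
        done
        return 1   # equal versions are not strictly less
    }

    lt 1.15 2 && echo "lcov older than 2: use the lcov_branch_coverage options"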
00:07:53.402   18:52:25 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:07:53.402   18:52:25 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']'
00:07:53.402   18:52:25 thread -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:53.402   18:52:25 thread -- common/autotest_common.sh@10 -- # set +x
00:07:53.402  ************************************
00:07:53.402  START TEST thread_poller_perf
00:07:53.402  ************************************
00:07:53.403   18:52:25 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:07:53.403  [2024-12-13 18:52:25.098838] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:07:53.403  [2024-12-13 18:52:25.098933] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76180 ]
00:07:53.662  [2024-12-13 18:52:25.244685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:53.662  Running 1000 pollers for 1 seconds with 1 microseconds period.
00:07:53.662  [2024-12-13 18:52:25.275666] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:07:54.598  ======================================
00:07:54.598  busy:2206559886 (cyc)
00:07:54.598  total_run_count: 403000
00:07:54.598  tsc_hz: 2200000000 (cyc)
00:07:54.598  ======================================
00:07:54.598  poller_cost: 5475 (cyc), 2488 (nsec)
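The poller_cost line is consistent with the busy cycle count divided by the run count, converted to nanoseconds via tsc_hz; redoing the arithmetic from the numbers above reproduces the printed values:

    # 2206559886 busy cycles over 403000 runs, at a 2.2 GHz TSC
    echo "$(( 2206559886 / 403000 )) cyc, $(( 2206559886 / 403000 * 1000000000 / 2200000000 )) nsec"
    # -> 5475 cyc, 2488 nsec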
00:07:54.598  
00:07:54.598  real	0m1.234s
00:07:54.598  user	0m1.085s
00:07:54.598  sys	0m0.044s
00:07:54.598   18:52:26 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:54.598   18:52:26 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:07:54.598  ************************************
00:07:54.598  END TEST thread_poller_perf
00:07:54.598  ************************************
00:07:54.598   18:52:26 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:07:54.598   18:52:26 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']'
00:07:54.598   18:52:26 thread -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:54.598   18:52:26 thread -- common/autotest_common.sh@10 -- # set +x
00:07:54.598  ************************************
00:07:54.598  START TEST thread_poller_perf
00:07:54.598  ************************************
00:07:54.598   18:52:26 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:07:54.598  [2024-12-13 18:52:26.382835] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:07:54.598  [2024-12-13 18:52:26.382930] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76210 ]
00:07:54.856  [2024-12-13 18:52:26.532512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:54.856  Running 1000 pollers for 1 seconds with 0 microseconds period.
00:07:54.856  [2024-12-13 18:52:26.569026] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:07:55.792  ======================================
00:07:55.792  busy:2202504826 (cyc)
00:07:55.792  total_run_count: 4766000
00:07:55.792  tsc_hz: 2200000000 (cyc)
00:07:55.792  ======================================
00:07:55.792  poller_cost: 462 (cyc), 210 (nsec)
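The same arithmetic holds for this zero-period run: 2202504826 cycles over 4766000 iterations is 462 cycles (210 nsec) per call, so pollers registered with a 0 microsecond period cost roughly a tenth of the 1 microsecond timed pollers measured above.

    echo "$(( 2202504826 / 4766000 )) cyc, $(( 2202504826 / 4766000 * 1000000000 / 2200000000 )) nsec"
    # -> 462 cyc, 210 nsec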
00:07:55.792  
00:07:55.792  real	0m1.239s
00:07:55.792  user	0m1.088s
00:07:55.792  sys	0m0.044s
00:07:55.792  ************************************
00:07:55.792  END TEST thread_poller_perf
00:07:55.792  ************************************
00:07:55.792   18:52:27 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:55.792   18:52:27 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:07:56.051   18:52:27 thread -- thread/thread.sh@17 -- # [[ y != \y ]]
00:07:56.051  ************************************
00:07:56.051  END TEST thread
00:07:56.051  ************************************
00:07:56.051  
00:07:56.051  real	0m2.763s
00:07:56.051  user	0m2.316s
00:07:56.051  sys	0m0.235s
00:07:56.051   18:52:27 thread -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:56.051   18:52:27 thread -- common/autotest_common.sh@10 -- # set +x
00:07:56.051   18:52:27  -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]]
00:07:56.051   18:52:27  -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh
00:07:56.051   18:52:27  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:56.051   18:52:27  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:56.051   18:52:27  -- common/autotest_common.sh@10 -- # set +x
00:07:56.051  ************************************
00:07:56.051  START TEST app_cmdline
00:07:56.051  ************************************
00:07:56.051   18:52:27 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh
00:07:56.051  * Looking for test storage...
00:07:56.051  * Found test storage at /home/vagrant/spdk_repo/spdk/test/app
00:07:56.051    18:52:27 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:07:56.051     18:52:27 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version
00:07:56.051     18:52:27 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:07:56.309    18:52:27 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:07:56.309    18:52:27 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:56.309    18:52:27 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:56.309    18:52:27 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:56.309    18:52:27 app_cmdline -- scripts/common.sh@336 -- # IFS=.-:
00:07:56.309    18:52:27 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1
00:07:56.309    18:52:27 app_cmdline -- scripts/common.sh@337 -- # IFS=.-:
00:07:56.309    18:52:27 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2
00:07:56.309    18:52:27 app_cmdline -- scripts/common.sh@338 -- # local 'op=<'
00:07:56.309    18:52:27 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2
00:07:56.309    18:52:27 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1
00:07:56.309    18:52:27 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:56.309    18:52:27 app_cmdline -- scripts/common.sh@344 -- # case "$op" in
00:07:56.309    18:52:27 app_cmdline -- scripts/common.sh@345 -- # : 1
00:07:56.309    18:52:27 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:56.309    18:52:27 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:56.309     18:52:27 app_cmdline -- scripts/common.sh@365 -- # decimal 1
00:07:56.309     18:52:27 app_cmdline -- scripts/common.sh@353 -- # local d=1
00:07:56.309     18:52:27 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:56.309     18:52:27 app_cmdline -- scripts/common.sh@355 -- # echo 1
00:07:56.309    18:52:27 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1
00:07:56.309     18:52:27 app_cmdline -- scripts/common.sh@366 -- # decimal 2
00:07:56.309     18:52:27 app_cmdline -- scripts/common.sh@353 -- # local d=2
00:07:56.309     18:52:27 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:56.309     18:52:27 app_cmdline -- scripts/common.sh@355 -- # echo 2
00:07:56.309    18:52:27 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2
00:07:56.309    18:52:27 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:56.309    18:52:27 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:56.309    18:52:27 app_cmdline -- scripts/common.sh@368 -- # return 0
00:07:56.309    18:52:27 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:56.309    18:52:27 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:07:56.309  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:56.309  		--rc genhtml_branch_coverage=1
00:07:56.309  		--rc genhtml_function_coverage=1
00:07:56.309  		--rc genhtml_legend=1
00:07:56.309  		--rc geninfo_all_blocks=1
00:07:56.309  		--rc geninfo_unexecuted_blocks=1
00:07:56.309  		
00:07:56.309  		'
00:07:56.309    18:52:27 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:07:56.309  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:56.309  		--rc genhtml_branch_coverage=1
00:07:56.309  		--rc genhtml_function_coverage=1
00:07:56.309  		--rc genhtml_legend=1
00:07:56.309  		--rc geninfo_all_blocks=1
00:07:56.309  		--rc geninfo_unexecuted_blocks=1
00:07:56.309  		
00:07:56.309  		'
00:07:56.310    18:52:27 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:07:56.310  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:56.310  		--rc genhtml_branch_coverage=1
00:07:56.310  		--rc genhtml_function_coverage=1
00:07:56.310  		--rc genhtml_legend=1
00:07:56.310  		--rc geninfo_all_blocks=1
00:07:56.310  		--rc geninfo_unexecuted_blocks=1
00:07:56.310  		
00:07:56.310  		'
00:07:56.310  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:56.310    18:52:27 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:07:56.310  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:56.310  		--rc genhtml_branch_coverage=1
00:07:56.310  		--rc genhtml_function_coverage=1
00:07:56.310  		--rc genhtml_legend=1
00:07:56.310  		--rc geninfo_all_blocks=1
00:07:56.310  		--rc geninfo_unexecuted_blocks=1
00:07:56.310  		
00:07:56.310  		'
00:07:56.310   18:52:27 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT
00:07:56.310   18:52:27 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=76298
00:07:56.310   18:52:27 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 76298
00:07:56.310   18:52:27 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods
00:07:56.310   18:52:27 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 76298 ']'
00:07:56.310   18:52:27 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:56.310   18:52:27 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:56.310   18:52:27 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:56.310   18:52:27 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:56.310   18:52:27 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:07:56.310  [2024-12-13 18:52:27.958417] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:07:56.310  [2024-12-13 18:52:27.958699] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76298 ]
00:07:56.310  [2024-12-13 18:52:28.105973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:56.568  [2024-12-13 18:52:28.137802] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:07:56.830   18:52:28 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:56.830   18:52:28 app_cmdline -- common/autotest_common.sh@868 -- # return 0
00:07:56.830   18:52:28 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version
00:07:57.089  {
00:07:57.089    "fields": {
00:07:57.089      "commit": "e01cb43b8",
00:07:57.089      "major": 25,
00:07:57.089      "minor": 1,
00:07:57.089      "patch": 0,
00:07:57.089      "suffix": "-pre"
00:07:57.089    },
00:07:57.089    "version": "SPDK v25.01-pre git sha1 e01cb43b8"
00:07:57.089  }
00:07:57.089   18:52:28 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=()
00:07:57.089   18:52:28 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods")
00:07:57.089   18:52:28 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version")
00:07:57.089   18:52:28 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort))
00:07:57.089    18:52:28 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods
00:07:57.089    18:52:28 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]'
00:07:57.089    18:52:28 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:57.089    18:52:28 app_cmdline -- app/cmdline.sh@26 -- # sort
00:07:57.089    18:52:28 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:07:57.089    18:52:28 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:57.089   18:52:28 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 ))
00:07:57.089   18:52:28 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]]
00:07:57.089   18:52:28 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:07:57.089   18:52:28 app_cmdline -- common/autotest_common.sh@652 -- # local es=0
00:07:57.089   18:52:28 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:07:57.089   18:52:28 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:07:57.089   18:52:28 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:57.089    18:52:28 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:07:57.089   18:52:28 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:57.089    18:52:28 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:07:57.089   18:52:28 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:57.089   18:52:28 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:07:57.089   18:52:28 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]]
00:07:57.089   18:52:28 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:07:57.348  2024/12/13 18:52:29 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found
00:07:57.348  request:
00:07:57.348  {
00:07:57.348    "method": "env_dpdk_get_mem_stats",
00:07:57.348    "params": {}
00:07:57.348  }
00:07:57.348  Got JSON-RPC error response
00:07:57.348  GoRPCClient: error on JSON-RPC call
00:07:57.348   18:52:29 app_cmdline -- common/autotest_common.sh@655 -- # es=1
00:07:57.348   18:52:29 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:07:57.348   18:52:29 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:07:57.348   18:52:29 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:07:57.348   18:52:29 app_cmdline -- app/cmdline.sh@1 -- # killprocess 76298
00:07:57.348   18:52:29 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 76298 ']'
00:07:57.348   18:52:29 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 76298
00:07:57.348    18:52:29 app_cmdline -- common/autotest_common.sh@959 -- # uname
00:07:57.348   18:52:29 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:57.348    18:52:29 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76298
00:07:57.348   18:52:29 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:57.348   18:52:29 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:57.348  killing process with pid 76298
00:07:57.348   18:52:29 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76298'
00:07:57.348   18:52:29 app_cmdline -- common/autotest_common.sh@973 -- # kill 76298
00:07:57.348   18:52:29 app_cmdline -- common/autotest_common.sh@978 -- # wait 76298
00:07:57.607  
00:07:57.607  real	0m1.714s
00:07:57.607  user	0m2.096s
00:07:57.607  sys	0m0.489s
00:07:57.607   18:52:29 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:57.607   18:52:29 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:07:57.607  ************************************
00:07:57.607  END TEST app_cmdline
00:07:57.607  ************************************
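The app_cmdline test above boots spdk_tgt with an RPC allowlist, confirms that rpc_get_methods reports exactly the two permitted methods, and then checks that a method outside the list (env_dpdk_get_mem_stats) is rejected with JSON-RPC error -32601. The same behaviour can be reproduced by hand with the commands the test traces (paths relative to the spdk repo; the test additionally waits for the RPC socket with waitforlisten):

    # start the target with only two RPCs allowed
    ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &

    # the allowlisted methods respond normally
    ./scripts/rpc.py spdk_get_version
    ./scripts/rpc.py rpc_get_methods

    # anything else fails with Code=-32601 Msg=Method not found
    ./scripts/rpc.py env_dpdk_get_mem_stats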
00:07:57.866   18:52:29  -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh
00:07:57.866   18:52:29  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:57.866   18:52:29  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:57.866   18:52:29  -- common/autotest_common.sh@10 -- # set +x
00:07:57.866  ************************************
00:07:57.866  START TEST version
00:07:57.866  ************************************
00:07:57.866   18:52:29 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh
00:07:57.866  * Looking for test storage...
00:07:57.866  * Found test storage at /home/vagrant/spdk_repo/spdk/test/app
00:07:57.866    18:52:29 version -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:07:57.866     18:52:29 version -- common/autotest_common.sh@1711 -- # lcov --version
00:07:57.866     18:52:29 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:07:57.866    18:52:29 version -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:07:57.866    18:52:29 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:57.866    18:52:29 version -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:57.866    18:52:29 version -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:57.867    18:52:29 version -- scripts/common.sh@336 -- # IFS=.-:
00:07:57.867    18:52:29 version -- scripts/common.sh@336 -- # read -ra ver1
00:07:57.867    18:52:29 version -- scripts/common.sh@337 -- # IFS=.-:
00:07:57.867    18:52:29 version -- scripts/common.sh@337 -- # read -ra ver2
00:07:57.867    18:52:29 version -- scripts/common.sh@338 -- # local 'op=<'
00:07:57.867    18:52:29 version -- scripts/common.sh@340 -- # ver1_l=2
00:07:57.867    18:52:29 version -- scripts/common.sh@341 -- # ver2_l=1
00:07:57.867    18:52:29 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:57.867    18:52:29 version -- scripts/common.sh@344 -- # case "$op" in
00:07:57.867    18:52:29 version -- scripts/common.sh@345 -- # : 1
00:07:57.867    18:52:29 version -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:57.867    18:52:29 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:57.867     18:52:29 version -- scripts/common.sh@365 -- # decimal 1
00:07:57.867     18:52:29 version -- scripts/common.sh@353 -- # local d=1
00:07:57.867     18:52:29 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:57.867     18:52:29 version -- scripts/common.sh@355 -- # echo 1
00:07:57.867    18:52:29 version -- scripts/common.sh@365 -- # ver1[v]=1
00:07:57.867     18:52:29 version -- scripts/common.sh@366 -- # decimal 2
00:07:57.867     18:52:29 version -- scripts/common.sh@353 -- # local d=2
00:07:57.867     18:52:29 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:57.867     18:52:29 version -- scripts/common.sh@355 -- # echo 2
00:07:57.867    18:52:29 version -- scripts/common.sh@366 -- # ver2[v]=2
00:07:57.867    18:52:29 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:57.867    18:52:29 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:57.867    18:52:29 version -- scripts/common.sh@368 -- # return 0
00:07:57.867    18:52:29 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:57.867    18:52:29 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:07:57.867  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:57.867  		--rc genhtml_branch_coverage=1
00:07:57.867  		--rc genhtml_function_coverage=1
00:07:57.867  		--rc genhtml_legend=1
00:07:57.867  		--rc geninfo_all_blocks=1
00:07:57.867  		--rc geninfo_unexecuted_blocks=1
00:07:57.867  		
00:07:57.867  		'
00:07:57.867    18:52:29 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:07:57.867  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:57.867  		--rc genhtml_branch_coverage=1
00:07:57.867  		--rc genhtml_function_coverage=1
00:07:57.867  		--rc genhtml_legend=1
00:07:57.867  		--rc geninfo_all_blocks=1
00:07:57.867  		--rc geninfo_unexecuted_blocks=1
00:07:57.867  		
00:07:57.867  		'
00:07:57.867    18:52:29 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:07:57.867  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:57.867  		--rc genhtml_branch_coverage=1
00:07:57.867  		--rc genhtml_function_coverage=1
00:07:57.867  		--rc genhtml_legend=1
00:07:57.867  		--rc geninfo_all_blocks=1
00:07:57.867  		--rc geninfo_unexecuted_blocks=1
00:07:57.867  		
00:07:57.867  		'
00:07:57.867    18:52:29 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:07:57.867  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:57.867  		--rc genhtml_branch_coverage=1
00:07:57.867  		--rc genhtml_function_coverage=1
00:07:57.867  		--rc genhtml_legend=1
00:07:57.867  		--rc geninfo_all_blocks=1
00:07:57.867  		--rc geninfo_unexecuted_blocks=1
00:07:57.867  		
00:07:57.867  		'
00:07:57.867    18:52:29 version -- app/version.sh@17 -- # get_header_version major
00:07:57.867    18:52:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:07:57.867    18:52:29 version -- app/version.sh@14 -- # cut -f2
00:07:57.867    18:52:29 version -- app/version.sh@14 -- # tr -d '"'
00:07:57.867   18:52:29 version -- app/version.sh@17 -- # major=25
00:07:57.867    18:52:29 version -- app/version.sh@18 -- # get_header_version minor
00:07:57.867    18:52:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:07:57.867    18:52:29 version -- app/version.sh@14 -- # cut -f2
00:07:57.867    18:52:29 version -- app/version.sh@14 -- # tr -d '"'
00:07:57.867   18:52:29 version -- app/version.sh@18 -- # minor=1
00:07:57.867    18:52:29 version -- app/version.sh@19 -- # get_header_version patch
00:07:57.867    18:52:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:07:57.867    18:52:29 version -- app/version.sh@14 -- # cut -f2
00:07:57.867    18:52:29 version -- app/version.sh@14 -- # tr -d '"'
00:07:57.867   18:52:29 version -- app/version.sh@19 -- # patch=0
00:07:57.867    18:52:29 version -- app/version.sh@20 -- # get_header_version suffix
00:07:57.867    18:52:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:07:57.867    18:52:29 version -- app/version.sh@14 -- # cut -f2
00:07:57.867    18:52:29 version -- app/version.sh@14 -- # tr -d '"'
00:07:57.867   18:52:29 version -- app/version.sh@20 -- # suffix=-pre
00:07:57.867   18:52:29 version -- app/version.sh@22 -- # version=25.1
00:07:57.867   18:52:29 version -- app/version.sh@25 -- # (( patch != 0 ))
00:07:57.867   18:52:29 version -- app/version.sh@28 -- # version=25.1rc0
00:07:57.867   18:52:29 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python
00:07:57.867    18:52:29 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)'
00:07:58.126   18:52:29 version -- app/version.sh@30 -- # py_version=25.1rc0
00:07:58.126   18:52:29 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]]
00:07:58.126  
00:07:58.126  real	0m0.254s
00:07:58.126  user	0m0.173s
00:07:58.126  sys	0m0.121s
00:07:58.126   18:52:29 version -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:58.126   18:52:29 version -- common/autotest_common.sh@10 -- # set +x
00:07:58.126  ************************************
00:07:58.126  END TEST version
00:07:58.126  ************************************
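version.sh builds each component by grepping the matching #define out of include/spdk/version.h, keeping the second field (cut's default tab delimiter) and stripping the quotes; with patch equal to 0 the 25.1 string gains an rc0 suffix and is compared against what the bundled Python package reports. The extraction pipeline, as traced for the major number (this run yields 25), with paths relative to the repo root and the full PYTHONPATH from the trace abbreviated:

    grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"'

    # the Python side of the comparison
    PYTHONPATH=./python python3 -c 'import spdk; print(spdk.__version__)'   # 25.1rc0 in this run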
00:07:58.126   18:52:29  -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']'
00:07:58.126   18:52:29  -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]]
00:07:58.126    18:52:29  -- spdk/autotest.sh@194 -- # uname -s
00:07:58.126   18:52:29  -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]]
00:07:58.126   18:52:29  -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:07:58.126   18:52:29  -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:07:58.126   18:52:29  -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']'
00:07:58.126   18:52:29  -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']'
00:07:58.126   18:52:29  -- spdk/autotest.sh@260 -- # timing_exit lib
00:07:58.126   18:52:29  -- common/autotest_common.sh@732 -- # xtrace_disable
00:07:58.126   18:52:29  -- common/autotest_common.sh@10 -- # set +x
00:07:58.126   18:52:29  -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']'
00:07:58.126   18:52:29  -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']'
00:07:58.126   18:52:29  -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']'
00:07:58.126   18:52:29  -- spdk/autotest.sh@277 -- # export NET_TYPE
00:07:58.126   18:52:29  -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']'
00:07:58.126   18:52:29  -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']'
00:07:58.126   18:52:29  -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp
00:07:58.126   18:52:29  -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:07:58.126   18:52:29  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:58.126   18:52:29  -- common/autotest_common.sh@10 -- # set +x
00:07:58.126  ************************************
00:07:58.126  START TEST nvmf_tcp
00:07:58.126  ************************************
00:07:58.126   18:52:29 nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp
00:07:58.126  * Looking for test storage...
00:07:58.126  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf
00:07:58.126    18:52:29 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:07:58.126     18:52:29 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version
00:07:58.126     18:52:29 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:07:58.385    18:52:29 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:07:58.385    18:52:29 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:58.385    18:52:29 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:58.385    18:52:29 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:58.385    18:52:29 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-:
00:07:58.385    18:52:29 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1
00:07:58.385    18:52:29 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-:
00:07:58.385    18:52:29 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2
00:07:58.385    18:52:29 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<'
00:07:58.385    18:52:29 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2
00:07:58.385    18:52:29 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1
00:07:58.385    18:52:29 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:58.385    18:52:29 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in
00:07:58.385    18:52:29 nvmf_tcp -- scripts/common.sh@345 -- # : 1
00:07:58.385    18:52:29 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:58.385    18:52:29 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:58.385     18:52:29 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1
00:07:58.385     18:52:29 nvmf_tcp -- scripts/common.sh@353 -- # local d=1
00:07:58.385     18:52:29 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:58.385     18:52:29 nvmf_tcp -- scripts/common.sh@355 -- # echo 1
00:07:58.385    18:52:29 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1
00:07:58.385     18:52:29 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2
00:07:58.385     18:52:29 nvmf_tcp -- scripts/common.sh@353 -- # local d=2
00:07:58.385     18:52:29 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:58.385     18:52:29 nvmf_tcp -- scripts/common.sh@355 -- # echo 2
00:07:58.385    18:52:29 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2
00:07:58.385    18:52:29 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:58.385    18:52:29 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:58.385    18:52:29 nvmf_tcp -- scripts/common.sh@368 -- # return 0
00:07:58.385    18:52:29 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:58.385    18:52:29 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:07:58.385  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:58.385  		--rc genhtml_branch_coverage=1
00:07:58.385  		--rc genhtml_function_coverage=1
00:07:58.385  		--rc genhtml_legend=1
00:07:58.385  		--rc geninfo_all_blocks=1
00:07:58.385  		--rc geninfo_unexecuted_blocks=1
00:07:58.385  		
00:07:58.385  		'
00:07:58.385    18:52:29 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:07:58.385  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:58.385  		--rc genhtml_branch_coverage=1
00:07:58.385  		--rc genhtml_function_coverage=1
00:07:58.385  		--rc genhtml_legend=1
00:07:58.385  		--rc geninfo_all_blocks=1
00:07:58.385  		--rc geninfo_unexecuted_blocks=1
00:07:58.385  		
00:07:58.385  		'
00:07:58.385    18:52:29 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:07:58.385  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:58.385  		--rc genhtml_branch_coverage=1
00:07:58.385  		--rc genhtml_function_coverage=1
00:07:58.385  		--rc genhtml_legend=1
00:07:58.385  		--rc geninfo_all_blocks=1
00:07:58.385  		--rc geninfo_unexecuted_blocks=1
00:07:58.385  		
00:07:58.385  		'
00:07:58.385    18:52:29 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:07:58.385  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:58.385  		--rc genhtml_branch_coverage=1
00:07:58.385  		--rc genhtml_function_coverage=1
00:07:58.385  		--rc genhtml_legend=1
00:07:58.385  		--rc geninfo_all_blocks=1
00:07:58.385  		--rc geninfo_unexecuted_blocks=1
00:07:58.385  		
00:07:58.385  		'
00:07:58.385    18:52:29 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s
00:07:58.385   18:52:30 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']'
00:07:58.385   18:52:30 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp
00:07:58.385   18:52:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:07:58.385   18:52:30 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:58.385   18:52:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:07:58.385  ************************************
00:07:58.385  START TEST nvmf_target_core
00:07:58.385  ************************************
00:07:58.385   18:52:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp
00:07:58.385  * Looking for test storage...
00:07:58.385  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf
00:07:58.385    18:52:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:07:58.385     18:52:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version
00:07:58.385     18:52:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:07:58.385    18:52:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:07:58.385    18:52:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:58.385    18:52:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:58.385    18:52:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:58.385    18:52:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-:
00:07:58.385    18:52:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1
00:07:58.385    18:52:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-:
00:07:58.385    18:52:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2
00:07:58.385    18:52:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<'
00:07:58.385    18:52:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2
00:07:58.385    18:52:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1
00:07:58.385    18:52:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:58.385    18:52:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in
00:07:58.385    18:52:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1
00:07:58.385    18:52:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:58.385    18:52:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:58.385     18:52:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1
00:07:58.385     18:52:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1
00:07:58.385     18:52:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:58.385     18:52:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1
00:07:58.385    18:52:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1
00:07:58.385     18:52:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2
00:07:58.385     18:52:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2
00:07:58.385     18:52:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:58.385     18:52:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2
00:07:58.385    18:52:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2
00:07:58.385    18:52:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:58.385    18:52:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:58.385    18:52:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0
00:07:58.385    18:52:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:58.386    18:52:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:07:58.386  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:58.386  		--rc genhtml_branch_coverage=1
00:07:58.386  		--rc genhtml_function_coverage=1
00:07:58.386  		--rc genhtml_legend=1
00:07:58.386  		--rc geninfo_all_blocks=1
00:07:58.386  		--rc geninfo_unexecuted_blocks=1
00:07:58.386  		
00:07:58.386  		'
00:07:58.386    18:52:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:07:58.386  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:58.386  		--rc genhtml_branch_coverage=1
00:07:58.386  		--rc genhtml_function_coverage=1
00:07:58.386  		--rc genhtml_legend=1
00:07:58.386  		--rc geninfo_all_blocks=1
00:07:58.386  		--rc geninfo_unexecuted_blocks=1
00:07:58.386  		
00:07:58.386  		'
00:07:58.386    18:52:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:07:58.386  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:58.386  		--rc genhtml_branch_coverage=1
00:07:58.386  		--rc genhtml_function_coverage=1
00:07:58.386  		--rc genhtml_legend=1
00:07:58.386  		--rc geninfo_all_blocks=1
00:07:58.386  		--rc geninfo_unexecuted_blocks=1
00:07:58.386  		
00:07:58.386  		'
00:07:58.386    18:52:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:07:58.386  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:58.386  		--rc genhtml_branch_coverage=1
00:07:58.386  		--rc genhtml_function_coverage=1
00:07:58.386  		--rc genhtml_legend=1
00:07:58.386  		--rc geninfo_all_blocks=1
00:07:58.386  		--rc geninfo_unexecuted_blocks=1
00:07:58.386  		
00:07:58.386  		'
00:07:58.386    18:52:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s
00:07:58.386   18:52:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' Linux = Linux ']'
00:07:58.386   18:52:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:07:58.646     18:52:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s
00:07:58.646    18:52:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:07:58.646    18:52:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:07:58.646    18:52:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:07:58.646    18:52:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:07:58.646    18:52:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:07:58.646    18:52:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:07:58.646    18:52:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:07:58.646    18:52:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:07:58.646    18:52:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:07:58.646     18:52:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:07:58.646    18:52:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:07:58.646    18:52:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:07:58.646    18:52:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:07:58.646    18:52:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:07:58.646    18:52:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:07:58.646    18:52:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:07:58.646    18:52:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:07:58.646     18:52:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob
00:07:58.646     18:52:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:07:58.646     18:52:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:07:58.646     18:52:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:07:58.646      18:52:30 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:58.646      18:52:30 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:58.646      18:52:30 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:58.646      18:52:30 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH
00:07:58.646      18:52:30 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:58.646    18:52:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0
00:07:58.646    18:52:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:07:58.646    18:52:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:07:58.646    18:52:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:07:58.646    18:52:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:07:58.646    18:52:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:07:58.646    18:52:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:07:58.646  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:07:58.646    18:52:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:07:58.646    18:52:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:07:58.646    18:52:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0
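The "integer expression expected" complaint above is common.sh line 33 applying -eq to an empty value: '[' '' -eq 1 ']' is not a valid numeric test, so test(1) warns and the branch simply falls through. The usual way to keep such a check quiet is to guard the numeric comparison; the variable name below is only an illustrative stand-in for the unset option at that line:

    flag=""
    # '[ "$flag" -eq 1 ]' on its own would print: [: : integer expression expected
    if [ -n "$flag" ] && [ "$flag" -eq 1 ]; then
        echo "option enabled"
    fi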
00:07:58.646   18:52:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT
00:07:58.646   18:52:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@")
00:07:58.646   18:52:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]]
00:07:58.646   18:52:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp
00:07:58.646   18:52:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:07:58.646   18:52:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:58.646   18:52:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:07:58.646  ************************************
00:07:58.646  START TEST nvmf_abort
00:07:58.646  ************************************
00:07:58.646   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp
00:07:58.646  * Looking for test storage...
00:07:58.646  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:07:58.646    18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:07:58.646     18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version
00:07:58.646     18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:07:58.646    18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:07:58.646    18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:58.646    18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:58.646    18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:58.646    18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-:
00:07:58.646    18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1
00:07:58.646    18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-:
00:07:58.646    18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2
00:07:58.646    18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<'
00:07:58.646    18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2
00:07:58.646    18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1
00:07:58.646    18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:58.646    18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in
00:07:58.646    18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1
00:07:58.646    18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:58.646    18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:58.646     18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1
00:07:58.646     18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1
00:07:58.646     18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:58.646     18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1
00:07:58.646    18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1
00:07:58.646     18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2
00:07:58.646     18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2
00:07:58.646     18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:58.646     18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2
00:07:58.646    18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2
00:07:58.646    18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:58.646    18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:58.646    18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0
00:07:58.646    18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:58.646    18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:07:58.646  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:58.646  		--rc genhtml_branch_coverage=1
00:07:58.646  		--rc genhtml_function_coverage=1
00:07:58.646  		--rc genhtml_legend=1
00:07:58.646  		--rc geninfo_all_blocks=1
00:07:58.646  		--rc geninfo_unexecuted_blocks=1
00:07:58.646  		
00:07:58.646  		'
00:07:58.646    18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:07:58.646  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:58.646  		--rc genhtml_branch_coverage=1
00:07:58.646  		--rc genhtml_function_coverage=1
00:07:58.646  		--rc genhtml_legend=1
00:07:58.646  		--rc geninfo_all_blocks=1
00:07:58.646  		--rc geninfo_unexecuted_blocks=1
00:07:58.647  		
00:07:58.647  		'
00:07:58.647    18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:07:58.647  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:58.647  		--rc genhtml_branch_coverage=1
00:07:58.647  		--rc genhtml_function_coverage=1
00:07:58.647  		--rc genhtml_legend=1
00:07:58.647  		--rc geninfo_all_blocks=1
00:07:58.647  		--rc geninfo_unexecuted_blocks=1
00:07:58.647  		
00:07:58.647  		'
00:07:58.647    18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:07:58.647  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:58.647  		--rc genhtml_branch_coverage=1
00:07:58.647  		--rc genhtml_function_coverage=1
00:07:58.647  		--rc genhtml_legend=1
00:07:58.647  		--rc geninfo_all_blocks=1
00:07:58.647  		--rc geninfo_unexecuted_blocks=1
00:07:58.647  		
00:07:58.647  		'
00:07:58.647   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:07:58.647     18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s
00:07:58.647    18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:07:58.647    18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:07:58.647    18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:07:58.647    18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:07:58.647    18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:07:58.647    18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:07:58.647    18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:07:58.647    18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:07:58.647    18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:07:58.647     18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:07:58.647    18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:07:58.647    18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:07:58.647    18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:07:58.647    18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:07:58.647    18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:07:58.647    18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:07:58.647    18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:07:58.647     18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob
00:07:58.647     18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:07:58.647     18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:07:58.647     18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:07:58.647      18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:58.647      18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:58.647      18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:58.647      18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH
00:07:58.647      18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:58.647    18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0
00:07:58.647    18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:07:58.647    18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:07:58.647    18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:07:58.647    18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:07:58.647    18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:07:58.647    18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:07:58.647  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:07:58.647    18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:07:58.647    18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:07:58.647    18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0
00:07:58.647   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64
00:07:58.647   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096
00:07:58.647   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit
00:07:58.647   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:07:58.647   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:07:58.647   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs
00:07:58.647   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no
00:07:58.647   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns
00:07:58.647   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:58.647   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:07:58.647    18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:58.647   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:07:58.647   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:07:58.647   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:07:58.647   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:07:58.647   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:07:58.647   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@460 -- # nvmf_veth_init
00:07:58.647   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:07:58.647   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:07:58.647   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:07:58.647   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:07:58.647   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:07:58.647   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:07:58.647   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:07:58.647   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:07:58.647   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:07:58.647   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:07:58.647   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:07:58.647   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:07:58.647   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:07:58.647   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:07:58.647   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:07:58.647   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:07:58.647   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:07:58.647  Cannot find device "nvmf_init_br"
00:07:58.647   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@162 -- # true
00:07:58.647   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:07:58.647  Cannot find device "nvmf_init_br2"
00:07:58.647   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@163 -- # true
00:07:58.647   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:07:58.906  Cannot find device "nvmf_tgt_br"
00:07:58.906   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@164 -- # true
00:07:58.906   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:07:58.906  Cannot find device "nvmf_tgt_br2"
00:07:58.906   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@165 -- # true
00:07:58.906   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:07:58.906  Cannot find device "nvmf_init_br"
00:07:58.906   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@166 -- # true
00:07:58.906   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:07:58.906  Cannot find device "nvmf_init_br2"
00:07:58.906   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@167 -- # true
00:07:58.906   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:07:58.906  Cannot find device "nvmf_tgt_br"
00:07:58.906   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@168 -- # true
00:07:58.906   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:07:58.906  Cannot find device "nvmf_tgt_br2"
00:07:58.906   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@169 -- # true
00:07:58.906   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:07:58.906  Cannot find device "nvmf_br"
00:07:58.906   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@170 -- # true
00:07:58.906   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:07:58.906  Cannot find device "nvmf_init_if"
00:07:58.906   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@171 -- # true
00:07:58.906   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:07:58.906  Cannot find device "nvmf_init_if2"
00:07:58.906   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@172 -- # true
00:07:58.906   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:07:58.906  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:07:58.906   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@173 -- # true
00:07:58.907   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:07:58.907  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:07:58.907   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@174 -- # true
00:07:58.907   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:07:58.907   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:07:58.907   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:07:58.907   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:07:58.907   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:07:58.907   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:07:58.907   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:07:58.907   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:07:58.907   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:07:58.907   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:07:58.907   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:07:58.907   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:07:58.907   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:07:59.166   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:07:59.166   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:07:59.166   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:07:59.166   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:07:59.166   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:07:59.166   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:07:59.166   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:07:59.166   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:07:59.166   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:07:59.166   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:07:59.166   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:07:59.166   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:07:59.166   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:07:59.166   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:07:59.166   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:07:59.166   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:07:59.166   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:07:59.166   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:07:59.166   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
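Note on the ipts/iptr helpers seen above (and in the teardown later in this log): every rule is installed with an SPDK_NVMF comment tag so that cleanup can strip only the rules this test added. A minimal sketch of what the expanded commands imply; the real definitions live in test/nvmf/common.sh and may differ in detail:

ipts() {
    # install an iptables rule, tagged so the teardown can find it again
    iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

iptr() {
    # rewrite the ruleset without any SPDK_NVMF-tagged rules
    iptables-save | grep -v SPDK_NVMF | iptables-restore
}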
00:07:59.166   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:07:59.166  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:07:59.166  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.095 ms
00:07:59.166  
00:07:59.166  --- 10.0.0.3 ping statistics ---
00:07:59.166  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:59.166  rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms
00:07:59.166   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:07:59.166  PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:07:59.166  64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.086 ms
00:07:59.166  
00:07:59.166  --- 10.0.0.4 ping statistics ---
00:07:59.166  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:59.166  rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms
00:07:59.166   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:07:59.166  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:07:59.166  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms
00:07:59.166  
00:07:59.166  --- 10.0.0.1 ping statistics ---
00:07:59.166  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:59.166  rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms
00:07:59.166   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:07:59.166  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:07:59.166  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms
00:07:59.166  
00:07:59.166  --- 10.0.0.2 ping statistics ---
00:07:59.166  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:59.166  rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms
00:07:59.166   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:07:59.166   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@461 -- # return 0
00:07:59.166   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:07:59.166   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:07:59.166   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:07:59.166   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:07:59.166   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:07:59.166   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:07:59.166   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp
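The nvmf_veth_init sequence above builds the whole test network on one host: an nvmf_tgt_ns_spdk namespace for the target side, veth pairs whose bridge ends are enslaved to nvmf_br, and a ping in each direction before the target is started. A condensed sketch using the names and addresses from this trace (the second veth pair and the iptables rules are omitted for brevity):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays in the root namespace
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target end moves into the namespace
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
# verify connectivity before starting nvmf_tgt
ping -c 1 10.0.0.3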
00:07:59.166   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE
00:07:59.166   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:07:59.166   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable
00:07:59.166   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:07:59.166   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=76720
00:07:59.166   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 76720
00:07:59.166   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 76720 ']'
00:07:59.166   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:59.166   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:07:59.166   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:59.166  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:59.166   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:59.166   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:59.166   18:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:07:59.424  [2024-12-13 18:52:31.031370] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:07:59.424  [2024-12-13 18:52:31.031465] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:59.424  [2024-12-13 18:52:31.186762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:07:59.424  [2024-12-13 18:52:31.226740] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:07:59.424  [2024-12-13 18:52:31.226807] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:07:59.424  [2024-12-13 18:52:31.226821] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:07:59.424  [2024-12-13 18:52:31.226832] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:07:59.424  [2024-12-13 18:52:31.226842] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:07:59.424  [2024-12-13 18:52:31.228119] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:07:59.424  [2024-12-13 18:52:31.228293] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:07:59.424  [2024-12-13 18:52:31.228314] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
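The EAL and reactor notices above come from the target process launched at nvmf/common.sh@508 inside the namespace created earlier; the core mask 0xE selects cores 1-3, which matches the three reactors reported. As it appears in this trace:

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE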
00:07:59.682   18:52:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:59.682   18:52:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0
00:07:59.682   18:52:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:07:59.682   18:52:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable
00:07:59.682   18:52:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:07:59.682   18:52:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:07:59.682   18:52:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256
00:07:59.682   18:52:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:59.682   18:52:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:07:59.682  [2024-12-13 18:52:31.422033] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:59.682   18:52:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:59.682   18:52:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0
00:07:59.682   18:52:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:59.682   18:52:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:07:59.682  Malloc0
00:07:59.682   18:52:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:59.682   18:52:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:07:59.682   18:52:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:59.682   18:52:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:07:59.682  Delay0
00:07:59.682   18:52:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:59.682   18:52:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:07:59.682   18:52:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:59.682   18:52:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:07:59.682   18:52:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:59.682   18:52:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
00:07:59.682   18:52:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:59.682   18:52:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:07:59.682   18:52:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:59.682   18:52:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
00:07:59.682   18:52:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:59.682   18:52:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:07:59.682  [2024-12-13 18:52:31.497122] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:07:59.683   18:52:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:59.683   18:52:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
00:07:59.683   18:52:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:59.683   18:52:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:07:59.941   18:52:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:59.941   18:52:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128
00:07:59.941  [2024-12-13 18:52:31.697138] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:08:02.474  Initializing NVMe Controllers
00:08:02.474  Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0
00:08:02.474  controller IO queue size 128 less than required
00:08:02.474  Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver.
00:08:02.474  Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0
00:08:02.474  Initialization complete. Launching workers.
00:08:02.474  NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 31353
00:08:02.474  CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 31414, failed to submit 62
00:08:02.474  	 success 31357, unsuccessful 57, failed 0
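Summary of the abort run above: the Delay0 bdev (Malloc0 wrapped with a large artificial latency, the 1000000 arguments to bdev_delay_create) keeps I/O queued long enough for the abort example to cancel it; of 31414 abort commands submitted, 31357 succeeded, 57 came back unsuccessful, and 62 could not be submitted. The RPC sequence that produced this setup, as issued earlier in this test (rpc_cmd is the autotest helper that forwards to the SPDK RPC socket at /var/tmp/spdk.sock):

rpc_cmd bdev_malloc_create 64 4096 -b Malloc0
rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
# the initiator-side example then floods the queue and aborts in-flight I/O:
/home/vagrant/spdk_repo/spdk/build/examples/abort \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128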
00:08:02.474   18:52:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:08:02.474   18:52:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:02.474   18:52:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:08:02.474   18:52:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:02.474   18:52:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT
00:08:02.474   18:52:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini
00:08:02.474   18:52:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup
00:08:02.474   18:52:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync
00:08:02.474   18:52:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:08:02.474   18:52:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e
00:08:02.474   18:52:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20}
00:08:02.474   18:52:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:08:02.474  rmmod nvme_tcp
00:08:02.474  rmmod nvme_fabrics
00:08:02.474  rmmod nvme_keyring
00:08:02.474   18:52:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:08:02.474   18:52:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e
00:08:02.474   18:52:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0
00:08:02.474   18:52:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 76720 ']'
00:08:02.474   18:52:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 76720
00:08:02.474   18:52:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 76720 ']'
00:08:02.474   18:52:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 76720
00:08:02.474    18:52:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname
00:08:02.474   18:52:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:02.474    18:52:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76720
00:08:02.474   18:52:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:08:02.474   18:52:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:08:02.474  killing process with pid 76720
00:08:02.474   18:52:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76720'
00:08:02.474   18:52:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 76720
00:08:02.474   18:52:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 76720
00:08:02.474   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:08:02.474   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:08:02.475   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:08:02.475   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr
00:08:02.475   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save
00:08:02.475   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore
00:08:02.475   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:08:02.475   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:08:02.475   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:08:02.475   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:08:02.475   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:08:02.475   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:08:02.475   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:08:02.475   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:08:02.475   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:08:02.475   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:08:02.475   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:08:02.475   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:08:02.733   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:08:02.733   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:08:02.733   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:08:02.733   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:08:02.733   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@246 -- # remove_spdk_ns
00:08:02.733   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:08:02.733   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:08:02.733    18:52:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:02.733   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@300 -- # return 0
00:08:02.733  
00:08:02.733  real	0m4.192s
00:08:02.733  user	0m10.731s
00:08:02.733  sys	0m1.107s
00:08:02.733   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:02.733  ************************************
00:08:02.733  END TEST nvmf_abort
00:08:02.733  ************************************
00:08:02.733   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:08:02.733   18:52:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp
00:08:02.733   18:52:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:08:02.733   18:52:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:02.733   18:52:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:08:02.733  ************************************
00:08:02.733  START TEST nvmf_ns_hotplug_stress
00:08:02.733  ************************************
00:08:02.733   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp
00:08:02.733  * Looking for test storage...
00:08:02.993  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:08:02.993    18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:08:02.993     18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:08:02.993     18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version
00:08:02.993    18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:08:02.993    18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:02.993    18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:02.993    18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:02.993    18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-:
00:08:02.993    18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1
00:08:02.993    18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-:
00:08:02.993    18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2
00:08:02.993    18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<'
00:08:02.993    18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2
00:08:02.993    18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1
00:08:02.993    18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:02.993    18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in
00:08:02.993    18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1
00:08:02.993    18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:02.993    18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:02.993     18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1
00:08:02.993     18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1
00:08:02.993     18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:02.993     18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1
00:08:02.993    18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1
00:08:02.993     18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2
00:08:02.993     18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2
00:08:02.993     18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:02.993     18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2
00:08:02.993    18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2
00:08:02.993    18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:02.993    18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:02.993    18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0
00:08:02.993    18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:02.993    18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:08:02.993  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:02.993  		--rc genhtml_branch_coverage=1
00:08:02.993  		--rc genhtml_function_coverage=1
00:08:02.993  		--rc genhtml_legend=1
00:08:02.993  		--rc geninfo_all_blocks=1
00:08:02.993  		--rc geninfo_unexecuted_blocks=1
00:08:02.993  		
00:08:02.993  		'
00:08:02.993    18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:08:02.993  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:02.993  		--rc genhtml_branch_coverage=1
00:08:02.993  		--rc genhtml_function_coverage=1
00:08:02.993  		--rc genhtml_legend=1
00:08:02.993  		--rc geninfo_all_blocks=1
00:08:02.993  		--rc geninfo_unexecuted_blocks=1
00:08:02.993  		
00:08:02.993  		'
00:08:02.993    18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:08:02.993  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:02.993  		--rc genhtml_branch_coverage=1
00:08:02.993  		--rc genhtml_function_coverage=1
00:08:02.993  		--rc genhtml_legend=1
00:08:02.993  		--rc geninfo_all_blocks=1
00:08:02.993  		--rc geninfo_unexecuted_blocks=1
00:08:02.993  		
00:08:02.993  		'
00:08:02.993    18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:08:02.993  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:02.993  		--rc genhtml_branch_coverage=1
00:08:02.993  		--rc genhtml_function_coverage=1
00:08:02.993  		--rc genhtml_legend=1
00:08:02.993  		--rc geninfo_all_blocks=1
00:08:02.993  		--rc geninfo_unexecuted_blocks=1
00:08:02.993  		
00:08:02.993  		'
00:08:02.994   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:08:02.994     18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s
00:08:02.994    18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:08:02.994    18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:08:02.994    18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:08:02.994    18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:08:02.994    18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:08:02.994    18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:08:02.994    18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:08:02.994    18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:08:02.994    18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:08:02.994     18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:08:02.994    18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:08:02.994    18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:08:02.994    18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:08:02.994    18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:08:02.994    18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:08:02.994    18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:08:02.994    18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:08:02.994     18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob
00:08:02.994     18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:08:02.994     18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:08:02.994     18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:08:02.994      18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:02.994      18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:02.994      18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:02.994      18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH
00:08:02.994      18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:02.994    18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0
00:08:02.994    18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:08:02.994    18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:08:02.994    18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:08:02.994    18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:08:02.994    18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:08:02.994    18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:08:02.994  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:08:02.994    18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:08:02.994    18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:08:02.994    18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0
00:08:02.994   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:08:02.994   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit
00:08:02.994   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:08:02.994   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:08:02.994   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs
00:08:02.994   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no
00:08:02.994   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns
00:08:02.994   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:08:02.994   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:08:02.994    18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:02.994   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:08:02.994   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:08:02.994   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:08:02.994   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:08:02.994   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:08:02.994   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@460 -- # nvmf_veth_init
00:08:02.994   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:08:02.994   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:08:02.994   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:08:02.994   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:08:02.994   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:08:02.994   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:08:02.994   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:08:02.994   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:08:02.994   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:08:02.994   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:08:02.994   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:08:02.994   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:08:02.994   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:08:02.994   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:08:02.994   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:08:02.994   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:08:02.994   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:08:02.994  Cannot find device "nvmf_init_br"
00:08:02.994   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true
00:08:02.994   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:08:02.994  Cannot find device "nvmf_init_br2"
00:08:02.994   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true
00:08:02.994   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:08:02.994  Cannot find device "nvmf_tgt_br"
00:08:02.994   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # true
00:08:02.994   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:08:02.994  Cannot find device "nvmf_tgt_br2"
00:08:02.994   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # true
00:08:02.994   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:08:02.994  Cannot find device "nvmf_init_br"
00:08:02.994   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # true
00:08:02.994   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:08:02.994  Cannot find device "nvmf_init_br2"
00:08:02.994   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # true
00:08:02.994   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:08:02.994  Cannot find device "nvmf_tgt_br"
00:08:02.994   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # true
00:08:02.994   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:08:02.994  Cannot find device "nvmf_tgt_br2"
00:08:02.994   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # true
00:08:02.994   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:08:02.994  Cannot find device "nvmf_br"
00:08:02.994   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # true
00:08:02.995   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:08:03.253  Cannot find device "nvmf_init_if"
00:08:03.253   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # true
00:08:03.253   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:08:03.253  Cannot find device "nvmf_init_if2"
00:08:03.253   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # true
00:08:03.253   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:08:03.253  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:08:03.253   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # true
00:08:03.254   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:08:03.254  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:08:03.254   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # true
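
Note: the "Cannot find device" / "Cannot open network namespace" messages above are expected on a clean runner. Before building the test topology, nvmf/common.sh tears down anything left over from a previous run, and each teardown command is allowed to fail (hence the bare "true" traced after every failing call). A minimal sketch of that idempotent cleanup, assuming the usual "|| true" shape; device and namespace names are taken from the trace:

    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster || true   # detach from any stale bridge
        ip link set "$dev" down || true
    done
    ip link delete nvmf_br type bridge || true
    ip link delete nvmf_init_if  || true
    ip link delete nvmf_init_if2 || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if  || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 || true
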
00:08:03.254   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:08:03.254   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:08:03.254   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:08:03.254   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:08:03.254   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:08:03.254   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:08:03.254   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:08:03.254   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:08:03.254   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:08:03.254   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:08:03.254   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:08:03.254   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:08:03.254   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:08:03.254   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:08:03.254   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:08:03.254   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:08:03.254   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:08:03.254   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:08:03.254   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:08:03.254   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:08:03.254   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:08:03.254   18:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:08:03.254   18:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:08:03.254   18:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:08:03.254   18:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:08:03.254   18:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:08:03.254   18:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:08:03.254   18:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:08:03.254   18:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:08:03.254   18:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:08:03.254   18:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:08:03.254   18:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
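
Note: the sequence at common.sh@177 through @219 builds the test network from scratch: one network namespace for the target, two veth pairs on the initiator side and two on the target side, all of their peer ends enslaved to a single bridge, plus iptables rules opening TCP port 4420 (the ipts wrapper at @790 just tags each rule with an SPDK_NVMF comment). Condensed into one place, with names and addresses exactly as they appear in the trace, the setup is roughly:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    for dev in nvmf_tgt_if nvmf_tgt_if2 lo; do
        ip netns exec nvmf_tgt_ns_spdk ip link set "$dev" up
    done
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
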
00:08:03.514   18:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:08:03.514  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:08:03.514  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms
00:08:03.514  
00:08:03.514  --- 10.0.0.3 ping statistics ---
00:08:03.514  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:03.514  rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms
00:08:03.514   18:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:08:03.514  PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:08:03.514  64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms
00:08:03.514  
00:08:03.514  --- 10.0.0.4 ping statistics ---
00:08:03.514  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:03.514  rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms
00:08:03.514   18:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:08:03.514  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:08:03.514  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms
00:08:03.514  
00:08:03.514  --- 10.0.0.1 ping statistics ---
00:08:03.514  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:03.514  rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms
00:08:03.514   18:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:08:03.514  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:08:03.514  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms
00:08:03.514  
00:08:03.514  --- 10.0.0.2 ping statistics ---
00:08:03.514  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:03.514  rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms
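
Note: before the target is started, connectivity is verified in both directions across the bridge: the host reaches the namespaced target addresses (10.0.0.3, 10.0.0.4) and the namespace reaches the initiator addresses (10.0.0.1, 10.0.0.2), one echo request each. Equivalent to:

    for ip in 10.0.0.3 10.0.0.4; do ping -c 1 "$ip"; done
    for ip in 10.0.0.1 10.0.0.2; do ip netns exec nvmf_tgt_ns_spdk ping -c 1 "$ip"; done
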
00:08:03.514   18:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:08:03.514   18:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@461 -- # return 0
00:08:03.514   18:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:08:03.514   18:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:08:03.514   18:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:08:03.514   18:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:08:03.514   18:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:08:03.514   18:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:08:03.514   18:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp
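
Note: with the transport under test being TCP, the helper settles on NVMF_TRANSPORT_OPTS='-t tcp -o' and loads the kernel nvme-tcp module for the initiator side; the rdma branch at @483 is not taken in this run. The branch traced above has roughly this shape (a sketch; the real nvmf/common.sh may word it differently):

    NVMF_TRANSPORT_OPTS="-t $TEST_TRANSPORT"
    if [[ $TEST_TRANSPORT == rdma ]]; then
        :   # rdma-specific options, not exercised here
    elif [[ $TEST_TRANSPORT == tcp ]]; then
        NVMF_TRANSPORT_OPTS="$NVMF_TRANSPORT_OPTS -o"
    fi
    if [[ $TEST_TRANSPORT == tcp ]]; then
        modprobe nvme-tcp
    fi
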
00:08:03.514   18:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE
00:08:03.514   18:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:08:03.514   18:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable
00:08:03.514   18:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:08:03.514   18:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=77004
00:08:03.514   18:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:08:03.514   18:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 77004
00:08:03.514   18:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 77004 ']'
00:08:03.514   18:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:03.514   18:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:03.514   18:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:03.514  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:03.514   18:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:03.514   18:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:08:03.514  [2024-12-13 18:52:35.203731] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:08:03.514  [2024-12-13 18:52:35.203823] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:03.773  [2024-12-13 18:52:35.359831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:08:03.773  [2024-12-13 18:52:35.415184] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:08:03.773  [2024-12-13 18:52:35.415269] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:08:03.773  [2024-12-13 18:52:35.415285] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:08:03.773  [2024-12-13 18:52:35.415297] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:08:03.773  [2024-12-13 18:52:35.415306] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:08:03.773  [2024-12-13 18:52:35.416815] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:08:03.773  [2024-12-13 18:52:35.416958] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:08:03.773  [2024-12-13 18:52:35.416968] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:08:03.773   18:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:03.773   18:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0
00:08:03.773   18:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:08:03.773   18:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable
00:08:03.773   18:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:08:04.031   18:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
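
Note: nvmfappstart (common.sh@507 through @512) launches the SPDK target inside the namespace with the requested reactor mask (-m 0xE, i.e. cores 1-3, matching the three reactors reported above) and blocks until the RPC socket answers; here it comes up as PID 77004. Inferred from the commands printed in the trace, the helper amounts to roughly the following (the waitforlisten internals are an assumption):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    waitforlisten "$nvmfpid"   # retries until /var/tmp/spdk.sock accepts RPCs
    trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
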
00:08:04.031   18:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000
00:08:04.031   18:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:08:04.289  [2024-12-13 18:52:35.914422] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:08:04.289   18:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:08:04.549   18:52:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:08:04.807  [2024-12-13 18:52:36.432571] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:08:04.807   18:52:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
00:08:05.066   18:52:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0
00:08:05.325  Malloc0
00:08:05.325   18:52:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:08:05.325  Delay0
00:08:05.584   18:52:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:05.584   18:52:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512
00:08:05.842  NULL1
00:08:05.842   18:52:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
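
Note: the subsystem under test is then assembled over RPC: a TCP transport (-t tcp -o -u 8192), subsystem nqn.2016-06.io.spdk:cnode1 plus a discovery listener on 10.0.0.3:4420, a 32 MB malloc bdev wrapped by a delay bdev (the -r/-t/-w/-n 1000000 arguments are latencies in microseconds), and a 1000 MB null bdev; Delay0 and NULL1 become namespaces 1 and 2. The same sequence in one block, exactly as issued above:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
    $RPC bdev_malloc_create 32 512 -b Malloc0          # 32 MB, 512 B blocks
    $RPC bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $RPC bdev_null_create NULL1 1000 512               # 1000 MB, 512 B blocks
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # becomes NSID 1
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1    # becomes NSID 2
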
00:08:06.101   18:52:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000
00:08:06.101   18:52:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=77127
00:08:06.101   18:52:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 77127
00:08:06.101   18:52:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:07.518  Read completed with error (sct=0, sc=11)
00:08:07.518   18:52:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:07.518  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:07.518  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:07.518  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:07.518  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:07.518  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:07.518  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:07.518   18:52:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001
00:08:07.518   18:52:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001
00:08:07.777  true
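
Note: this is the first pass of the hotplug loop. While spdk_nvme_perf (PID 77127; 30 s of 512 B random reads at queue depth 128 against 10.0.0.3:4420) keeps issuing I/O, the script repeatedly detaches namespace 1, re-attaches Delay0, bumps the null size by one and resizes NULL1. The suppressed "Read completed with error (sct=0, sc=11)" messages are most plausibly reads that land while the namespace is detached. Reconstructed from the @40-@50 trace lines (a sketch, not the literal script text):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!
    null_size=1000
    while kill -0 "$PERF_PID"; do                      # loop for as long as perf is still running
        $RPC nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))
        $RPC bdev_null_resize NULL1 "$null_size"       # 1001, 1002, ... as logged below
    done
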
00:08:07.777   18:52:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 77127
00:08:07.777   18:52:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:08.714   18:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:08.972   18:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002
00:08:08.972   18:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002
00:08:08.972  true
00:08:08.972   18:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 77127
00:08:08.972   18:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:09.231   18:52:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:09.489   18:52:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003
00:08:09.490   18:52:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003
00:08:09.748  true
00:08:09.748   18:52:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 77127
00:08:09.748   18:52:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:10.685  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:10.685   18:52:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:10.943   18:52:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004
00:08:10.943   18:52:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004
00:08:11.202  true
00:08:11.202   18:52:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 77127
00:08:11.202   18:52:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:11.461   18:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:11.720   18:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005
00:08:11.720   18:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005
00:08:11.720  true
00:08:11.977   18:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 77127
00:08:11.977   18:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:12.545   18:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:12.803   18:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006
00:08:12.803   18:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006
00:08:13.062  true
00:08:13.062   18:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 77127
00:08:13.062   18:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:13.321   18:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:13.579   18:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007
00:08:13.579   18:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007
00:08:13.837  true
00:08:14.095   18:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 77127
00:08:14.095   18:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:14.095   18:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:14.662   18:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008
00:08:14.662   18:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008
00:08:14.662  true
00:08:14.662   18:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 77127
00:08:14.662   18:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:15.597   18:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:15.855   18:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009
00:08:15.855   18:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009
00:08:16.113  true
00:08:16.113   18:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 77127
00:08:16.113   18:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:16.372   18:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:16.630   18:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010
00:08:16.630   18:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010
00:08:16.889  true
00:08:16.889   18:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 77127
00:08:16.889   18:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:17.147   18:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:17.409   18:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011
00:08:17.409   18:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011
00:08:17.681  true
00:08:17.681   18:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 77127
00:08:17.681   18:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:18.622   18:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:18.880   18:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012
00:08:18.880   18:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012
00:08:19.138  true
00:08:19.138   18:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 77127
00:08:19.138   18:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:19.396   18:52:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:19.654   18:52:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013
00:08:19.654   18:52:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013
00:08:19.912  true
00:08:19.912   18:52:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 77127
00:08:19.912   18:52:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:20.848   18:52:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:20.848   18:52:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014
00:08:20.848   18:52:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014
00:08:21.106  true
00:08:21.106   18:52:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 77127
00:08:21.106   18:52:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:21.364   18:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:21.623   18:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015
00:08:21.623   18:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015
00:08:21.882  true
00:08:21.882   18:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 77127
00:08:21.882   18:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:22.818   18:52:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:23.076   18:52:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016
00:08:23.077   18:52:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016
00:08:23.077  true
00:08:23.335   18:52:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 77127
00:08:23.335   18:52:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:23.335   18:52:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:23.593   18:52:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017
00:08:23.593   18:52:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017
00:08:23.851  true
00:08:23.851   18:52:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 77127
00:08:23.851   18:52:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:24.786   18:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:25.044   18:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018
00:08:25.044   18:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018
00:08:25.044  true
00:08:25.302   18:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 77127
00:08:25.302   18:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:25.561   18:52:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:25.820   18:52:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019
00:08:25.820   18:52:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019
00:08:26.078  true
00:08:26.078   18:52:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 77127
00:08:26.078   18:52:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:26.645   18:52:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:26.904   18:52:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020
00:08:26.904   18:52:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020
00:08:27.162  true
00:08:27.162   18:52:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 77127
00:08:27.163   18:52:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:27.432   18:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:27.711   18:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021
00:08:27.711   18:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021
00:08:27.970  true
00:08:27.970   18:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 77127
00:08:27.970   18:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:28.905   18:53:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:28.905  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:28.905   18:53:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022
00:08:28.905   18:53:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022
00:08:29.163  true
00:08:29.163   18:53:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 77127
00:08:29.163   18:53:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:29.422   18:53:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:29.680   18:53:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023
00:08:29.680   18:53:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023
00:08:29.939  true
00:08:29.939   18:53:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 77127
00:08:29.939   18:53:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:30.874   18:53:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:30.874   18:53:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024
00:08:30.874   18:53:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024
00:08:31.133  true
00:08:31.133   18:53:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 77127
00:08:31.133   18:53:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:31.391   18:53:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:31.649   18:53:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025
00:08:31.649   18:53:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025
00:08:31.908  true
00:08:31.908   18:53:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 77127
00:08:31.908   18:53:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:32.844   18:53:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:33.102   18:53:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026
00:08:33.102   18:53:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
00:08:33.361  true
00:08:33.361   18:53:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 77127
00:08:33.361   18:53:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:33.619   18:53:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:33.619   18:53:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:08:33.619   18:53:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:08:33.877  true
00:08:34.136   18:53:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 77127
00:08:34.136   18:53:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:34.703   18:53:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:34.962   18:53:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:08:34.962   18:53:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:08:35.221  true
00:08:35.221   18:53:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 77127
00:08:35.221   18:53:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:35.480   18:53:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:35.737   18:53:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:08:35.737   18:53:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:08:35.996  true
00:08:35.996   18:53:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 77127
00:08:35.996   18:53:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:36.931  Initializing NVMe Controllers
00:08:36.931  Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:08:36.931  Controller IO queue size 128, less than required.
00:08:36.931  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:36.931  Controller IO queue size 128, less than required.
00:08:36.931  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:36.931  Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:08:36.931  Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:08:36.931  Initialization complete. Launching workers.
00:08:36.931  ========================================================
00:08:36.931                                                                                                               Latency(us)
00:08:36.931  Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:08:36.931  TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  0:     342.70       0.17  200243.31    2862.60 1072730.38
00:08:36.931  TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core  0:   12037.37       5.88   10633.24    3549.09  581172.62
00:08:36.931  ========================================================
00:08:36.931  Total                                                                    :   12380.07       6.04   15881.95    2862.60 1072730.38
00:08:36.931  
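
Note: the summary is internally consistent and worth a quick sanity check. NSID 1 is the Delay0-backed namespace that was being detached and re-attached throughout, NSID 2 the null bdev. Throughput per row is IOPS x 512 B: 342.70 x 512 / 2^20 ≈ 0.17 MiB/s and 12037.37 x 512 / 2^20 ≈ 5.88 MiB/s, as printed. The Total row is the IOPS-weighted combination of the two: 342.70 + 12037.37 = 12380.07 IOPS, and (342.70 x 200243.31 + 12037.37 x 10633.24) / 12380.07 ≈ 15882 us average latency, matching the 15881.95 shown.
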
00:08:36.931   18:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:37.190   18:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030
00:08:37.190   18:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
00:08:37.190  true
00:08:37.190   18:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 77127
00:08:37.190  /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (77127) - No such process
00:08:37.190   18:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 77127
00:08:37.190   18:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:37.757   18:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:37.757   18:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:08:37.757   18:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:08:37.757   18:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:08:37.757   18:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:37.757   18:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:08:38.016  null0
00:08:38.016   18:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:38.016   18:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:38.016   18:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:08:38.274  null1
00:08:38.274   18:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:38.274   18:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:38.274   18:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:08:38.532  null2
00:08:38.532   18:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:38.532   18:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:38.532   18:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:08:38.791  null3
00:08:38.791   18:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:38.791   18:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:38.791   18:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:08:39.050  null4
00:08:39.050   18:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:39.050   18:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:39.050   18:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:08:39.308  null5
00:08:39.308   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:39.308   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:39.308   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:08:39.567  null6
00:08:39.567   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:39.567   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:39.567   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096
00:08:39.826  null7
00:08:39.826   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:39.826   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:39.826   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
00:08:39.826   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:39.826   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:39.826   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0
00:08:39.826   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:39.826   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:39.826   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0
00:08:39.826   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:39.826   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:39.826   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:39.826   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1
00:08:39.826   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1
00:08:39.826   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:39.826   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:39.826   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:39.826   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:39.826   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:39.826   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:39.826   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:39.826   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2
00:08:39.826   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:39.826   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:39.826   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2
00:08:39.826   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:39.826   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:39.826   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:39.826   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3
00:08:39.826   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:39.826   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:39.826   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3
00:08:39.827   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:39.827   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:39.827   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:39.827   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:39.827   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:39.827   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:39.827   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:39.827   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4
00:08:39.827   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4
00:08:39.827   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:39.827   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:39.827   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:39.827   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:39.827   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:39.827   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:08:39.827   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:39.827   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:08:39.827   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:39.827   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:39.827   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:08:39.827   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:39.827   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:08:39.827   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:39.827   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:39.827   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:39.827   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:39.827   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:39.827   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:39.827   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:39.827   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:39.827   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:39.827   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 78192 78193 78196 78197 78199 78201 78202 78206
00:08:39.827   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:08:39.827   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:08:39.827   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:39.827   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:39.827   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:40.085   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:40.085   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:40.085   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:40.085   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:40.085   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:40.344   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:40.344   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:40.344   18:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
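For readability: the interleaved ns_hotplug_stress.sh@14-@18 trace lines above all come from the per-namespace add/remove worker. A minimal sketch of that worker, reconstructed from the trace (the exact wording in target/ns_hotplug_stress.sh may differ slightly; $rpc_py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py as invoked in the trace):

    # add_remove <nsid> <bdev>: repeatedly attach and detach one namespace on cnode1
    add_remove() {
        local nsid=$1 bdev=$2                                        # @14
        for ((i = 0; i < 10; i++)); do                               # @16
            # @17: attach the null bdev as namespace <nsid> of the subsystem
            $rpc_py nvmf_subsystem_add_ns -n $nsid nqn.2016-06.io.spdk:cnode1 $bdev
            # @18: detach it again right away to exercise the hotplug paths
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 $nsid
        done
    }

Eight of these workers run concurrently (null0 through null7), which is why their add/remove lines interleave in the log.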
00:08:40.344   18:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:40.344   18:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:40.344   18:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:40.344   18:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:40.344   18:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:40.344   18:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:40.344   18:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:40.344   18:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:40.344   18:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:40.344   18:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:40.344   18:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:40.344   18:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:40.603   18:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:40.603   18:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:40.603   18:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:40.603   18:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:40.603   18:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:40.603   18:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:40.603   18:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:40.603   18:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:40.603   18:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:40.603   18:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:40.603   18:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:40.603   18:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:40.603   18:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:40.603   18:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:40.603   18:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:40.862   18:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:40.862   18:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:40.862   18:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:40.862   18:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:40.862   18:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:40.862   18:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:40.862   18:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:40.862   18:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:40.862   18:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:40.862   18:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:40.862   18:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:41.121   18:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:41.121   18:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:41.121   18:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:41.121   18:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:41.121   18:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:41.121   18:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:41.121   18:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:41.121   18:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:41.121   18:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:41.121   18:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:41.121   18:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:41.121   18:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:41.121   18:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:41.121   18:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:41.121   18:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:41.121   18:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:41.121   18:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:41.121   18:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:41.381   18:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:41.381   18:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:41.381   18:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:41.381   18:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:41.381   18:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:41.381   18:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:41.381   18:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:41.381   18:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:41.650   18:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:41.650   18:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:41.650   18:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:41.650   18:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:41.650   18:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:41.650   18:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:41.650   18:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:41.650   18:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:41.650   18:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:41.650   18:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:41.650   18:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:41.650   18:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:41.650   18:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:41.650   18:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:41.650   18:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:41.650   18:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:41.650   18:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:41.650   18:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:41.650   18:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:41.650   18:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:41.650   18:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:41.922   18:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:41.922   18:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:41.922   18:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:41.922   18:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:41.922   18:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:41.922   18:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:41.922   18:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:41.922   18:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:41.922   18:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:42.181   18:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:42.181   18:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:42.181   18:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:42.181   18:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:42.181   18:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:42.181   18:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:42.181   18:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:42.181   18:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:42.181   18:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:42.181   18:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:42.181   18:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:42.181   18:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:42.181   18:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:42.181   18:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:42.181   18:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:42.181   18:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:42.181   18:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:42.440   18:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:42.440   18:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:42.440   18:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:42.440   18:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:42.440   18:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:42.440   18:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:42.440   18:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:42.440   18:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:42.440   18:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:42.440   18:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:42.440   18:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:42.440   18:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:42.440   18:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:42.699   18:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:42.699   18:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:42.699   18:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:42.699   18:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:42.699   18:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:42.699   18:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:42.699   18:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:42.699   18:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:42.699   18:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:42.699   18:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:42.958   18:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:42.958   18:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:42.958   18:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:42.958   18:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:42.958   18:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:42.958   18:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:42.958   18:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:42.958   18:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:42.958   18:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:42.958   18:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:42.958   18:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:42.958   18:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:42.959   18:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:42.959   18:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:42.959   18:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:42.959   18:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:42.959   18:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:42.959   18:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:42.959   18:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:42.959   18:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:43.217   18:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:43.217   18:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:43.217   18:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:43.217   18:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:43.217   18:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:43.217   18:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:43.217   18:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:43.217   18:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:43.217   18:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:43.483   18:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:43.483   18:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:43.483   18:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:43.483   18:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:43.483   18:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:43.483   18:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:43.483   18:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:43.483   18:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:43.483   18:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:43.483   18:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:43.483   18:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:43.483   18:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:43.483   18:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:43.483   18:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:43.483   18:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:43.483   18:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:43.483   18:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:43.483   18:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:43.483   18:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:43.743   18:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:43.743   18:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:43.743   18:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:43.743   18:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:43.743   18:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:43.743   18:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:43.743   18:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:43.743   18:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:43.743   18:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:44.001   18:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:44.001   18:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:44.001   18:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:44.001   18:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:44.001   18:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:44.001   18:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:44.001   18:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:44.001   18:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:44.001   18:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:44.001   18:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:44.001   18:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:44.001   18:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:44.001   18:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:44.001   18:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:44.001   18:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:44.001   18:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:44.001   18:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:44.001   18:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:44.001   18:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:44.260   18:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:44.260   18:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:44.260   18:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:44.260   18:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:44.260   18:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:44.260   18:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:44.260   18:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:44.260   18:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:44.260   18:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:44.260   18:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:44.519   18:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:44.519   18:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:44.519   18:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:44.519   18:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:44.519   18:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:44.519   18:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:44.519   18:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:44.519   18:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:44.519   18:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:44.519   18:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:44.519   18:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:44.519   18:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:44.519   18:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:44.519   18:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:44.519   18:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:44.519   18:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:44.777   18:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:44.777   18:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:44.777   18:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:44.777   18:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:44.777   18:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:44.777   18:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:44.777   18:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:44.777   18:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:44.777   18:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:44.777   18:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:44.777   18:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:44.777   18:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:44.777   18:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:44.777   18:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:44.777   18:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:44.778   18:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:45.036   18:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:45.036   18:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:45.036   18:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:45.036   18:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:45.036   18:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:45.036   18:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:45.036   18:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:45.036   18:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:45.036   18:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:45.036   18:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:45.295   18:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:45.295   18:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:45.295   18:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:45.295   18:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:45.295   18:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:45.295   18:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:45.295   18:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:45.295   18:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:45.295   18:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:45.295   18:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:45.295   18:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:45.295   18:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:45.295   18:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:45.295   18:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:45.295   18:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:45.295   18:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:45.295   18:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:45.295   18:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:45.295   18:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:45.295   18:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:45.554   18:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:45.554   18:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:45.554   18:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:45.554   18:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:45.554   18:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:45.554   18:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:45.554   18:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:45.554   18:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:45.554   18:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:45.812   18:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:45.812   18:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:45.812   18:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:45.812   18:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:45.812   18:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:45.812   18:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:45.812   18:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:45.812   18:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:45.812   18:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:45.812   18:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:45.812   18:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:46.071   18:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:46.071   18:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
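All add_remove workers have now completed their ten iterations. The launch-and-wait pattern that produced the @62-@66 trace lines near the start of this run looks roughly like the following sketch (reconstructed from the trace; nthreads and the pid bookkeeping follow the script's own variables):

    pids=()
    for ((i = 0; i < nthreads; i++)); do          # @62
        # @63: one background worker per namespace, null0..null7 -> nsid 1..8
        add_remove $((i + 1)) "null$i" &
        # @64: remember the worker's pid so the whole set can be waited on
        pids+=($!)
    done
    # @66: block until every hotplug worker has exited (wait 78192 78193 ... above)
    wait "${pids[@]}"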
00:08:46.071   18:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:08:46.071   18:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:08:46.071   18:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup
00:08:46.071   18:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync
00:08:46.071   18:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:08:46.071   18:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e
00:08:46.071   18:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:08:46.071   18:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:08:46.071  rmmod nvme_tcp
00:08:46.071  rmmod nvme_fabrics
00:08:46.071  rmmod nvme_keyring
00:08:46.071   18:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:08:46.071   18:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e
00:08:46.071   18:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0
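The nvmfcleanup steps above (common.sh@121-@129) unload the kernel NVMe/TCP modules with a bounded retry, since the modules can still be busy immediately after the target stops. A rough sketch of that pattern as it appears in the trace; the back-off between attempts is an assumption and is not visible in this log:

    sync                          # @121
    set +e                        # @124: tolerate transient "module in use" failures
    for i in {1..20}; do          # @125
        modprobe -v -r nvme-tcp       # @126: prints the rmmod lines above
        modprobe -v -r nvme-fabrics && break   # @127
        sleep 1                   # assumed delay between retries (not shown in the trace)
    done
    set -e                        # @128

Only one pass was needed here, so the loop returns after the first successful unload (rmmod nvme_tcp, nvme_fabrics, nvme_keyring above).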
00:08:46.071   18:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 77004 ']'
00:08:46.071   18:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 77004
00:08:46.071   18:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 77004 ']'
00:08:46.071   18:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 77004
00:08:46.071    18:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname
00:08:46.071   18:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:46.071    18:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77004
00:08:46.071  killing process with pid 77004
00:08:46.071   18:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:08:46.071   18:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:08:46.071   18:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77004'
00:08:46.071   18:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 77004
00:08:46.072   18:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 77004
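killprocess, traced above, is a guarded kill-and-reap: verify the pid is still alive, refuse to signal anything named sudo, then signal it and wait. A sketch of that shape (77004 is just this run's nvmf_tgt pid; the function name here is a stand-in, not the helper itself):

    # Stand-in for the killprocess helper seen in the trace.
    stop_app() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0          # already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")         # e.g. reactor_1 for nvmf_tgt
        [ "$name" = sudo ] && return 1                  # never signal a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true                 # wait works because the app is our child
    }
    stop_app 77004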
00:08:46.329   18:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:08:46.329   18:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:08:46.330   18:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:08:46.330   18:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr
00:08:46.330   18:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save
00:08:46.330   18:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore
00:08:46.330   18:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:08:46.330   18:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:08:46.330   18:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:08:46.330   18:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:08:46.330   18:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:08:46.330   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:08:46.330   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:08:46.330   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:08:46.330   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:08:46.330   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:08:46.330   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:08:46.330   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:08:46.330   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:08:46.330   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:08:46.330   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:08:46.588   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:08:46.588   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@246 -- # remove_spdk_ns
00:08:46.588   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:08:46.588   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:08:46.588    18:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:46.588   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@300 -- # return 0
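nvmf_tcp_fini then scrubs the SPDK iptables rules and dismantles the virtual fabric. A condensed sketch of the same teardown, using the interface names from this log; the final ip netns delete is an assumption about what _remove_spdk_ns amounts to:

    # Drop only the rules tagged SPDK_NVMF, keep the rest of the ruleset intact.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # Detach the bridge ports, drop the bridge, then the veth pairs and the namespace.
    for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$port" nomaster || true
        ip link set "$port" down || true
    done
    ip link delete nvmf_br type bridge || true
    ip link delete nvmf_init_if  || true
    ip link delete nvmf_init_if2 || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if  || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 || true
    ip netns delete nvmf_tgt_ns_spdk || true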
00:08:46.588  
00:08:46.588  real	0m43.732s
00:08:46.588  user	3m31.423s
00:08:46.588  sys	0m13.071s
00:08:46.588   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:46.588   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:08:46.588  ************************************
00:08:46.588  END TEST nvmf_ns_hotplug_stress
00:08:46.588  ************************************
00:08:46.588   18:53:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:08:46.588   18:53:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:08:46.588   18:53:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:46.588   18:53:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:08:46.588  ************************************
00:08:46.588  START TEST nvmf_delete_subsystem
00:08:46.588  ************************************
00:08:46.588   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:08:46.588  * Looking for test storage...
00:08:46.588  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:08:46.588    18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:08:46.588     18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version
00:08:46.588     18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:08:46.847    18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:08:46.847    18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:46.847    18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:46.847    18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:46.847    18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-:
00:08:46.847    18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1
00:08:46.847    18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-:
00:08:46.847    18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2
00:08:46.847    18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<'
00:08:46.847    18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2
00:08:46.847    18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1
00:08:46.847    18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:46.847    18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in
00:08:46.847    18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1
00:08:46.847    18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:46.847    18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:46.847     18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1
00:08:46.847     18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1
00:08:46.847     18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:46.847     18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1
00:08:46.847    18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1
00:08:46.847     18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2
00:08:46.847     18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2
00:08:46.847     18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:46.847     18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2
00:08:46.847    18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2
00:08:46.847    18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:46.847    18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:46.847    18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0
00:08:46.847    18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:46.847    18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:08:46.847  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:46.847  		--rc genhtml_branch_coverage=1
00:08:46.847  		--rc genhtml_function_coverage=1
00:08:46.847  		--rc genhtml_legend=1
00:08:46.847  		--rc geninfo_all_blocks=1
00:08:46.847  		--rc geninfo_unexecuted_blocks=1
00:08:46.847  		
00:08:46.847  		'
00:08:46.847    18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:08:46.847  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:46.847  		--rc genhtml_branch_coverage=1
00:08:46.847  		--rc genhtml_function_coverage=1
00:08:46.847  		--rc genhtml_legend=1
00:08:46.847  		--rc geninfo_all_blocks=1
00:08:46.847  		--rc geninfo_unexecuted_blocks=1
00:08:46.847  		
00:08:46.847  		'
00:08:46.847    18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:08:46.847  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:46.847  		--rc genhtml_branch_coverage=1
00:08:46.847  		--rc genhtml_function_coverage=1
00:08:46.847  		--rc genhtml_legend=1
00:08:46.847  		--rc geninfo_all_blocks=1
00:08:46.847  		--rc geninfo_unexecuted_blocks=1
00:08:46.847  		
00:08:46.847  		'
00:08:46.847    18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:08:46.847  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:46.847  		--rc genhtml_branch_coverage=1
00:08:46.847  		--rc genhtml_function_coverage=1
00:08:46.847  		--rc genhtml_legend=1
00:08:46.847  		--rc geninfo_all_blocks=1
00:08:46.847  		--rc geninfo_unexecuted_blocks=1
00:08:46.847  		
00:08:46.847  		'
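The cmp_versions trace above only decides whether the installed lcov is older than 2 before enabling the branch and function coverage flags. The same check can be sketched with a sort -V comparison instead of the per-field loop in scripts/common.sh (assumes GNU sort; only the rc flags are shown, not the full genhtml option set exported above):

    # Enable the extra coverage flags only for lcov < 2.
    lcov_ver=$(lcov --version | awk '{print $NF}')
    if [ "$lcov_ver" != 2 ] && \
       [ "$(printf '%s\n' "$lcov_ver" 2 | sort -V | head -n1)" = "$lcov_ver" ]; then
        export LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi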
00:08:46.847   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:08:46.847     18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s
00:08:46.847    18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:08:46.847    18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:08:46.847    18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:08:46.847    18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:08:46.847    18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:08:46.847    18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:08:46.847    18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:08:46.847    18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:08:46.847    18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:08:46.847     18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:08:46.847    18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:08:46.847    18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:08:46.847    18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:08:46.847    18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:08:46.847    18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:08:46.847    18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:08:46.847    18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:08:46.847     18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob
00:08:46.847     18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:08:46.847     18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:08:46.847     18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:08:46.847      18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:46.847      18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:46.848      18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:46.848      18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH
00:08:46.848      18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:46.848    18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0
00:08:46.848    18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:08:46.848    18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:08:46.848    18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:08:46.848    18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:08:46.848    18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:08:46.848    18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:08:46.848  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:08:46.848    18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:08:46.848    18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:08:46.848    18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0
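The '[: : integer expression expected' complaint above comes from common.sh line 33 comparing a variable that is empty in this environment with -eq; as the trace shows, the script simply falls through to the next branch and keeps going. A hedged sketch of the guarded form that avoids the message (the flag name is a hypothetical placeholder, not the one in common.sh):

    # An empty string on either side of -eq is a bash error; default it to 0 first.
    flag=${SOME_TEST_FLAG:-0}                # hypothetical name, for illustration only
    if [ "$flag" -eq 1 ]; then
        NVMF_APP+=("${NO_HUGE[@]}")          # extend the app args as build_nvmf_app_args does
    fi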
00:08:46.848   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit
00:08:46.848   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:08:46.848   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:08:46.848   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs
00:08:46.848   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no
00:08:46.848   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns
00:08:46.848   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:08:46.848   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:08:46.848    18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:46.848   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:08:46.848   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:08:46.848   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:08:46.848   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:08:46.848   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:08:46.848   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@460 -- # nvmf_veth_init
00:08:46.848   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:08:46.848   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:08:46.848   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:08:46.848   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:08:46.848   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:08:46.848   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:08:46.848   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:08:46.848   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:08:46.848   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:08:46.848   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:08:46.848   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:08:46.848   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:08:46.848   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:08:46.848   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:08:46.848   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:08:46.848   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:08:46.848   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:08:46.848  Cannot find device "nvmf_init_br"
00:08:46.848   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true
00:08:46.848   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:08:46.848  Cannot find device "nvmf_init_br2"
00:08:46.848   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true
00:08:46.848   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:08:46.848  Cannot find device "nvmf_tgt_br"
00:08:46.848   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # true
00:08:46.848   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:08:46.848  Cannot find device "nvmf_tgt_br2"
00:08:46.848   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # true
00:08:46.848   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:08:46.848  Cannot find device "nvmf_init_br"
00:08:46.848   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # true
00:08:46.848   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:08:46.848  Cannot find device "nvmf_init_br2"
00:08:46.848   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # true
00:08:46.848   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:08:46.848  Cannot find device "nvmf_tgt_br"
00:08:46.848   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@168 -- # true
00:08:46.848   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:08:46.848  Cannot find device "nvmf_tgt_br2"
00:08:46.848   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # true
00:08:46.848   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:08:46.848  Cannot find device "nvmf_br"
00:08:46.848   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # true
00:08:46.848   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:08:46.848  Cannot find device "nvmf_init_if"
00:08:46.848   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # true
00:08:46.848   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:08:46.848  Cannot find device "nvmf_init_if2"
00:08:46.848   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # true
00:08:46.848   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:08:46.848  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:08:46.848   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # true
00:08:46.848   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:08:46.848  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:08:46.848   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # true
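Every 'Cannot find device' and 'Cannot open network namespace' above is expected on a clean host: before building the fabric, nvmf_veth_init first tries to remove any leftovers from a previous run, and each failing ip command is immediately followed by true so the sequence keeps going. The pattern, in isolation:

    # Best-effort cleanup of leftovers; a missing device is not an error here.
    ip link delete nvmf_br type bridge || true
    ip link delete nvmf_init_if        || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true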
00:08:46.848   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:08:46.848   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:08:46.848   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:08:46.848   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:08:46.848   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:08:46.848   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:08:46.848   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:08:47.107   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:08:47.107   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:08:47.107   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:08:47.107   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:08:47.107   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:08:47.107   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:08:47.107   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:08:47.107   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:08:47.107   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:08:47.107   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:08:47.107   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:08:47.107   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:08:47.107   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:08:47.107   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:08:47.107   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:08:47.107   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:08:47.107   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:08:47.107   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:08:47.107   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:08:47.107   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:08:47.107   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:08:47.107   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:08:47.107   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:08:47.107   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:08:47.107   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
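Together, the commands above build the whole virtual test fabric: a network namespace for the target, four veth pairs whose *_if ends carry the 10.0.0.0/24 addresses, a bridge joining the *_br ends, and iptables ACCEPT rules tagged SPDK_NVMF so teardown can find them again. Condensed into one sketch with the names and addresses from this log:

    ip netns add nvmf_tgt_ns_spdk

    # veth pairs: the *_if ends hold addresses, the *_br ends join the bridge.
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

    # Target-side ends live in the namespace; initiator ends stay in the host.
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # Bring everything up and enslave the host-side ends to one bridge.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for l in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$l" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    for p in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$p" master nvmf_br
    done

    # Admit NVMe/TCP traffic on port 4420, tagged so the fini path can scrub the rules.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'

The four pings that follow are a bidirectional sanity check of that bridge: 10.0.0.3 and 10.0.0.4 from the host side, 10.0.0.1 and 10.0.0.2 from inside the namespace.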
00:08:47.107   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:08:47.107  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:08:47.107  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms
00:08:47.107  
00:08:47.107  --- 10.0.0.3 ping statistics ---
00:08:47.107  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:47.107  rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms
00:08:47.107   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:08:47.107  PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:08:47.107  64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms
00:08:47.107  
00:08:47.107  --- 10.0.0.4 ping statistics ---
00:08:47.107  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:47.107  rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms
00:08:47.107   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:08:47.107  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:08:47.107  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms
00:08:47.107  
00:08:47.107  --- 10.0.0.1 ping statistics ---
00:08:47.107  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:47.107  rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms
00:08:47.107   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:08:47.107  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:08:47.107  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms
00:08:47.107  
00:08:47.107  --- 10.0.0.2 ping statistics ---
00:08:47.107  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:47.107  rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms
00:08:47.107   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:08:47.107   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@461 -- # return 0
00:08:47.108   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:08:47.108   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:08:47.108   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:08:47.108   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:08:47.108   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:08:47.108   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:08:47.108   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:08:47.108   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3
00:08:47.108   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:08:47.108   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable
00:08:47.108   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:47.108   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=79587
00:08:47.108   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:08:47.108   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 79587
00:08:47.108   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 79587 ']'
00:08:47.108   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:47.108   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:47.108  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:47.108   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:47.108   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:47.108   18:53:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:47.108  [2024-12-13 18:53:18.928770] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:08:47.366  [2024-12-13 18:53:18.929499] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:47.366  [2024-12-13 18:53:19.073787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:08:47.366  [2024-12-13 18:53:19.110919] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:08:47.366  [2024-12-13 18:53:19.110968] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:08:47.366  [2024-12-13 18:53:19.110994] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:08:47.366  [2024-12-13 18:53:19.111001] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:08:47.366  [2024-12-13 18:53:19.111007] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:08:47.366  [2024-12-13 18:53:19.112187] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:08:47.366  [2024-12-13 18:53:19.112189] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:08:48.300   18:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:48.300   18:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0
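nvmfappstart launches nvmf_tgt inside the target namespace and waitforlisten then blocks until the RPC socket on /var/tmp/spdk.sock answers. A sketch of that start-and-wait with the binary path and flags from this run; polling rpc_get_methods is an assumed probe, not a copy of the helper:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!

    # Poll the UNIX-domain RPC socket until the target is ready, or bail if it died.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for _ in $(seq 1 100); do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done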
00:08:48.300   18:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:08:48.300   18:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable
00:08:48.300   18:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:48.300   18:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:08:48.300   18:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:08:48.300   18:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:48.300   18:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:48.300  [2024-12-13 18:53:19.951086] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:08:48.300   18:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:48.300   18:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:08:48.300   18:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:48.300   18:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:48.300   18:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:48.300   18:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:08:48.300   18:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:48.300   18:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:48.300  [2024-12-13 18:53:19.967480] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:08:48.300   18:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:48.300   18:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:08:48.300   18:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:48.300   18:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:48.300  NULL1
00:08:48.300   18:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:48.301   18:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:08:48.301   18:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:48.301   18:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:48.301  Delay0
00:08:48.301   18:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:48.301   18:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:48.301   18:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:48.301   18:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:48.301   18:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:48.301   18:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=79638
00:08:48.301   18:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4
00:08:48.301   18:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2
00:08:48.558  [2024-12-13 18:53:20.182008] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem.  This behavior is deprecated and will be removed in a future release.
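The RPC sequence above is the full target-side provisioning for this test: a TCP transport (with the -o -u 8192 options used by the test script), a subsystem limited to 10 namespaces, a listener on 10.0.0.3:4420, a null bdev wrapped in a delay bdev configured with 1000000 in each latency field (microseconds, so roughly a second of added latency per operation), and the delay bdev attached as a namespace; spdk_nvme_perf is then started against the listener. Collected into one sketch with the values from this log:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    "$rpc" nvmf_create_transport -t tcp -o -u 8192
    "$rpc" nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 10
    "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.3 -s 4420
    "$rpc" bdev_null_create NULL1 1000 512
    "$rpc" bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    "$rpc" nvmf_subsystem_add_ns "$nqn" Delay0

    # Drive mixed random I/O from the initiator side for 5 seconds while the test continues.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    sleep 2    # let the connections establish before the next step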
00:08:50.458   18:53:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:08:50.458   18:53:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:50.458   18:53:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
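With perf still queueing I/O against Delay0, the test now deletes the subsystem out from under it; the long run of 'completed with error (sct=0, sc=8)' and 'starting I/O failed: -6' entries that follows is the in-flight and newly submitted I/O failing as the subsystem and its queue pairs are torn down. Continuing the sketch above (perf_pid is the pid saved when perf was launched):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

    # perf exits once its connections drop; a non-zero status is tolerated here
    # because its outstanding commands were deliberately cut off.
    wait "$perf_pid" || true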
00:08:50.458  Write completed with error (sct=0, sc=8)
00:08:50.458  Read completed with error (sct=0, sc=8)
00:08:50.458  Read completed with error (sct=0, sc=8)
00:08:50.458  starting I/O failed: -6
00:08:50.458  Write completed with error (sct=0, sc=8)
00:08:50.458  Read completed with error (sct=0, sc=8)
00:08:50.458  Read completed with error (sct=0, sc=8)
00:08:50.458  Write completed with error (sct=0, sc=8)
00:08:50.458  starting I/O failed: -6
00:08:50.458  Write completed with error (sct=0, sc=8)
00:08:50.458  Read completed with error (sct=0, sc=8)
00:08:50.458  Write completed with error (sct=0, sc=8)
00:08:50.458  Read completed with error (sct=0, sc=8)
00:08:50.458  starting I/O failed: -6
00:08:50.458  Read completed with error (sct=0, sc=8)
00:08:50.458  Read completed with error (sct=0, sc=8)
00:08:50.458  Read completed with error (sct=0, sc=8)
00:08:50.458  Read completed with error (sct=0, sc=8)
00:08:50.458  starting I/O failed: -6
00:08:50.458  Read completed with error (sct=0, sc=8)
00:08:50.458  Read completed with error (sct=0, sc=8)
00:08:50.458  Read completed with error (sct=0, sc=8)
00:08:50.458  Read completed with error (sct=0, sc=8)
00:08:50.458  starting I/O failed: -6
00:08:50.458  Read completed with error (sct=0, sc=8)
00:08:50.458  Write completed with error (sct=0, sc=8)
00:08:50.458  Write completed with error (sct=0, sc=8)
00:08:50.458  Read completed with error (sct=0, sc=8)
00:08:50.458  starting I/O failed: -6
00:08:50.458  Write completed with error (sct=0, sc=8)
00:08:50.458  Read completed with error (sct=0, sc=8)
00:08:50.458  Read completed with error (sct=0, sc=8)
00:08:50.458  Read completed with error (sct=0, sc=8)
00:08:50.458  starting I/O failed: -6
00:08:50.458  Write completed with error (sct=0, sc=8)
00:08:50.458  Read completed with error (sct=0, sc=8)
00:08:50.458  Read completed with error (sct=0, sc=8)
00:08:50.458  Write completed with error (sct=0, sc=8)
00:08:50.458  starting I/O failed: -6
00:08:50.458  Read completed with error (sct=0, sc=8)
00:08:50.458  Write completed with error (sct=0, sc=8)
00:08:50.458  Write completed with error (sct=0, sc=8)
00:08:50.458  Read completed with error (sct=0, sc=8)
00:08:50.458  starting I/O failed: -6
00:08:50.458  Read completed with error (sct=0, sc=8)
00:08:50.458  Read completed with error (sct=0, sc=8)
00:08:50.458  Write completed with error (sct=0, sc=8)
00:08:50.458  Read completed with error (sct=0, sc=8)
00:08:50.458  starting I/O failed: -6
00:08:50.458  Read completed with error (sct=0, sc=8)
00:08:50.458  Read completed with error (sct=0, sc=8)
00:08:50.458  Read completed with error (sct=0, sc=8)
00:08:50.458  Write completed with error (sct=0, sc=8)
00:08:50.458  starting I/O failed: -6
00:08:50.458  Read completed with error (sct=0, sc=8)
00:08:50.458  Read completed with error (sct=0, sc=8)
00:08:50.458  Read completed with error (sct=0, sc=8)
00:08:50.458  Write completed with error (sct=0, sc=8)
00:08:50.458  starting I/O failed: -6
00:08:50.458  [2024-12-13 18:53:22.217519] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b4a0 is same with the state(6) to be set
00:08:50.458  Read completed with error (sct=0, sc=8)
00:08:50.458  Read completed with error (sct=0, sc=8)
00:08:50.458  Read completed with error (sct=0, sc=8)
00:08:50.458  Write completed with error (sct=0, sc=8)
00:08:50.458  Read completed with error (sct=0, sc=8)
00:08:50.458  Read completed with error (sct=0, sc=8)
00:08:50.458  Read completed with error (sct=0, sc=8)
00:08:50.458  Write completed with error (sct=0, sc=8)
00:08:50.458  Read completed with error (sct=0, sc=8)
00:08:50.458  Read completed with error (sct=0, sc=8)
00:08:50.458  Read completed with error (sct=0, sc=8)
00:08:50.458  Read completed with error (sct=0, sc=8)
00:08:50.458  Read completed with error (sct=0, sc=8)
00:08:50.458  Write completed with error (sct=0, sc=8)
00:08:50.458  Read completed with error (sct=0, sc=8)
00:08:50.458  Write completed with error (sct=0, sc=8)
00:08:50.458  Write completed with error (sct=0, sc=8)
00:08:50.458  Read completed with error (sct=0, sc=8)
00:08:50.458  Write completed with error (sct=0, sc=8)
00:08:50.458  Read completed with error (sct=0, sc=8)
00:08:50.458  Read completed with error (sct=0, sc=8)
00:08:50.458  Read completed with error (sct=0, sc=8)
00:08:50.458  Read completed with error (sct=0, sc=8)
00:08:50.458  Read completed with error (sct=0, sc=8)
00:08:50.458  Write completed with error (sct=0, sc=8)
00:08:50.458  Write completed with error (sct=0, sc=8)
00:08:50.458  Read completed with error (sct=0, sc=8)
00:08:50.458  Read completed with error (sct=0, sc=8)
00:08:50.458  Read completed with error (sct=0, sc=8)
00:08:50.458  Write completed with error (sct=0, sc=8)
00:08:50.458  Write completed with error (sct=0, sc=8)
00:08:50.458  Write completed with error (sct=0, sc=8)
00:08:50.459  Write completed with error (sct=0, sc=8)
00:08:50.459  Write completed with error (sct=0, sc=8)
00:08:50.459  Write completed with error (sct=0, sc=8)
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  Write completed with error (sct=0, sc=8)
00:08:50.459  Write completed with error (sct=0, sc=8)
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  Write completed with error (sct=0, sc=8)
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  Write completed with error (sct=0, sc=8)
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  Write completed with error (sct=0, sc=8)
00:08:50.459  Write completed with error (sct=0, sc=8)
00:08:50.459  Write completed with error (sct=0, sc=8)
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  Write completed with error (sct=0, sc=8)
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  starting I/O failed: -6
00:08:50.459  Write completed with error (sct=0, sc=8)
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  starting I/O failed: -6
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  starting I/O failed: -6
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  Write completed with error (sct=0, sc=8)
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  starting I/O failed: -6
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  Write completed with error (sct=0, sc=8)
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  Write completed with error (sct=0, sc=8)
00:08:50.459  starting I/O failed: -6
00:08:50.459  Write completed with error (sct=0, sc=8)
00:08:50.459  Write completed with error (sct=0, sc=8)
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  starting I/O failed: -6
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  Write completed with error (sct=0, sc=8)
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  starting I/O failed: -6
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  starting I/O failed: -6
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  starting I/O failed: -6
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  Write completed with error (sct=0, sc=8)
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  starting I/O failed: -6
00:08:50.459  Write completed with error (sct=0, sc=8)
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  Write completed with error (sct=0, sc=8)
00:08:50.459  [2024-12-13 18:53:22.218677] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5b3000d510 is same with the state(6) to be set
00:08:50.459  starting I/O failed: -6
00:08:50.459  starting I/O failed: -6
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  starting I/O failed: -6
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  Write completed with error (sct=0, sc=8)
00:08:50.459  starting I/O failed: -6
00:08:50.459  Write completed with error (sct=0, sc=8)
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  starting I/O failed: -6
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  starting I/O failed: -6
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  Write completed with error (sct=0, sc=8)
00:08:50.459  starting I/O failed: -6
00:08:50.459  Write completed with error (sct=0, sc=8)
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  starting I/O failed: -6
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  starting I/O failed: -6
00:08:50.459  Write completed with error (sct=0, sc=8)
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  starting I/O failed: -6
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  starting I/O failed: -6
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  starting I/O failed: -6
00:08:50.459  Write completed with error (sct=0, sc=8)
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  starting I/O failed: -6
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  Write completed with error (sct=0, sc=8)
00:08:50.459  starting I/O failed: -6
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  Write completed with error (sct=0, sc=8)
00:08:50.459  starting I/O failed: -6
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  Write completed with error (sct=0, sc=8)
00:08:50.459  starting I/O failed: -6
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  starting I/O failed: -6
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  Write completed with error (sct=0, sc=8)
00:08:50.459  starting I/O failed: -6
00:08:50.459  Write completed with error (sct=0, sc=8)
00:08:50.459  Write completed with error (sct=0, sc=8)
00:08:50.459  starting I/O failed: -6
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  starting I/O failed: -6
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  starting I/O failed: -6
00:08:50.459  Write completed with error (sct=0, sc=8)
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  starting I/O failed: -6
00:08:50.459  Write completed with error (sct=0, sc=8)
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  starting I/O failed: -6
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  starting I/O failed: -6
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  Write completed with error (sct=0, sc=8)
00:08:50.459  starting I/O failed: -6
00:08:50.459  Write completed with error (sct=0, sc=8)
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  starting I/O failed: -6
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  starting I/O failed: -6
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  starting I/O failed: -6
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  Read completed with error (sct=0, sc=8)
00:08:50.459  starting I/O failed: -6
00:08:50.459  [2024-12-13 18:53:22.219588] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5b30000c80 is same with the state(6) to be set
00:08:51.393  [2024-12-13 18:53:23.195634] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa8ac0 is same with the state(6) to be set
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Write completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Write completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Write completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Write completed with error (sct=0, sc=8)
00:08:51.651  Write completed with error (sct=0, sc=8)
00:08:51.651  Write completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Write completed with error (sct=0, sc=8)
00:08:51.651  Write completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Write completed with error (sct=0, sc=8)
00:08:51.651  Write completed with error (sct=0, sc=8)
00:08:51.651  Write completed with error (sct=0, sc=8)
00:08:51.651  Write completed with error (sct=0, sc=8)
00:08:51.651  Write completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Write completed with error (sct=0, sc=8)
00:08:51.651  Write completed with error (sct=0, sc=8)
00:08:51.651  Write completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Write completed with error (sct=0, sc=8)
00:08:51.651  [2024-12-13 18:53:23.217658] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5b3000d060 is same with the state(6) to be set
00:08:51.651  Write completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Write completed with error (sct=0, sc=8)
00:08:51.651  Write completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Write completed with error (sct=0, sc=8)
00:08:51.651  Write completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Write completed with error (sct=0, sc=8)
00:08:51.651  Write completed with error (sct=0, sc=8)
00:08:51.651  Write completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Write completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Write completed with error (sct=0, sc=8)
00:08:51.651  Write completed with error (sct=0, sc=8)
00:08:51.651  Write completed with error (sct=0, sc=8)
00:08:51.651  Write completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Write completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Write completed with error (sct=0, sc=8)
00:08:51.651  Write completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Write completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  [2024-12-13 18:53:23.218175] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5b3000d840 is same with the state(6) to be set
00:08:51.651  Write completed with error (sct=0, sc=8)
00:08:51.651  Write completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Write completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Write completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Write completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  [2024-12-13 18:53:23.219048] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b2c0 is same with the state(6) to be set
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Write completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Write completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Write completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Write completed with error (sct=0, sc=8)
00:08:51.651  Write completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  Read completed with error (sct=0, sc=8)
00:08:51.651  [2024-12-13 18:53:23.219688] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8d470 is same with the state(6) to be set
00:08:51.651  Initializing NVMe Controllers
00:08:51.651  Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:08:51.651  Controller IO queue size 128, less than required.
00:08:51.651  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:51.651  Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:08:51.651  Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:08:51.651  Initialization complete. Launching workers.
00:08:51.651  ========================================================
00:08:51.651                                                                                                               Latency(us)
00:08:51.651  Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:08:51.651  TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  2:     173.82       0.08  888026.82     405.48 1011291.17
00:08:51.651  TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  3:     174.81       0.09  951016.60     707.07 2002899.31
00:08:51.651  ========================================================
00:08:51.651  Total                                                                    :     348.64       0.17  919611.44     405.48 2002899.31
00:08:51.651  
00:08:51.651  [2024-12-13 18:53:23.220296] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa8ac0 (9): Bad file descriptor
00:08:51.651  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred
00:08:51.651   18:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:51.651   18:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:08:51.651   18:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 79638
00:08:51.651   18:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:08:51.909   18:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:08:51.910   18:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 79638
00:08:51.910  /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (79638) - No such process
00:08:51.910   18:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 79638
00:08:51.910   18:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:08:51.910   18:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 79638
00:08:51.910   18:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait
00:08:51.910   18:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:51.910    18:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait
00:08:52.167   18:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:52.167   18:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 79638
00:08:52.167   18:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1
00:08:52.167   18:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:08:52.167   18:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:08:52.167   18:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:08:52.167   18:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:08:52.167   18:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:52.167   18:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:52.167   18:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:52.167   18:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:08:52.167   18:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:52.167   18:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:52.167  [2024-12-13 18:53:23.747611] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:08:52.167   18:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:52.167   18:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:52.167   18:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:52.167   18:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:52.167   18:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:52.167   18:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=79689
00:08:52.167   18:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4
00:08:52.167   18:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0
00:08:52.167   18:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 79689
00:08:52.167   18:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:08:52.167  [2024-12-13 18:53:23.936241] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem.  This behavior is deprecated and will be removed in a future release.
00:08:52.733   18:53:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:08:52.733   18:53:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 79689
00:08:52.733   18:53:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:08:52.991   18:53:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:08:52.992   18:53:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 79689
00:08:52.992   18:53:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:08:53.567   18:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:08:53.567   18:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 79689
00:08:53.567   18:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:08:54.149   18:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:08:54.149   18:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 79689
00:08:54.149   18:53:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:08:54.716   18:53:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:08:54.716   18:53:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 79689
00:08:54.716   18:53:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:08:54.974   18:53:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:08:54.974   18:53:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 79689
00:08:54.974   18:53:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:08:55.232  Initializing NVMe Controllers
00:08:55.232  Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:08:55.232  Controller IO queue size 128, less than required.
00:08:55.232  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:55.232  Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:08:55.232  Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:08:55.232  Initialization complete. Launching workers.
00:08:55.232  ========================================================
00:08:55.232                                                                                                               Latency(us)
00:08:55.232  Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:08:55.232  TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  2:     128.00       0.06 1002847.07 1000150.53 1009685.27
00:08:55.233  TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  3:     128.00       0.06 1005213.13 1000310.93 1041669.08
00:08:55.233  ========================================================
00:08:55.233  Total                                                                    :     256.00       0.12 1004030.10 1000150.53 1041669.08
00:08:55.233  
00:08:55.491   18:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:08:55.491   18:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 79689
00:08:55.491  /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (79689) - No such process
00:08:55.491   18:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 79689
00:08:55.491   18:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:08:55.491   18:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:08:55.491   18:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:08:55.491   18:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:08:55.751   18:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:08:55.751   18:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:08:55.751   18:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:08:55.751   18:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:08:55.751  rmmod nvme_tcp
00:08:55.751  rmmod nvme_fabrics
00:08:55.751  rmmod nvme_keyring
00:08:55.751   18:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:08:55.751   18:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:08:55.751   18:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:08:55.751   18:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 79587 ']'
00:08:55.751   18:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 79587
00:08:55.751   18:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 79587 ']'
00:08:55.751   18:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 79587
00:08:55.751    18:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname
00:08:55.751   18:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:55.751    18:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79587
00:08:55.751   18:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:55.751   18:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:55.751  killing process with pid 79587
00:08:55.751   18:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79587'
00:08:55.751   18:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 79587
00:08:55.751   18:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 79587
00:08:56.010   18:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:08:56.010   18:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:08:56.010   18:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:08:56.010   18:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr
00:08:56.010   18:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save
00:08:56.010   18:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:08:56.010   18:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore
00:08:56.010   18:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:08:56.010   18:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:08:56.010   18:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:08:56.010   18:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:08:56.010   18:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:08:56.010   18:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:08:56.010   18:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:08:56.010   18:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:08:56.010   18:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:08:56.010   18:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:08:56.010   18:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:08:56.010   18:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:08:56.010   18:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:08:56.010   18:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:08:56.010   18:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:08:56.010   18:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@246 -- # remove_spdk_ns
00:08:56.010   18:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:08:56.010   18:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:08:56.010    18:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:56.270   18:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@300 -- # return 0
00:08:56.270  
00:08:56.270  real	0m9.570s
00:08:56.270  user	0m28.916s
00:08:56.270  sys	0m1.615s
00:08:56.270   18:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:56.270  ************************************
00:08:56.270   18:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:56.270  END TEST nvmf_delete_subsystem
00:08:56.270  ************************************
00:08:56.270   18:53:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp
00:08:56.270   18:53:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:08:56.270   18:53:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:56.270   18:53:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:08:56.270  ************************************
00:08:56.270  START TEST nvmf_host_management
00:08:56.270  ************************************
00:08:56.270   18:53:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp
00:08:56.270  * Looking for test storage...
00:08:56.270  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:08:56.270    18:53:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:08:56.270     18:53:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version
00:08:56.270     18:53:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:08:56.270    18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:08:56.270    18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:56.270    18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:56.270    18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:56.270    18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-:
00:08:56.270    18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1
00:08:56.270    18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-:
00:08:56.270    18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2
00:08:56.270    18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<'
00:08:56.270    18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2
00:08:56.270    18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1
00:08:56.270    18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:56.270    18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in
00:08:56.270    18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1
00:08:56.270    18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:56.270    18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:56.270     18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1
00:08:56.270     18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1
00:08:56.270     18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:56.270     18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1
00:08:56.270    18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1
00:08:56.270     18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2
00:08:56.270     18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2
00:08:56.270     18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:56.270     18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2
00:08:56.270    18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2
00:08:56.270    18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:56.270    18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:56.270    18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0
00:08:56.270    18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:56.270    18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:08:56.270  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:56.270  		--rc genhtml_branch_coverage=1
00:08:56.270  		--rc genhtml_function_coverage=1
00:08:56.270  		--rc genhtml_legend=1
00:08:56.270  		--rc geninfo_all_blocks=1
00:08:56.270  		--rc geninfo_unexecuted_blocks=1
00:08:56.270  		
00:08:56.270  		'
00:08:56.270    18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:08:56.270  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:56.270  		--rc genhtml_branch_coverage=1
00:08:56.270  		--rc genhtml_function_coverage=1
00:08:56.270  		--rc genhtml_legend=1
00:08:56.270  		--rc geninfo_all_blocks=1
00:08:56.270  		--rc geninfo_unexecuted_blocks=1
00:08:56.270  		
00:08:56.270  		'
00:08:56.270    18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:08:56.270  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:56.270  		--rc genhtml_branch_coverage=1
00:08:56.270  		--rc genhtml_function_coverage=1
00:08:56.270  		--rc genhtml_legend=1
00:08:56.270  		--rc geninfo_all_blocks=1
00:08:56.270  		--rc geninfo_unexecuted_blocks=1
00:08:56.270  		
00:08:56.270  		'
00:08:56.270    18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:08:56.270  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:56.270  		--rc genhtml_branch_coverage=1
00:08:56.270  		--rc genhtml_function_coverage=1
00:08:56.270  		--rc genhtml_legend=1
00:08:56.270  		--rc geninfo_all_blocks=1
00:08:56.270  		--rc geninfo_unexecuted_blocks=1
00:08:56.270  		
00:08:56.270  		'
00:08:56.270   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:08:56.270     18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s
00:08:56.270    18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:08:56.270    18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:08:56.270    18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:08:56.270    18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:08:56.270    18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:08:56.270    18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:08:56.270    18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:08:56.270    18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:08:56.270    18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:08:56.270     18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:08:56.270    18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:08:56.270    18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:08:56.270    18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:08:56.270    18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:08:56.270    18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:08:56.270    18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:08:56.270    18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:08:56.270     18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob
00:08:56.270     18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:08:56.270     18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:08:56.270     18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:08:56.270      18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:56.270      18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:56.270      18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:56.270      18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH
00:08:56.271      18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:56.271    18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0
00:08:56.271    18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:08:56.271    18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:08:56.271    18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:08:56.271    18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:08:56.271    18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:08:56.271    18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:08:56.271  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:08:56.271    18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:08:56.271    18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:08:56.271    18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0
00:08:56.271   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64
00:08:56.271   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:08:56.271   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit
00:08:56.271   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:08:56.271   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:08:56.271   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs
00:08:56.271   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no
00:08:56.271   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns
00:08:56.271   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:08:56.271   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:08:56.271    18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:56.271   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:08:56.271   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:08:56.271   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:08:56.271   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:08:56.271   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:08:56.271   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init
00:08:56.271   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:08:56.271   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:08:56.271   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:08:56.271   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:08:56.271   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:08:56.271   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:08:56.271   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:08:56.271   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:08:56.271   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:08:56.271   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:08:56.271   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:08:56.271   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:08:56.271   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:08:56.271   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:08:56.271   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:08:56.271   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:08:56.271   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:08:56.530  Cannot find device "nvmf_init_br"
00:08:56.530   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true
00:08:56.530   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:08:56.530  Cannot find device "nvmf_init_br2"
00:08:56.530   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true
00:08:56.530   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:08:56.530  Cannot find device "nvmf_tgt_br"
00:08:56.530   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true
00:08:56.530   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:08:56.530  Cannot find device "nvmf_tgt_br2"
00:08:56.530   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true
00:08:56.530   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:08:56.530  Cannot find device "nvmf_init_br"
00:08:56.530   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true
00:08:56.530   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:08:56.530  Cannot find device "nvmf_init_br2"
00:08:56.530   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true
00:08:56.530   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:08:56.530  Cannot find device "nvmf_tgt_br"
00:08:56.530   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true
00:08:56.530   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:08:56.530  Cannot find device "nvmf_tgt_br2"
00:08:56.530   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true
00:08:56.530   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:08:56.530  Cannot find device "nvmf_br"
00:08:56.530   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true
00:08:56.530   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:08:56.530  Cannot find device "nvmf_init_if"
00:08:56.530   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true
00:08:56.530   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:08:56.530  Cannot find device "nvmf_init_if2"
00:08:56.530   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true
00:08:56.530   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:08:56.530  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:08:56.530   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true
00:08:56.530   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:08:56.530  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:08:56.530   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true
00:08:56.530   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:08:56.530   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:08:56.530   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:08:56.530   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:08:56.530   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:08:56.530   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:08:56.530   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:08:56.530   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:08:56.530   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:08:56.789   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:08:56.789   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:08:56.789   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:08:56.789   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:08:56.789   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:08:56.789   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:08:56.789   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:08:56.789   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:08:56.789   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:08:56.789   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:08:56.789   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:08:56.789   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:08:56.789   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:08:56.789   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:08:56.789   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:08:56.789   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:08:56.789   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:08:56.789   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:08:56.789   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:08:56.789   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:08:56.789   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:08:56.789   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:08:56.789   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:08:56.789   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:08:56.789  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:08:56.789  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms
00:08:56.789  
00:08:56.789  --- 10.0.0.3 ping statistics ---
00:08:56.789  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:56.789  rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms
00:08:56.789   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:08:56.789  PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:08:56.789  64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms
00:08:56.789  
00:08:56.789  --- 10.0.0.4 ping statistics ---
00:08:56.789  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:56.789  rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms
00:08:56.790   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:08:56.790  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:08:56.790  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms
00:08:56.790  
00:08:56.790  --- 10.0.0.1 ping statistics ---
00:08:56.790  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:56.790  rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms
00:08:56.790   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:08:56.790  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:08:56.790  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms
00:08:56.790  
00:08:56.790  --- 10.0.0.2 ping statistics ---
00:08:56.790  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:56.790  rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms
00:08:56.790   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:08:56.790   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@461 -- # return 0
00:08:56.790   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:08:56.790   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:08:56.790   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:08:56.790   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:08:56.790   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:08:56.790   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:08:56.790   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:08:56.790   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management
00:08:56.790   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget
00:08:56.790   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E
00:08:56.790   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:08:56.790   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable
00:08:56.790   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:08:56.790   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=79967
00:08:56.790   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:08:56.790   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 79967
00:08:56.790   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 79967 ']'
00:08:56.790   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:56.790   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:56.790   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:56.790  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:56.790   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:56.790   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:08:56.790  [2024-12-13 18:53:28.575750] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:08:56.790  [2024-12-13 18:53:28.575851] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:57.048  [2024-12-13 18:53:28.718143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:08:57.048  [2024-12-13 18:53:28.753478] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:08:57.048  [2024-12-13 18:53:28.753738] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:08:57.049  [2024-12-13 18:53:28.753809] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:08:57.049  [2024-12-13 18:53:28.753890] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:08:57.049  [2024-12-13 18:53:28.753984] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:08:57.049  [2024-12-13 18:53:28.755152] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:08:57.049  [2024-12-13 18:53:28.755292] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:08:57.049  [2024-12-13 18:53:28.755475] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4
00:08:57.049  [2024-12-13 18:53:28.755479] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:08:57.307   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:57.307   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0
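nvmfappstart (nvmf/common.sh@507-@511) launches nvmf_tgt inside the namespace and then blocks in waitforlisten until pid 79967 is serving RPCs on /var/tmp/spdk.sock. The real helper lives in common/autotest_common.sh; a simplified loop in the same spirit (the retry count, interval and the rpc_get_methods probe are illustrative choices, not the exact implementation):

  # Illustrative wait-for-RPC-socket loop.
  waitforlisten_sketch() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
      for ((i = 100; i != 0; i--)); do
          kill -0 "$pid" 2>/dev/null || return 1   # give up if the target already died
          if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
              return 0                             # socket is up and answering
          fi
          sleep 0.1
      done
      return 1
  }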
00:08:57.307   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:08:57.307   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable
00:08:57.307   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:08:57.307   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:08:57.307   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:08:57.307   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:57.307   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:08:57.308  [2024-12-13 18:53:28.931188] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:08:57.308   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
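With the target answering RPCs, host_management.sh@18 creates the TCP transport. rpc_cmd is a wrapper around scripts/rpc.py aimed at the default socket, so the call above is roughly equivalent to the direct invocation below ('-u 8192' most likely sets the transport I/O unit size; the other flags are passed through from NVMF_TRANSPORT_OPTS):

  # Roughly what rpc_cmd expands to for the transport-creation step (sketch).
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
      nvmf_create_transport -t tcp -o -u 8192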
00:08:57.308   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem
00:08:57.308   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable
00:08:57.308   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:08:57.308   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt
00:08:57.308   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat
00:08:57.308   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd
00:08:57.308   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:57.308   18:53:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:08:57.308  Malloc0
00:08:57.308  [2024-12-13 18:53:29.008083] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:08:57.308   18:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:57.308   18:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems
00:08:57.308   18:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable
00:08:57.308   18:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
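host_management.sh@22-@30 assemble the target configuration by cat-ing a block of RPC lines into one rpc_cmd call; the bare "Malloc0" line and the "Listening on 10.0.0.3 port 4420" notice are the visible results of that batch. A hand-written equivalent is sketched below; the malloc size/block size, serial number, and exact flag spellings are assumptions, while the bdev name, NQNs, address and port are taken from this log:

  # Sketch of the per-test subsystem setup.
  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
  $RPC bdev_malloc_create -b Malloc0 64 512                                       # size/block size assumed
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK0                  # serial number assumed
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

Leaving the subsystem closed (no allow-any-host flag) and whitelisting host0 explicitly is what makes the remove_host step later in the run an effective way to cut the initiator off.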
00:08:57.308  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:08:57.308   18:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=80032
00:08:57.308   18:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 80032 /var/tmp/bdevperf.sock
00:08:57.308   18:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 80032 ']'
00:08:57.308   18:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:08:57.308   18:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:57.308   18:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:08:57.308   18:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:57.308   18:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:08:57.308   18:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10
00:08:57.308    18:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0
00:08:57.308    18:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=()
00:08:57.308    18:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config
00:08:57.308    18:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:08:57.308    18:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:08:57.308  {
00:08:57.308    "params": {
00:08:57.308      "name": "Nvme$subsystem",
00:08:57.308      "trtype": "$TEST_TRANSPORT",
00:08:57.308      "traddr": "$NVMF_FIRST_TARGET_IP",
00:08:57.308      "adrfam": "ipv4",
00:08:57.308      "trsvcid": "$NVMF_PORT",
00:08:57.308      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:08:57.308      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:08:57.308      "hdgst": ${hdgst:-false},
00:08:57.308      "ddgst": ${ddgst:-false}
00:08:57.308    },
00:08:57.308    "method": "bdev_nvme_attach_controller"
00:08:57.308  }
00:08:57.308  EOF
00:08:57.308  )")
00:08:57.308     18:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat
00:08:57.308    18:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq .
00:08:57.308     18:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=,
00:08:57.308     18:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:08:57.308    "params": {
00:08:57.308      "name": "Nvme0",
00:08:57.308      "trtype": "tcp",
00:08:57.308      "traddr": "10.0.0.3",
00:08:57.308      "adrfam": "ipv4",
00:08:57.308      "trsvcid": "4420",
00:08:57.308      "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:08:57.308      "hostnqn": "nqn.2016-06.io.spdk:host0",
00:08:57.308      "hdgst": false,
00:08:57.308      "ddgst": false
00:08:57.308    },
00:08:57.308    "method": "bdev_nvme_attach_controller"
00:08:57.308  }'
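gen_nvmf_target_json (nvmf/common.sh@560-@586) expands the heredoc template once per subsystem id, lets the shell substitute TEST_TRANSPORT, NVMF_FIRST_TARGET_IP and NVMF_PORT, and normalizes the result with jq; the block printed above is the bdev_nvme_attach_controller entry bdevperf will replay. A reduced sketch that produces the same fragment (the full helper presumably also wraps it in the subsystems/bdev layout a --json config needs; only the part visible here is reproduced, and the helper name is illustrative):

  # Reduced sketch: emit the attach-controller entry for subsystem id $1.
  gen_nvme_entry_sketch() {
      local n=${1:-0}
      jq -n --arg n "$n" '{
          params: {
              name: ("Nvme" + $n),
              trtype: "tcp",
              traddr: "10.0.0.3",
              adrfam: "ipv4",
              trsvcid: "4420",
              subnqn: ("nqn.2016-06.io.spdk:cnode" + $n),
              hostnqn: ("nqn.2016-06.io.spdk:host" + $n),
              hdgst: false,
              ddgst: false
          },
          method: "bdev_nvme_attach_controller"
      }'
  }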
00:08:57.308  [2024-12-13 18:53:29.114705] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:08:57.308  [2024-12-13 18:53:29.114940] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80032 ]
00:08:57.567  [2024-12-13 18:53:29.264194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:57.567  [2024-12-13 18:53:29.303270] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:08:57.825  Running I/O for 10 seconds...
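host_management.sh@72-@74 start bdevperf on its own RPC socket with the generated JSON delivered over /dev/fd/63 (a process substitution), then wait on that socket the same way the target was waited on. The same run expressed as a standalone command, with the config in a regular file instead of a file descriptor:

  # Sketch of an equivalent standalone invocation; config.json holds the JSON
  # shown above wrapped in the layout bdevperf expects.
  # -r: private RPC socket   -q 64: queue depth    -o 65536: 64 KiB I/Os
  # -w verify: verification workload               -t 10: run for 10 seconds
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -r /var/tmp/bdevperf.sock --json config.json \
      -q 64 -o 65536 -w verify -t 10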
00:08:57.825   18:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:57.825   18:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0
00:08:57.825   18:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:08:57.825   18:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:57.825   18:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:08:57.825   18:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:57.825   18:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:08:57.825   18:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1
00:08:57.825   18:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:08:57.825   18:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']'
00:08:57.825   18:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1
00:08:57.825   18:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i
00:08:57.825   18:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 ))
00:08:57.825   18:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 ))
00:08:57.825    18:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1
00:08:57.825    18:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops'
00:08:57.825    18:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:57.825    18:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:08:57.825    18:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:57.825   18:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67
00:08:57.825   18:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']'
00:08:57.825   18:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25
00:08:58.084   18:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- ))
00:08:58.084   18:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 ))
00:08:58.084    18:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1
00:08:58.084    18:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops'
00:08:58.084    18:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:58.084    18:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:08:58.084    18:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:58.344   18:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579
00:08:58.344   18:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']'
00:08:58.344   18:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0
00:08:58.344   18:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break
00:08:58.344   18:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0
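waitforio (host_management.sh@45-@64, shown unwinding above) makes sure the perf job has real traffic in flight before the access check: it polls bdev_get_iostat up to ten times, a quarter second apart, and succeeds once num_read_ops reaches 100 (67 on the first poll here, 579 on the second). A compact sketch of that loop:

  # Compact sketch of the waitforio polling loop.
  waitforio_sketch() {
      local sock=$1 bdev=$2 i count ret=1
      for ((i = 10; i != 0; i--)); do
          count=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" \
                      bdev_get_iostat -b "$bdev" | jq -r '.bdevs[0].num_read_ops')
          if [[ $count -ge 100 ]]; then
              ret=0
              break
          fi
          sleep 0.25
      done
      return $ret
  }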
00:08:58.344   18:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:08:58.344   18:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:58.344   18:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:08:58.344  [2024-12-13 18:53:29.937640] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f3eb0 is same with the state(6) to be set
00:08:58.344  [2024-12-13 18:53:29.937708] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f3eb0 is same with the state(6) to be set
00:08:58.344  [2024-12-13 18:53:29.937719] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f3eb0 is same with the state(6) to be set
00:08:58.344  [2024-12-13 18:53:29.937727] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f3eb0 is same with the state(6) to be set
00:08:58.344  [2024-12-13 18:53:29.937734] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f3eb0 is same with the state(6) to be set
00:08:58.344  [2024-12-13 18:53:29.937742] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f3eb0 is same with the state(6) to be set
00:08:58.344  [2024-12-13 18:53:29.937751] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f3eb0 is same with the state(6) to be set
00:08:58.344  [2024-12-13 18:53:29.937758] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f3eb0 is same with the state(6) to be set
00:08:58.344  [2024-12-13 18:53:29.937766] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f3eb0 is same with the state(6) to be set
00:08:58.344  [2024-12-13 18:53:29.937774] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f3eb0 is same with the state(6) to be set
00:08:58.344  [2024-12-13 18:53:29.937781] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f3eb0 is same with the state(6) to be set
00:08:58.344  [2024-12-13 18:53:29.937789] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f3eb0 is same with the state(6) to be set
00:08:58.344  [2024-12-13 18:53:29.937802] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f3eb0 is same with the state(6) to be set
00:08:58.344  [2024-12-13 18:53:29.937809] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f3eb0 is same with the state(6) to be set
00:08:58.344  [2024-12-13 18:53:29.937817] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f3eb0 is same with the state(6) to be set
00:08:58.344  [2024-12-13 18:53:29.937824] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f3eb0 is same with the state(6) to be set
00:08:58.344  [2024-12-13 18:53:29.937831] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f3eb0 is same with the state(6) to be set
00:08:58.344  [2024-12-13 18:53:29.937838] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f3eb0 is same with the state(6) to be set
00:08:58.344  [2024-12-13 18:53:29.937845] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f3eb0 is same with the state(6) to be set
00:08:58.344  [2024-12-13 18:53:29.937852] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f3eb0 is same with the state(6) to be set
00:08:58.344  [2024-12-13 18:53:29.937859] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f3eb0 is same with the state(6) to be set
00:08:58.344  [2024-12-13 18:53:29.937866] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f3eb0 is same with the state(6) to be set
00:08:58.344  [2024-12-13 18:53:29.937890] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f3eb0 is same with the state(6) to be set
00:08:58.344  [2024-12-13 18:53:29.937897] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f3eb0 is same with the state(6) to be set
00:08:58.344  [2024-12-13 18:53:29.937920] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f3eb0 is same with the state(6) to be set
00:08:58.344  [2024-12-13 18:53:29.937928] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f3eb0 is same with the state(6) to be set
00:08:58.344  [2024-12-13 18:53:29.937935] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f3eb0 is same with the state(6) to be set
00:08:58.344  [2024-12-13 18:53:29.937943] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f3eb0 is same with the state(6) to be set
00:08:58.344  [2024-12-13 18:53:29.937951] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f3eb0 is same with the state(6) to be set
00:08:58.344  [2024-12-13 18:53:29.937959] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f3eb0 is same with the state(6) to be set
00:08:58.344  [2024-12-13 18:53:29.937966] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f3eb0 is same with the state(6) to be set
00:08:58.344  [2024-12-13 18:53:29.937974] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f3eb0 is same with the state(6) to be set
00:08:58.344  [2024-12-13 18:53:29.937989] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f3eb0 is same with the state(6) to be set
00:08:58.344  [2024-12-13 18:53:29.937998] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f3eb0 is same with the state(6) to be set
00:08:58.344  [2024-12-13 18:53:29.938006] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f3eb0 is same with the state(6) to be set
00:08:58.344  [2024-12-13 18:53:29.938014] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f3eb0 is same with the state(6) to be set
00:08:58.344  [2024-12-13 18:53:29.938021] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f3eb0 is same with the state(6) to be set
00:08:58.344  [2024-12-13 18:53:29.938029] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f3eb0 is same with the state(6) to be set
00:08:58.344  [2024-12-13 18:53:29.938037] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f3eb0 is same with the state(6) to be set
00:08:58.344  [2024-12-13 18:53:29.938044] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f3eb0 is same with the state(6) to be set
00:08:58.344  [2024-12-13 18:53:29.938052] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f3eb0 is same with the state(6) to be set
00:08:58.344  [2024-12-13 18:53:29.938060] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f3eb0 is same with the state(6) to be set
00:08:58.344  [2024-12-13 18:53:29.938068] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f3eb0 is same with the state(6) to be set
00:08:58.344  [2024-12-13 18:53:29.938075] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f3eb0 is same with the state(6) to be set
00:08:58.344  [2024-12-13 18:53:29.938082] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f3eb0 is same with the state(6) to be set
00:08:58.344  [2024-12-13 18:53:29.938090] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f3eb0 is same with the state(6) to be set
00:08:58.344  [2024-12-13 18:53:29.938098] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f3eb0 is same with the state(6) to be set
00:08:58.344  [2024-12-13 18:53:29.938105] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f3eb0 is same with the state(6) to be set
00:08:58.344  [2024-12-13 18:53:29.938113] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f3eb0 is same with the state(6) to be set
00:08:58.344  [2024-12-13 18:53:29.938121] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f3eb0 is same with the state(6) to be set
00:08:58.344  [2024-12-13 18:53:29.938129] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f3eb0 is same with the state(6) to be set
00:08:58.344  [2024-12-13 18:53:29.938137] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f3eb0 is same with the state(6) to be set
00:08:58.345  [2024-12-13 18:53:29.938144] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f3eb0 is same with the state(6) to be set
00:08:58.345  [2024-12-13 18:53:29.938152] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f3eb0 is same with the state(6) to be set
00:08:58.345  [2024-12-13 18:53:29.938159] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f3eb0 is same with the state(6) to be set
00:08:58.345  [2024-12-13 18:53:29.938167] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f3eb0 is same with the state(6) to be set
00:08:58.345  [2024-12-13 18:53:29.938175] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f3eb0 is same with the state(6) to be set
00:08:58.345  [2024-12-13 18:53:29.938182] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f3eb0 is same with the state(6) to be set
00:08:58.345  [2024-12-13 18:53:29.938190] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f3eb0 is same with the state(6) to be set
00:08:58.345  [2024-12-13 18:53:29.938198] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f3eb0 is same with the state(6) to be set
00:08:58.345  [2024-12-13 18:53:29.938205] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f3eb0 is same with the state(6) to be set
00:08:58.345  [2024-12-13 18:53:29.938213] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f3eb0 is same with the state(6) to be set
00:08:58.345  [2024-12-13 18:53:29.938356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:58.345  [2024-12-13 18:53:29.938397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:58.345  [2024-12-13 18:53:29.938421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:58.345  [2024-12-13 18:53:29.938432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:58.345  [2024-12-13 18:53:29.938444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:58.345  [2024-12-13 18:53:29.938454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:58.345  [2024-12-13 18:53:29.938465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:58.345  [2024-12-13 18:53:29.938474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:58.345  [2024-12-13 18:53:29.938485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:58.345  [2024-12-13 18:53:29.938494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:58.345  [2024-12-13 18:53:29.938506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:58.345  [2024-12-13 18:53:29.938515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:58.345  [2024-12-13 18:53:29.938526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:58.345  [2024-12-13 18:53:29.938536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:58.345  [2024-12-13 18:53:29.938547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:58.345  [2024-12-13 18:53:29.938556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:58.345  [2024-12-13 18:53:29.938566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:58.345  [2024-12-13 18:53:29.938576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:58.345  [2024-12-13 18:53:29.938586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:58.345  [2024-12-13 18:53:29.938596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:58.345  [2024-12-13 18:53:29.938607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:58.345  [2024-12-13 18:53:29.938616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:58.345  [2024-12-13 18:53:29.938642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:58.345  [2024-12-13 18:53:29.938651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:58.345  [2024-12-13 18:53:29.938661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:58.345  [2024-12-13 18:53:29.938670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:58.345  [2024-12-13 18:53:29.938681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:58.345  [2024-12-13 18:53:29.938690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:58.345  [2024-12-13 18:53:29.938700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:58.345  [2024-12-13 18:53:29.938717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:58.345  [2024-12-13 18:53:29.938728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:58.345  [2024-12-13 18:53:29.938737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:58.345  [2024-12-13 18:53:29.938747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:58.345  [2024-12-13 18:53:29.938757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:58.345  [2024-12-13 18:53:29.938767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:58.345  [2024-12-13 18:53:29.938776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:58.345  [2024-12-13 18:53:29.938787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:58.345  [2024-12-13 18:53:29.938796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:58.345  [2024-12-13 18:53:29.938806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:58.345  [2024-12-13 18:53:29.938815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:58.345  [2024-12-13 18:53:29.938826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:58.345  [2024-12-13 18:53:29.938835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:58.345  [2024-12-13 18:53:29.938845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:58.345  [2024-12-13 18:53:29.938854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:58.345  [2024-12-13 18:53:29.938865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:58.345  [2024-12-13 18:53:29.938873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:58.345  [2024-12-13 18:53:29.938884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:58.345  [2024-12-13 18:53:29.938892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:58.345  [2024-12-13 18:53:29.938903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:58.345  [2024-12-13 18:53:29.938912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:58.345  [2024-12-13 18:53:29.938922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:58.345  [2024-12-13 18:53:29.938931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:58.345  [2024-12-13 18:53:29.938942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:58.345  [2024-12-13 18:53:29.938951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:58.345  [2024-12-13 18:53:29.938961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:58.345  [2024-12-13 18:53:29.938970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:58.345  [2024-12-13 18:53:29.938981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:58.345  [2024-12-13 18:53:29.938990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:58.345  [2024-12-13 18:53:29.939000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:58.345  [2024-12-13 18:53:29.939009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:58.345  [2024-12-13 18:53:29.939020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:58.345  [2024-12-13 18:53:29.939033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:58.345  [2024-12-13 18:53:29.939045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:58.345  [2024-12-13 18:53:29.939053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:58.345  [2024-12-13 18:53:29.939064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:58.345  [2024-12-13 18:53:29.939074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:58.345  [2024-12-13 18:53:29.939085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:58.345  [2024-12-13 18:53:29.939094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:58.345  [2024-12-13 18:53:29.939105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:58.345  [2024-12-13 18:53:29.939114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:58.345  [2024-12-13 18:53:29.939124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:58.346  [2024-12-13 18:53:29.939133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:58.346  [2024-12-13 18:53:29.939144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:58.346  [2024-12-13 18:53:29.939153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:58.346  [2024-12-13 18:53:29.939163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:58.346  [2024-12-13 18:53:29.939172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:58.346  [2024-12-13 18:53:29.939182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:58.346  [2024-12-13 18:53:29.939191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:58.346  [2024-12-13 18:53:29.939202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:58.346  [2024-12-13 18:53:29.939212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:58.346  [2024-12-13 18:53:29.939222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:58.346  [2024-12-13 18:53:29.939258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:58.346  [2024-12-13 18:53:29.939272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:58.346  [2024-12-13 18:53:29.939281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:58.346  [2024-12-13 18:53:29.939293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:58.346  [2024-12-13 18:53:29.939302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:58.346  [2024-12-13 18:53:29.939313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:58.346  [2024-12-13 18:53:29.939322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:58.346  [2024-12-13 18:53:29.939333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:58.346  [2024-12-13 18:53:29.939347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:58.346  [2024-12-13 18:53:29.939358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:58.346  [2024-12-13 18:53:29.939367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:58.346  [2024-12-13 18:53:29.939378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:58.346  [2024-12-13 18:53:29.939399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:58.346  [2024-12-13 18:53:29.939412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:58.346  [2024-12-13 18:53:29.939422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:58.346  [2024-12-13 18:53:29.939433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:58.346  [2024-12-13 18:53:29.939443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:58.346  [2024-12-13 18:53:29.939454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:58.346  [2024-12-13 18:53:29.939464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:58.346  [2024-12-13 18:53:29.939475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:58.346  [2024-12-13 18:53:29.939484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:58.346  [2024-12-13 18:53:29.939495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:58.346  [2024-12-13 18:53:29.939504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:58.346  [2024-12-13 18:53:29.939516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:58.346  [2024-12-13 18:53:29.939525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:58.346  [2024-12-13 18:53:29.939536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:58.346  [2024-12-13 18:53:29.939553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:58.346  [2024-12-13 18:53:29.939564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:58.346  [2024-12-13 18:53:29.939573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:58.346  [2024-12-13 18:53:29.939584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:58.346  [2024-12-13 18:53:29.939594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:58.346  [2024-12-13 18:53:29.939620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:58.346  [2024-12-13 18:53:29.939629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:58.346  [2024-12-13 18:53:29.939639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:58.346  [2024-12-13 18:53:29.939649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:58.346  [2024-12-13 18:53:29.939659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:58.346  [2024-12-13 18:53:29.939668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:58.346  [2024-12-13 18:53:29.939679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:58.346  [2024-12-13 18:53:29.939687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:58.346  [2024-12-13 18:53:29.939698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:58.346  [2024-12-13 18:53:29.939707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:58.346  [2024-12-13 18:53:29.939718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:58.346  [2024-12-13 18:53:29.939726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:58.346  [2024-12-13 18:53:29.939737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:58.346  [2024-12-13 18:53:29.939751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:58.346  [2024-12-13 18:53:29.939762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:58.346  [2024-12-13 18:53:29.939771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:58.346  [2024-12-13 18:53:29.939781] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc95820 is same with the state(6) to be set
00:08:58.346  [2024-12-13 18:53:29.941050] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:08:58.346  task offset: 81920 on job bdev=Nvme0n1 fails
00:08:58.346  Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:08:58.346  Job: Nvme0n1 ended in about 0.45 seconds with error
00:08:58.346  
00:08:58.346                                                                                                  Latency(us)
[2024-12-13T18:53:30.170Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:08:58.346  	 Verification LBA range: start 0x0 length 0x400
00:08:58.346  	 Nvme0n1             :       0.45    1432.02      89.50     143.20     0.00   39028.72    5034.36   42896.29
00:08:58.346  
[2024-12-13T18:53:30.170Z]  ===================================================================================================================
[2024-12-13T18:53:30.170Z]  Total                       :               1432.02      89.50     143.20     0.00   39028.72    5034.36   42896.29
00:08:58.346  [2024-12-13 18:53:29.943158] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:08:58.346  [2024-12-13 18:53:29.943190] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa242d0 (9): Bad file descriptor
00:08:58.346  [2024-12-13 18:53:29.944779] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0'
00:08:58.346  [2024-12-13 18:53:29.944946] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:08:58.346  [2024-12-13 18:53:29.944977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:58.346  [2024-12-13 18:53:29.944997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0
00:08:58.346  [2024-12-13 18:53:29.945008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132
00:08:58.346  [2024-12-13 18:53:29.945018] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:08:58.346  [2024-12-13 18:53:29.945027] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa242d0
00:08:58.346   18:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:58.346  [2024-12-13 18:53:29.945066] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa242d0 (9): Bad file descriptor
00:08:58.346  [2024-12-13 18:53:29.945085] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:08:58.346  [2024-12-13 18:53:29.945095] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:08:58.346  [2024-12-13 18:53:29.945105] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:08:58.346  [2024-12-13 18:53:29.945116] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:08:58.346   18:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:08:58.346   18:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:58.346   18:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:08:58.347   18:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:58.347   18:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
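host_management.sh@84-@87 are the host-management check itself: nvmf_subsystem_remove_host revokes host0's access while bdevperf is mid-run, which is what produced the flood of ABORTED - SQ DELETION completions and the "does not allow host" reconnect failure above, and nvmf_subsystem_add_host then restores access before the second perf run. The same sequence as plain rpc.py calls (sketch; socket path and NQNs as in this log):

  # Revoke and restore host access around a live connection.
  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
  $RPC nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0   # drops the live qpairs
  sleep 1                                                                                # let the failure propagate
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0      # re-admit the host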
00:08:59.286   18:53:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 80032
00:08:59.286  /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (80032) - No such process
00:08:59.286   18:53:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true
00:08:59.286   18:53:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
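host_management.sh@91-@97 clean up after the forced failure: the first bdevperf has already exited, so kill -9 reports "No such process" and the fallback to true on @91 keeps errexit from aborting the test before the stale CPU core lock files are removed. The pattern in isolation:

  # Tolerant teardown of a perf process that may already be gone (sketch).
  kill -9 "$perfpid" || true
  rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 \
        /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004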
00:08:59.286   18:53:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:08:59.286    18:53:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:08:59.286    18:53:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=()
00:08:59.286    18:53:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config
00:08:59.286    18:53:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:08:59.286    18:53:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:08:59.286  {
00:08:59.286    "params": {
00:08:59.286      "name": "Nvme$subsystem",
00:08:59.286      "trtype": "$TEST_TRANSPORT",
00:08:59.286      "traddr": "$NVMF_FIRST_TARGET_IP",
00:08:59.286      "adrfam": "ipv4",
00:08:59.286      "trsvcid": "$NVMF_PORT",
00:08:59.286      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:08:59.286      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:08:59.286      "hdgst": ${hdgst:-false},
00:08:59.286      "ddgst": ${ddgst:-false}
00:08:59.286    },
00:08:59.286    "method": "bdev_nvme_attach_controller"
00:08:59.286  }
00:08:59.286  EOF
00:08:59.286  )")
00:08:59.286     18:53:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat
00:08:59.286    18:53:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq .
00:08:59.286     18:53:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=,
00:08:59.286     18:53:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:08:59.286    "params": {
00:08:59.286      "name": "Nvme0",
00:08:59.286      "trtype": "tcp",
00:08:59.286      "traddr": "10.0.0.3",
00:08:59.286      "adrfam": "ipv4",
00:08:59.286      "trsvcid": "4420",
00:08:59.286      "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:08:59.286      "hostnqn": "nqn.2016-06.io.spdk:host0",
00:08:59.286      "hdgst": false,
00:08:59.286      "ddgst": false
00:08:59.286    },
00:08:59.286    "method": "bdev_nvme_attach_controller"
00:08:59.286  }'
00:08:59.286  [2024-12-13 18:53:31.024601] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:08:59.286  [2024-12-13 18:53:31.024724] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80078 ]
00:08:59.545  [2024-12-13 18:53:31.174978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:59.545  [2024-12-13 18:53:31.206851] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:08:59.803  Running I/O for 1 seconds...
00:09:00.739       1664.00 IOPS,   104.00 MiB/s
00:09:00.739                                                                                                  Latency(us)
00:09:00.739  
[2024-12-13T18:53:32.563Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:09:00.739  Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:09:00.739  	 Verification LBA range: start 0x0 length 0x400
00:09:00.739  	 Nvme0n1             :       1.01    1716.32     107.27       0.00     0.00   36591.92    5272.67   32887.16
00:09:00.739  
[2024-12-13T18:53:32.563Z]  ===================================================================================================================
00:09:00.739  
[2024-12-13T18:53:32.563Z]  Total                       :               1716.32     107.27       0.00     0.00   36591.92    5272.67   32887.16
00:09:00.998   18:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:09:00.998   18:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:09:00.998   18:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf
00:09:00.998   18:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt
00:09:00.998   18:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:09:00.998   18:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
00:09:00.998   18:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:09:00.998   18:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:09:00.998   18:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:09:00.998   18:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:09:00.998   18:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:09:00.998  rmmod nvme_tcp
00:09:00.998  rmmod nvme_fabrics
00:09:00.998  rmmod nvme_keyring
00:09:00.998   18:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:09:00.998   18:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e
00:09:00.998   18:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0
00:09:00.998   18:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 79967 ']'
00:09:00.998   18:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 79967
00:09:00.998   18:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 79967 ']'
00:09:00.998   18:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 79967
00:09:00.998    18:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname
00:09:00.999   18:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:00.999    18:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79967
00:09:00.999   18:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:09:00.999   18:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:09:00.999  killing process with pid 79967
00:09:00.999   18:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79967'
00:09:00.999   18:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 79967
00:09:00.999   18:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 79967
00:09:01.257  [2024-12-13 18:53:32.931751] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2
00:09:01.257   18:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:09:01.257   18:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:09:01.257   18:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:09:01.257   18:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr
00:09:01.257   18:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save
00:09:01.257   18:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:09:01.257   18:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore
00:09:01.257   18:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:09:01.257   18:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:09:01.257   18:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:09:01.257   18:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:09:01.257   18:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:09:01.257   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:09:01.257   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:09:01.257   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:09:01.257   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:09:01.257   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:09:01.257   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:09:01.516   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:09:01.516   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:09:01.516   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:09:01.516   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:09:01.516   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns
00:09:01.516   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:01.516   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:09:01.516    18:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:01.516   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0
00:09:01.516   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT
00:09:01.516  
00:09:01.516  real	0m5.312s
00:09:01.516  user	0m19.110s
00:09:01.516  sys	0m1.424s
00:09:01.516   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:01.516  ************************************
00:09:01.516  END TEST nvmf_host_management
00:09:01.516  ************************************
00:09:01.516   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
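stoptarget plus nvmftestfini above reduce to: drop the per-job state files, unload the NVMe-oF host modules, kill the target, and tear the veth topology back down. Condensed into plain commands, the sequence performed in this run was roughly:

# Condensed teardown as performed above (paths and pid are from this run; the
# "wait" only succeeds in the shell that launched nvmf_tgt).
rm -f ./local-job0-0-verify.state
rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf \
       /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt
sync
modprobe -v -r nvme-tcp          # drops nvme_tcp, nvme_fabrics, nvme_keyring
modprobe -v -r nvme-fabrics
kill 79967 && wait 79967         # stop the nvmf_tgt reactor process
# nvmf_veth_fini then detaches the bridge ports, deletes nvmf_br, the veth
# pairs and the nvmf_tgt_ns_spdk namespace, and restores the saved iptables rules.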
00:09:01.516   18:53:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp
00:09:01.516   18:53:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:09:01.516   18:53:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:01.516   18:53:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:09:01.516  ************************************
00:09:01.516  START TEST nvmf_lvol
00:09:01.516  ************************************
00:09:01.516   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp
00:09:01.516  * Looking for test storage...
00:09:01.516  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:09:01.516    18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:09:01.516     18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version
00:09:01.516     18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:09:01.776    18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:09:01.776    18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:01.776    18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:01.776    18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:01.776    18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-:
00:09:01.776    18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1
00:09:01.776    18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-:
00:09:01.776    18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2
00:09:01.776    18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<'
00:09:01.776    18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2
00:09:01.776    18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1
00:09:01.776    18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:01.776    18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in
00:09:01.776    18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1
00:09:01.776    18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:01.776    18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:01.776     18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1
00:09:01.776     18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1
00:09:01.776     18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:01.776     18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1
00:09:01.776    18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1
00:09:01.776     18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2
00:09:01.776     18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2
00:09:01.776     18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:01.776     18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2
00:09:01.776    18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2
00:09:01.776    18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:01.776    18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:01.776    18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0
00:09:01.776    18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:01.776    18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:09:01.776  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:01.776  		--rc genhtml_branch_coverage=1
00:09:01.776  		--rc genhtml_function_coverage=1
00:09:01.776  		--rc genhtml_legend=1
00:09:01.776  		--rc geninfo_all_blocks=1
00:09:01.776  		--rc geninfo_unexecuted_blocks=1
00:09:01.776  		
00:09:01.776  		'
00:09:01.776    18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:09:01.776  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:01.776  		--rc genhtml_branch_coverage=1
00:09:01.776  		--rc genhtml_function_coverage=1
00:09:01.776  		--rc genhtml_legend=1
00:09:01.776  		--rc geninfo_all_blocks=1
00:09:01.776  		--rc geninfo_unexecuted_blocks=1
00:09:01.776  		
00:09:01.776  		'
00:09:01.776    18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:09:01.776  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:01.776  		--rc genhtml_branch_coverage=1
00:09:01.776  		--rc genhtml_function_coverage=1
00:09:01.776  		--rc genhtml_legend=1
00:09:01.776  		--rc geninfo_all_blocks=1
00:09:01.776  		--rc geninfo_unexecuted_blocks=1
00:09:01.776  		
00:09:01.776  		'
00:09:01.776    18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:09:01.776  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:01.776  		--rc genhtml_branch_coverage=1
00:09:01.776  		--rc genhtml_function_coverage=1
00:09:01.776  		--rc genhtml_legend=1
00:09:01.776  		--rc geninfo_all_blocks=1
00:09:01.776  		--rc geninfo_unexecuted_blocks=1
00:09:01.776  		
00:09:01.776  		'
00:09:01.776   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:09:01.776     18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s
00:09:01.776    18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:09:01.776    18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:09:01.776    18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:09:01.776    18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:09:01.776    18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:09:01.776    18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:09:01.776    18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:09:01.777    18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:09:01.777    18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:09:01.777     18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:09:01.777    18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:09:01.777    18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:09:01.777    18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:09:01.777    18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:09:01.777    18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:09:01.777    18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:09:01.777    18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:09:01.777     18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob
00:09:01.777     18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:09:01.777     18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:09:01.777     18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:09:01.777      18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:01.777      18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:01.777      18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:01.777      18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH
00:09:01.777      18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:01.777    18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0
00:09:01.777    18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:09:01.777    18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:09:01.777    18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:09:01.777    18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:09:01.777    18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:09:01.777    18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:09:01.777  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:09:01.777    18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:09:01.777    18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:09:01.777    18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0
00:09:01.777   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64
00:09:01.777   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:09:01.777   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20
00:09:01.777   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30
00:09:01.777   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:09:01.777   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit
00:09:01.777   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:09:01.777   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:09:01.777   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs
00:09:01.777   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no
00:09:01.777   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns
00:09:01.777   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:01.777   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:09:01.777    18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:01.777   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:09:01.777   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:09:01.777   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:09:01.777   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:09:01.777   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:09:01.777   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init
00:09:01.777   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:09:01.777   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:09:01.777   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:09:01.777   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:09:01.777   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:09:01.777   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:09:01.777   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:09:01.777   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:09:01.777   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:09:01.777   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:09:01.777   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:09:01.777   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:09:01.777   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:09:01.777   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:09:01.777   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:09:01.777   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:09:01.777   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:09:01.777  Cannot find device "nvmf_init_br"
00:09:01.777   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true
00:09:01.777   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:09:01.777  Cannot find device "nvmf_init_br2"
00:09:01.777   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true
00:09:01.777   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:09:01.777  Cannot find device "nvmf_tgt_br"
00:09:01.777   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true
00:09:01.777   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:09:01.777  Cannot find device "nvmf_tgt_br2"
00:09:01.777   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true
00:09:01.777   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:09:01.777  Cannot find device "nvmf_init_br"
00:09:01.777   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true
00:09:01.777   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:09:01.777  Cannot find device "nvmf_init_br2"
00:09:01.777   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true
00:09:01.777   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:09:01.777  Cannot find device "nvmf_tgt_br"
00:09:01.777   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true
00:09:01.777   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:09:01.777  Cannot find device "nvmf_tgt_br2"
00:09:01.777   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true
00:09:01.777   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:09:01.777  Cannot find device "nvmf_br"
00:09:01.777   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true
00:09:01.777   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:09:01.777  Cannot find device "nvmf_init_if"
00:09:01.777   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true
00:09:01.777   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:09:01.777  Cannot find device "nvmf_init_if2"
00:09:01.777   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true
00:09:01.777   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:09:01.777  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:09:01.777   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true
00:09:01.777   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:09:01.777  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:09:01.777   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true
00:09:01.777   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:09:01.778   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:09:02.037   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:09:02.037   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:09:02.037   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:09:02.037   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:09:02.037   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:09:02.037   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:09:02.037   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:09:02.037   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:09:02.037   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:09:02.037   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:09:02.037   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:09:02.037   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:09:02.037   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:09:02.037   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:09:02.037   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:09:02.037   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:09:02.037   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:09:02.037   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:09:02.037   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:09:02.037   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:09:02.037   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:09:02.037   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:09:02.037   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:09:02.037   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:09:02.037   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:09:02.037   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:09:02.037   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:09:02.037   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:09:02.037   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:09:02.037   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:09:02.037   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:09:02.037  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:09:02.037  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms
00:09:02.037  
00:09:02.037  --- 10.0.0.3 ping statistics ---
00:09:02.037  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:02.037  rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms
00:09:02.037   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:09:02.037  PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:09:02.037  64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.033 ms
00:09:02.037  
00:09:02.037  --- 10.0.0.4 ping statistics ---
00:09:02.037  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:02.037  rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms
00:09:02.037   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:09:02.037  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:09:02.037  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms
00:09:02.037  
00:09:02.037  --- 10.0.0.1 ping statistics ---
00:09:02.037  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:02.037  rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms
00:09:02.037   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:09:02.037  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:09:02.037  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms
00:09:02.037  
00:09:02.037  --- 10.0.0.2 ping statistics ---
00:09:02.037  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:02.037  rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms
00:09:02.037   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:09:02.037   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@461 -- # return 0
00:09:02.037   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:09:02.037   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:09:02.037   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:09:02.037   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:09:02.037   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:09:02.037   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:09:02.037   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp
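nvmf_veth_init above builds a small bridged topology: two initiator veth pairs stay in the root namespace (10.0.0.1 and 10.0.0.2), two target pairs are moved into nvmf_tgt_ns_spdk (10.0.0.3 and 10.0.0.4), their bridge-side peers are enslaved to nvmf_br, and port 4420 is opened in iptables; the four pings confirm reachability in both directions. A minimal single-pair sketch of the same construction, not the full four-interface setup used here:

# Minimal one-initiator/one-target variant of the topology built above.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator pair
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3               # root namespace -> target namespace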
00:09:02.037   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7
00:09:02.037   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:09:02.037   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable
00:09:02.037   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:09:02.037   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=80343
00:09:02.037   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7
00:09:02.037   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 80343
00:09:02.037   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 80343 ']'
00:09:02.037   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:02.037   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:02.037  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:02.037   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:02.037   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:02.037   18:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
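nvmfappstart then launches the target inside that namespace and blocks until the RPC socket answers. Stripped of the harness wrappers, the launch in this run amounts to the sketch below; the readiness loop is a simplified stand-in for waitforlisten, which also probes the socket with rpc.py:

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
nvmfpid=$!
# wait for the UNIX-domain RPC listener (default path, as used by waitforlisten)
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done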
00:09:02.296  [2024-12-13 18:53:33.881127] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:09:02.296  [2024-12-13 18:53:33.881245] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:09:02.296  [2024-12-13 18:53:34.025788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:09:02.296  [2024-12-13 18:53:34.059670] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:09:02.296  [2024-12-13 18:53:34.059752] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:09:02.296  [2024-12-13 18:53:34.059778] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:09:02.296  [2024-12-13 18:53:34.059793] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:09:02.296  [2024-12-13 18:53:34.059799] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:09:02.296  [2024-12-13 18:53:34.060974] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:09:02.296  [2024-12-13 18:53:34.061110] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:09:02.296  [2024-12-13 18:53:34.061107] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:09:02.554   18:53:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:02.554   18:53:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0
00:09:02.554   18:53:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:09:02.554   18:53:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable
00:09:02.554   18:53:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:09:02.554   18:53:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:09:02.554   18:53:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:09:02.819  [2024-12-13 18:53:34.523062] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:09:02.819    18:53:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:09:03.080   18:53:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 '
00:09:03.080    18:53:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:09:03.645   18:53:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1
00:09:03.645   18:53:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
00:09:03.645    18:53:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs
00:09:03.903   18:53:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=60b46b16-22c3-427a-90d2-6a05165823da
00:09:03.903    18:53:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 60b46b16-22c3-427a-90d2-6a05165823da lvol 20
00:09:04.161   18:53:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=a55a0e44-6af7-4f9b-a879-70b96c0c9869
00:09:04.161   18:53:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:09:04.728   18:53:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a55a0e44-6af7-4f9b-a879-70b96c0c9869
00:09:04.728   18:53:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
00:09:04.986  [2024-12-13 18:53:36.744470] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:09:04.986   18:53:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
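The rpc.py calls above assemble the whole data path before the perf run: a TCP transport, two 64 MB/512 B malloc bdevs striped into raid0, an lvstore on the raid, a 20 MB lvol, and a subsystem exposing that lvol on 10.0.0.3:4420. Replayed back to back, with the create calls returning the UUIDs the later steps consume:

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc_py nvmf_create_transport -t tcp -o -u 8192
$rpc_py bdev_malloc_create 64 512                      # -> Malloc0
$rpc_py bdev_malloc_create 64 512                      # -> Malloc1
$rpc_py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc_py bdev_lvol_create_lvstore raid0 lvs)      # lvstore UUID
lvol=$($rpc_py bdev_lvol_create -u "$lvs" lvol 20)     # 20 MB lvol UUID
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
$rpc_py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420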
00:09:05.245   18:53:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=80478
00:09:05.245   18:53:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18
00:09:05.245   18:53:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1
00:09:06.179    18:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot a55a0e44-6af7-4f9b-a879-70b96c0c9869 MY_SNAPSHOT
00:09:06.745   18:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=8eb3aedd-0a57-4b62-8b35-afbe01d3dbfb
00:09:06.745   18:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize a55a0e44-6af7-4f9b-a879-70b96c0c9869 30
00:09:07.003    18:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 8eb3aedd-0a57-4b62-8b35-afbe01d3dbfb MY_CLONE
00:09:07.261   18:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=f223ba93-a91a-4478-bc9a-78231316dbfd
00:09:07.261   18:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate f223ba93-a91a-4478-bc9a-78231316dbfd
00:09:07.827   18:53:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 80478
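While spdk_nvme_perf drives 4 KiB random writes at queue depth 128 from cores 3 and 4 (-c 0x18), the script exercises the lvol stack underneath the live workload: snapshot, grow from 20 to 30 MB, clone the snapshot, inflate the clone. Taken in isolation, those steps are:

# Lvol operations performed while the perf job was in flight (UUIDs from this run).
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
snap=$($rpc_py bdev_lvol_snapshot a55a0e44-6af7-4f9b-a879-70b96c0c9869 MY_SNAPSHOT)
$rpc_py bdev_lvol_resize a55a0e44-6af7-4f9b-a879-70b96c0c9869 30    # 20 MB -> 30 MB
clone=$($rpc_py bdev_lvol_clone "$snap" MY_CLONE)
$rpc_py bdev_lvol_inflate "$clone"    # copy clusters so the clone no longer depends on the snapshot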
00:09:15.964  Initializing NVMe Controllers
00:09:15.964  Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0
00:09:15.964  Controller IO queue size 128, less than required.
00:09:15.964  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:09:15.964  Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3
00:09:15.964  Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4
00:09:15.964  Initialization complete. Launching workers.
00:09:15.964  ========================================================
00:09:15.964                                                                                                               Latency(us)
00:09:15.964  Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:09:15.964  TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core  3:   11263.37      44.00   11365.62    1510.95   66463.83
00:09:15.964  TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core  4:   11399.76      44.53   11228.52    3296.94   74132.49
00:09:15.964  ========================================================
00:09:15.964  Total                                                                    :   22663.13      88.53   11296.66    1510.95   74132.49
00:09:15.964  
00:09:15.964   18:53:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:09:15.964   18:53:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete a55a0e44-6af7-4f9b-a879-70b96c0c9869
00:09:16.222   18:53:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 60b46b16-22c3-427a-90d2-6a05165823da
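Teardown mirrors setup in reverse order: delete the subsystem first so no host still reaches the namespace, then the lvol, then the lvstore it lived on:

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc_py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
$rpc_py bdev_lvol_delete a55a0e44-6af7-4f9b-a879-70b96c0c9869              # lvol from this run
$rpc_py bdev_lvol_delete_lvstore -u 60b46b16-22c3-427a-90d2-6a05165823da   # its lvstore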
00:09:16.481   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:09:16.481   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:09:16.481   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:09:16.481   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup
00:09:16.481   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync
00:09:16.481   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:09:16.481   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e
00:09:16.481   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20}
00:09:16.481   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:09:16.481  rmmod nvme_tcp
00:09:16.481  rmmod nvme_fabrics
00:09:16.481  rmmod nvme_keyring
00:09:16.481   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:09:16.481   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e
00:09:16.481   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0
00:09:16.481   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 80343 ']'
00:09:16.481   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 80343
00:09:16.481   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 80343 ']'
00:09:16.481   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 80343
00:09:16.481    18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname
00:09:16.481   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:16.481    18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80343
00:09:16.481  killing process with pid 80343
00:09:16.481   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:16.481   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:16.481   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80343'
00:09:16.481   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 80343
00:09:16.481   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 80343
00:09:16.740   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:09:16.740   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:09:16.740   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:09:16.740   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr
00:09:16.740   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:09:16.740   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save
00:09:16.740   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore
00:09:16.740   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:09:16.740   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:09:16.740   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:09:16.740   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:09:16.740   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:09:16.740   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:09:16.740   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:09:16.740   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:09:16.999   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:09:16.999   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:09:16.999   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:09:16.999   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:09:16.999   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:09:16.999   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:09:16.999   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:09:16.999   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns
00:09:16.999   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:16.999   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:09:16.999    18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:16.999   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0
00:09:16.999  
00:09:16.999  real	0m15.462s
00:09:16.999  user	1m4.505s
00:09:16.999  sys	0m3.768s
00:09:16.999   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:16.999   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:09:16.999  ************************************
00:09:16.999  END TEST nvmf_lvol
00:09:16.999  ************************************
00:09:16.999   18:53:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp
00:09:16.999   18:53:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:09:16.999   18:53:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:16.999   18:53:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:09:16.999  ************************************
00:09:16.999  START TEST nvmf_lvs_grow
00:09:16.999  ************************************
00:09:16.999   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp
00:09:17.258  * Looking for test storage...
00:09:17.258  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:09:17.258    18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:09:17.258     18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version
00:09:17.258     18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:09:17.258    18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:09:17.258    18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:17.258    18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:17.258    18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:17.258    18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-:
00:09:17.258    18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1
00:09:17.258    18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-:
00:09:17.258    18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2
00:09:17.258    18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<'
00:09:17.258    18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2
00:09:17.258    18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1
00:09:17.258    18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:17.258    18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in
00:09:17.258    18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1
00:09:17.258    18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:17.258    18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:17.258     18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1
00:09:17.258     18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1
00:09:17.258     18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:17.258     18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1
00:09:17.258    18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1
00:09:17.258     18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2
00:09:17.258     18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2
00:09:17.258     18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:17.258     18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2
00:09:17.259    18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2
00:09:17.259    18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:17.259    18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:17.259    18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0
00:09:17.259    18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:17.259    18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:09:17.259  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:17.259  		--rc genhtml_branch_coverage=1
00:09:17.259  		--rc genhtml_function_coverage=1
00:09:17.259  		--rc genhtml_legend=1
00:09:17.259  		--rc geninfo_all_blocks=1
00:09:17.259  		--rc geninfo_unexecuted_blocks=1
00:09:17.259  		
00:09:17.259  		'
00:09:17.259    18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:09:17.259  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:17.259  		--rc genhtml_branch_coverage=1
00:09:17.259  		--rc genhtml_function_coverage=1
00:09:17.259  		--rc genhtml_legend=1
00:09:17.259  		--rc geninfo_all_blocks=1
00:09:17.259  		--rc geninfo_unexecuted_blocks=1
00:09:17.259  		
00:09:17.259  		'
00:09:17.259    18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:09:17.259  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:17.259  		--rc genhtml_branch_coverage=1
00:09:17.259  		--rc genhtml_function_coverage=1
00:09:17.259  		--rc genhtml_legend=1
00:09:17.259  		--rc geninfo_all_blocks=1
00:09:17.259  		--rc geninfo_unexecuted_blocks=1
00:09:17.259  		
00:09:17.259  		'
00:09:17.259    18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:09:17.259  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:17.259  		--rc genhtml_branch_coverage=1
00:09:17.259  		--rc genhtml_function_coverage=1
00:09:17.259  		--rc genhtml_legend=1
00:09:17.259  		--rc geninfo_all_blocks=1
00:09:17.259  		--rc geninfo_unexecuted_blocks=1
00:09:17.259  		
00:09:17.259  		'
00:09:17.259   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:09:17.259     18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s
00:09:17.259    18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:09:17.259    18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:09:17.259    18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:09:17.259    18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:09:17.259    18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:09:17.259    18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:09:17.259    18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:09:17.259    18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:09:17.259    18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:09:17.259     18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:09:17.259    18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:09:17.259    18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:09:17.259    18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:09:17.259    18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:09:17.259    18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:09:17.259    18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:09:17.259    18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:09:17.259     18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob
00:09:17.259     18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:09:17.259     18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:09:17.259     18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:09:17.259      18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:17.259      18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:17.259      18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:17.259      18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH
00:09:17.259      18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:17.259    18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0
00:09:17.259    18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:09:17.259    18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:09:17.259    18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:09:17.259    18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:09:17.259    18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:09:17.259    18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:09:17.259  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:09:17.259    18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:09:17.259    18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:09:17.259    18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0
00:09:17.259   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:09:17.259   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:09:17.259   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit
00:09:17.259   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:09:17.259   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:09:17.259   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs
00:09:17.259   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no
00:09:17.259   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns
00:09:17.259   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:17.259   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:09:17.259    18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:17.259   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:09:17.259   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:09:17.259   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:09:17.259   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:09:17.259   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:09:17.259   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init
00:09:17.259   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:09:17.259   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:09:17.259   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:09:17.259   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:09:17.259   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:09:17.259   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:09:17.259   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:09:17.259   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:09:17.259   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:09:17.259   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:09:17.259   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:09:17.259   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:09:17.259   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:09:17.259   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:09:17.259   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:09:17.260   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:09:17.260   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:09:17.260  Cannot find device "nvmf_init_br"
00:09:17.260   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true
00:09:17.260   18:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:09:17.260  Cannot find device "nvmf_init_br2"
00:09:17.260   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true
00:09:17.260   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:09:17.260  Cannot find device "nvmf_tgt_br"
00:09:17.260   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true
00:09:17.260   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:09:17.260  Cannot find device "nvmf_tgt_br2"
00:09:17.260   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true
00:09:17.260   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:09:17.260  Cannot find device "nvmf_init_br"
00:09:17.260   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true
00:09:17.260   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:09:17.260  Cannot find device "nvmf_init_br2"
00:09:17.260   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true
00:09:17.260   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:09:17.260  Cannot find device "nvmf_tgt_br"
00:09:17.260   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true
00:09:17.260   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:09:17.260  Cannot find device "nvmf_tgt_br2"
00:09:17.260   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true
00:09:17.260   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:09:17.518  Cannot find device "nvmf_br"
00:09:17.518   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true
00:09:17.518   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:09:17.518  Cannot find device "nvmf_init_if"
00:09:17.518   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true
00:09:17.518   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:09:17.518  Cannot find device "nvmf_init_if2"
00:09:17.518   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true
00:09:17.518   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:09:17.518  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:09:17.518   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true
00:09:17.518   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:09:17.518  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:09:17.518   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true
00:09:17.518   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:09:17.518   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:09:17.518   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:09:17.518   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:09:17.518   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:09:17.518   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:09:17.518   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:09:17.518   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:09:17.518   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:09:17.518   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:09:17.518   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:09:17.518   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:09:17.518   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:09:17.518   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:09:17.518   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:09:17.518   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:09:17.518   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:09:17.518   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:09:17.518   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:09:17.518   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:09:17.518   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:09:17.518   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:09:17.518   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:09:17.518   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:09:17.518   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:09:17.518   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:09:17.518   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:09:17.518   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:09:17.518   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:09:17.518   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:09:17.518   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:09:17.518   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
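
The block above builds the virtual test network: a namespace for the target, veth pairs whose "br" ends are enslaved to a bridge, addresses on the 10.0.0.0/24 subnet, and iptables ACCEPT rules for the NVMe/TCP port. A condensed sketch using only the first veth pair of each kind (the trace also creates the *_if2/*_br2 mirrors, and its ipts wrapper adds an SPDK_NVMF comment to each rule):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side stays in the default netns
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target side moves into the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
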
00:09:17.518   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:09:17.518  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:09:17.518  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms
00:09:17.518  
00:09:17.518  --- 10.0.0.3 ping statistics ---
00:09:17.518  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:17.518  rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms
00:09:17.518   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:09:17.518  PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:09:17.519  64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.062 ms
00:09:17.519  
00:09:17.519  --- 10.0.0.4 ping statistics ---
00:09:17.519  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:17.519  rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms
00:09:17.519   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:09:17.519  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:09:17.519  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms
00:09:17.519  
00:09:17.519  --- 10.0.0.1 ping statistics ---
00:09:17.519  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:17.519  rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms
00:09:17.519   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:09:17.519  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:09:17.519  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms
00:09:17.519  
00:09:17.519  --- 10.0.0.2 ping statistics ---
00:09:17.519  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:17.519  rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms
00:09:17.519   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:09:17.519   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0
00:09:17.519   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:09:17.519   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:09:17.519   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:09:17.519   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:09:17.519   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:09:17.519   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:09:17.519   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:09:17.777   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1
00:09:17.777   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:09:17.777   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable
00:09:17.777   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:09:17.777   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=80904
00:09:17.777   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
00:09:17.777   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 80904
00:09:17.777   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 80904 ']'
00:09:17.777   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:17.777   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:17.777   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:17.777  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:17.777   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:17.777   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:09:17.777  [2024-12-13 18:53:49.425807] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:09:17.777  [2024-12-13 18:53:49.425898] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:09:17.777  [2024-12-13 18:53:49.575058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:18.036  [2024-12-13 18:53:49.607485] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:09:18.036  [2024-12-13 18:53:49.607557] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:09:18.036  [2024-12-13 18:53:49.607583] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:09:18.036  [2024-12-13 18:53:49.607591] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:09:18.036  [2024-12-13 18:53:49.607597] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:09:18.036  [2024-12-13 18:53:49.607985] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:09:18.036   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:18.036   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0
00:09:18.036   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:09:18.036   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable
00:09:18.036   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:09:18.036   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
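
nvmfappstart above launches nvmf_tgt inside the target namespace (pid 80904) and waits for its RPC socket before continuing. A simplified stand-in for that launch-and-wait sequence, using the paths from the trace; the polling loop is an illustration, not the actual waitforlisten implementation:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    # Poll the default RPC socket until the target answers (rpc_get_methods is a standard SPDK RPC).
    for _ in $(seq 1 200); do
        "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done
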
00:09:18.036   18:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:09:18.294  [2024-12-13 18:53:50.083986] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:09:18.294   18:53:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow
00:09:18.294   18:53:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:18.294   18:53:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:18.294   18:53:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:09:18.552  ************************************
00:09:18.552  START TEST lvs_grow_clean
00:09:18.552  ************************************
00:09:18.552   18:53:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow
00:09:18.552   18:53:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol
00:09:18.552   18:53:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters
00:09:18.552   18:53:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid
00:09:18.552   18:53:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200
00:09:18.552   18:53:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400
00:09:18.552   18:53:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150
00:09:18.552   18:53:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
00:09:18.552   18:53:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
00:09:18.552    18:53:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:09:18.810   18:53:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev
00:09:18.810    18:53:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
00:09:19.068   18:53:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=b23134d5-b8de-43be-a089-32d584d61baa
00:09:19.068    18:53:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b23134d5-b8de-43be-a089-32d584d61baa
00:09:19.068    18:53:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters'
00:09:19.327   18:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49
00:09:19.327   18:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 ))
00:09:19.327    18:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b23134d5-b8de-43be-a089-32d584d61baa lvol 150
00:09:19.585   18:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=2f2240b1-42e6-4dc0-85b9-feb46fb1081c
00:09:19.585   18:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
00:09:19.585   18:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev
00:09:19.844  [2024-12-13 18:53:51.562448] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400
00:09:19.844  [2024-12-13 18:53:51.562526] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1
00:09:19.844  true
00:09:19.844    18:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b23134d5-b8de-43be-a089-32d584d61baa
00:09:19.844    18:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters'
00:09:20.102   18:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 ))
00:09:20.102   18:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:09:20.361   18:53:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2f2240b1-42e6-4dc0-85b9-feb46fb1081c
00:09:20.619   18:53:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
00:09:20.878  [2024-12-13 18:53:52.583074] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:09:20.878   18:53:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
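
At this point the target exposes nqn.2016-06.io.spdk:cnode0 (backed by the 150M lvol) plus a discovery listener on 10.0.0.3:4420. This test drives I/O through bdevperf below rather than the kernel initiator, but the NVME_HOSTNQN/NVME_HOSTID values defined earlier would be used like this with nvme-cli (illustrative only, not part of the trace):

    nvme connect -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode0 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a \
        --hostid=8f716199-a3ae-4f70-9a3a-0556e5b7497a
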
00:09:21.136   18:53:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=81057
00:09:21.136   18:53:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z
00:09:21.137   18:53:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:09:21.137   18:53:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 81057 /var/tmp/bdevperf.sock
00:09:21.137   18:53:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 81057 ']'
00:09:21.137   18:53:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:09:21.137   18:53:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:21.137  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:09:21.137   18:53:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:09:21.137   18:53:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:21.137   18:53:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x
00:09:21.395  [2024-12-13 18:53:52.967391] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:09:21.395  [2024-12-13 18:53:52.967527] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81057 ]
00:09:21.395  [2024-12-13 18:53:53.121710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:21.395  [2024-12-13 18:53:53.160388] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:09:21.654   18:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:21.654   18:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0
00:09:21.654   18:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
00:09:21.912  Nvme0n1
00:09:21.912   18:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000
00:09:22.171  [
00:09:22.171    {
00:09:22.171      "aliases": [
00:09:22.171        "2f2240b1-42e6-4dc0-85b9-feb46fb1081c"
00:09:22.171      ],
00:09:22.171      "assigned_rate_limits": {
00:09:22.171        "r_mbytes_per_sec": 0,
00:09:22.171        "rw_ios_per_sec": 0,
00:09:22.171        "rw_mbytes_per_sec": 0,
00:09:22.171        "w_mbytes_per_sec": 0
00:09:22.171      },
00:09:22.171      "block_size": 4096,
00:09:22.171      "claimed": false,
00:09:22.171      "driver_specific": {
00:09:22.171        "mp_policy": "active_passive",
00:09:22.171        "nvme": [
00:09:22.171          {
00:09:22.171            "ctrlr_data": {
00:09:22.171              "ana_reporting": false,
00:09:22.171              "cntlid": 1,
00:09:22.171              "firmware_revision": "25.01",
00:09:22.171              "model_number": "SPDK bdev Controller",
00:09:22.171              "multi_ctrlr": true,
00:09:22.171              "oacs": {
00:09:22.171                "firmware": 0,
00:09:22.171                "format": 0,
00:09:22.171                "ns_manage": 0,
00:09:22.171                "security": 0
00:09:22.171              },
00:09:22.171              "serial_number": "SPDK0",
00:09:22.171              "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:09:22.171              "vendor_id": "0x8086"
00:09:22.171            },
00:09:22.171            "ns_data": {
00:09:22.171              "can_share": true,
00:09:22.171              "id": 1
00:09:22.171            },
00:09:22.171            "trid": {
00:09:22.171              "adrfam": "IPv4",
00:09:22.171              "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:09:22.171              "traddr": "10.0.0.3",
00:09:22.171              "trsvcid": "4420",
00:09:22.171              "trtype": "TCP"
00:09:22.171            },
00:09:22.171            "vs": {
00:09:22.171              "nvme_version": "1.3"
00:09:22.171            }
00:09:22.171          }
00:09:22.171        ]
00:09:22.171      },
00:09:22.171      "memory_domains": [
00:09:22.171        {
00:09:22.171          "dma_device_id": "system",
00:09:22.171          "dma_device_type": 1
00:09:22.171        }
00:09:22.171      ],
00:09:22.171      "name": "Nvme0n1",
00:09:22.171      "num_blocks": 38912,
00:09:22.171      "numa_id": -1,
00:09:22.171      "product_name": "NVMe disk",
00:09:22.171      "supported_io_types": {
00:09:22.171        "abort": true,
00:09:22.171        "compare": true,
00:09:22.171        "compare_and_write": true,
00:09:22.171        "copy": true,
00:09:22.171        "flush": true,
00:09:22.171        "get_zone_info": false,
00:09:22.171        "nvme_admin": true,
00:09:22.171        "nvme_io": true,
00:09:22.171        "nvme_io_md": false,
00:09:22.171        "nvme_iov_md": false,
00:09:22.171        "read": true,
00:09:22.171        "reset": true,
00:09:22.171        "seek_data": false,
00:09:22.171        "seek_hole": false,
00:09:22.171        "unmap": true,
00:09:22.171        "write": true,
00:09:22.171        "write_zeroes": true,
00:09:22.171        "zcopy": false,
00:09:22.171        "zone_append": false,
00:09:22.171        "zone_management": false
00:09:22.171      },
00:09:22.171      "uuid": "2f2240b1-42e6-4dc0-85b9-feb46fb1081c",
00:09:22.171      "zoned": false
00:09:22.171    }
00:09:22.171  ]
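
The JSON above is bdevperf's view of the attached controller: one NVMe disk of 38912 blocks at 4096 bytes per block, whose uuid matches the lvol created earlier. A quick way to pull just that field over the same RPC socket (a sketch; the jq usage mirrors the rest of the script):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 | jq -r '.[0].uuid'
    # expected, per the output above: 2f2240b1-42e6-4dc0-85b9-feb46fb1081c
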
00:09:22.171   18:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=81086
00:09:22.171   18:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:09:22.171   18:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2
00:09:22.171  Running I/O for 10 seconds...
00:09:23.546                                                                                                  Latency(us)
00:09:23.546  
[2024-12-13T18:53:55.370Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:09:23.546  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:23.546  	 Nvme0n1             :       1.00    7223.00      28.21       0.00     0.00       0.00       0.00       0.00
00:09:23.546  
[2024-12-13T18:53:55.370Z]  ===================================================================================================================
00:09:23.546  
[2024-12-13T18:53:55.370Z]  Total                       :               7223.00      28.21       0.00     0.00       0.00       0.00       0.00
00:09:23.546  
00:09:24.112   18:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b23134d5-b8de-43be-a089-32d584d61baa
00:09:24.369  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:24.369  	 Nvme0n1             :       2.00    7204.50      28.14       0.00     0.00       0.00       0.00       0.00
00:09:24.369  
[2024-12-13T18:53:56.193Z]  ===================================================================================================================
00:09:24.369  
[2024-12-13T18:53:56.193Z]  Total                       :               7204.50      28.14       0.00     0.00       0.00       0.00       0.00
00:09:24.369  
00:09:24.369  true
00:09:24.369    18:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters'
00:09:24.369    18:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b23134d5-b8de-43be-a089-32d584d61baa
00:09:24.936   18:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99
00:09:24.936   18:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 ))
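
The cluster count only moves from 49 to 99 after bdev_lvol_grow_lvstore; enlarging the backing file and rescanning the aio bdev is not enough on its own, which is exactly what the earlier "data_clusters == 49" check demonstrated. The grow path, condensed from the commands in the trace (paths and UUID copied from above):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    aio=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
    truncate -s 400M "$aio"                          # enlarge the 200M backing file
    "$rpc" bdev_aio_rescan aio_bdev                  # aio bdev picks up the new block count
    "$rpc" bdev_lvol_grow_lvstore -u b23134d5-b8de-43be-a089-32d584d61baa
    "$rpc" bdev_lvol_get_lvstores -u b23134d5-b8de-43be-a089-32d584d61baa | jq -r '.[0].total_data_clusters'   # now 99
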
00:09:24.936   18:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 81086
00:09:25.195  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:25.195  	 Nvme0n1             :       3.00    7183.67      28.06       0.00     0.00       0.00       0.00       0.00
00:09:25.195  
[2024-12-13T18:53:57.019Z]  ===================================================================================================================
00:09:25.195  
[2024-12-13T18:53:57.019Z]  Total                       :               7183.67      28.06       0.00     0.00       0.00       0.00       0.00
00:09:25.195  
00:09:26.131  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:26.131  	 Nvme0n1             :       4.00    7130.00      27.85       0.00     0.00       0.00       0.00       0.00
00:09:26.131  
[2024-12-13T18:53:57.955Z]  ===================================================================================================================
00:09:26.131  
[2024-12-13T18:53:57.955Z]  Total                       :               7130.00      27.85       0.00     0.00       0.00       0.00       0.00
00:09:26.131  
00:09:27.505  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:27.505  	 Nvme0n1             :       5.00    7110.40      27.77       0.00     0.00       0.00       0.00       0.00
00:09:27.505  
[2024-12-13T18:53:59.329Z]  ===================================================================================================================
00:09:27.505  
[2024-12-13T18:53:59.329Z]  Total                       :               7110.40      27.77       0.00     0.00       0.00       0.00       0.00
00:09:27.505  
00:09:28.439  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:28.439  	 Nvme0n1             :       6.00    7086.83      27.68       0.00     0.00       0.00       0.00       0.00
00:09:28.439  
[2024-12-13T18:54:00.263Z]  ===================================================================================================================
00:09:28.439  
[2024-12-13T18:54:00.263Z]  Total                       :               7086.83      27.68       0.00     0.00       0.00       0.00       0.00
00:09:28.439  
00:09:29.374  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:29.375  	 Nvme0n1             :       7.00    6970.00      27.23       0.00     0.00       0.00       0.00       0.00
00:09:29.375  
[2024-12-13T18:54:01.199Z]  ===================================================================================================================
00:09:29.375  
[2024-12-13T18:54:01.199Z]  Total                       :               6970.00      27.23       0.00     0.00       0.00       0.00       0.00
00:09:29.375  
00:09:30.336  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:30.336  	 Nvme0n1             :       8.00    6936.75      27.10       0.00     0.00       0.00       0.00       0.00
00:09:30.336  
[2024-12-13T18:54:02.160Z]  ===================================================================================================================
00:09:30.336  
[2024-12-13T18:54:02.160Z]  Total                       :               6936.75      27.10       0.00     0.00       0.00       0.00       0.00
00:09:30.336  
00:09:31.278  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:31.278  	 Nvme0n1             :       9.00    6919.00      27.03       0.00     0.00       0.00       0.00       0.00
00:09:31.278  
[2024-12-13T18:54:03.102Z]  ===================================================================================================================
00:09:31.278  
[2024-12-13T18:54:03.102Z]  Total                       :               6919.00      27.03       0.00     0.00       0.00       0.00       0.00
00:09:31.278  
00:09:32.213  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:32.213  	 Nvme0n1             :      10.00    6910.80      27.00       0.00     0.00       0.00       0.00       0.00
00:09:32.213  
[2024-12-13T18:54:04.037Z]  ===================================================================================================================
00:09:32.213  
[2024-12-13T18:54:04.037Z]  Total                       :               6910.80      27.00       0.00     0.00       0.00       0.00       0.00
00:09:32.213  
00:09:32.213  
00:09:32.213                                                                                                  Latency(us)
00:09:32.213  
[2024-12-13T18:54:04.037Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:09:32.213  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:32.213  	 Nvme0n1             :      10.01    6917.76      27.02       0.00     0.00   18497.20    7983.48  141081.13
00:09:32.213  
[2024-12-13T18:54:04.037Z]  ===================================================================================================================
00:09:32.213  
[2024-12-13T18:54:04.037Z]  Total                       :               6917.76      27.02       0.00     0.00   18497.20    7983.48  141081.13
00:09:32.213  {
00:09:32.213    "results": [
00:09:32.213      {
00:09:32.213        "job": "Nvme0n1",
00:09:32.213        "core_mask": "0x2",
00:09:32.213        "workload": "randwrite",
00:09:32.213        "status": "finished",
00:09:32.213        "queue_depth": 128,
00:09:32.213        "io_size": 4096,
00:09:32.213        "runtime": 10.008448,
00:09:32.213        "iops": 6917.755879832717,
00:09:32.213        "mibps": 27.022483905596552,
00:09:32.213        "io_failed": 0,
00:09:32.213        "io_timeout": 0,
00:09:32.213        "avg_latency_us": 18497.199885293514,
00:09:32.213        "min_latency_us": 7983.476363636363,
00:09:32.213        "max_latency_us": 141081.13454545455
00:09:32.213      }
00:09:32.213    ],
00:09:32.213    "core_count": 1
00:09:32.213  }
00:09:32.213   18:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 81057
00:09:32.213   18:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 81057 ']'
00:09:32.213   18:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 81057
00:09:32.213    18:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname
00:09:32.213   18:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:32.213    18:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81057
00:09:32.213  killing process with pid 81057
00:09:32.213  Received shutdown signal, test time was about 10.000000 seconds
00:09:32.213  
00:09:32.213                                                                                                  Latency(us)
00:09:32.213  
[2024-12-13T18:54:04.037Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:09:32.213  
[2024-12-13T18:54:04.037Z]  ===================================================================================================================
00:09:32.213  
[2024-12-13T18:54:04.037Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:09:32.213   18:54:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:09:32.213   18:54:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:09:32.213   18:54:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81057'
00:09:32.213   18:54:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 81057
00:09:32.213   18:54:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 81057
00:09:32.472   18:54:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420
00:09:32.731   18:54:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:09:32.989    18:54:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b23134d5-b8de-43be-a089-32d584d61baa
00:09:32.989    18:54:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters'
00:09:33.247   18:54:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61
00:09:33.247   18:54:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]]
00:09:33.247   18:54:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:09:33.504  [2024-12-13 18:54:05.136948] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs
00:09:33.504   18:54:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b23134d5-b8de-43be-a089-32d584d61baa
00:09:33.504   18:54:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0
00:09:33.504   18:54:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b23134d5-b8de-43be-a089-32d584d61baa
00:09:33.504   18:54:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:09:33.504   18:54:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:33.504    18:54:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:09:33.504   18:54:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:33.504    18:54:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:09:33.504   18:54:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:33.504   18:54:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:09:33.504   18:54:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]]
00:09:33.504   18:54:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b23134d5-b8de-43be-a089-32d584d61baa
00:09:33.761  2024/12/13 18:54:05 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:b23134d5-b8de-43be-a089-32d584d61baa], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device
00:09:33.761  request:
00:09:33.761  {
00:09:33.761    "method": "bdev_lvol_get_lvstores",
00:09:33.761    "params": {
00:09:33.761      "uuid": "b23134d5-b8de-43be-a089-32d584d61baa"
00:09:33.761    }
00:09:33.761  }
00:09:33.761  Got JSON-RPC error response
00:09:33.761  GoRPCClient: error on JSON-RPC call
00:09:33.761   18:54:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1
00:09:33.761   18:54:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:09:33.761   18:54:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:09:33.761   18:54:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 ))
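
Deleting aio_bdev hot-removes the lvstore, so the wrapped bdev_lvol_get_lvstores call is expected to fail with -19 (No such device); the NOT wrapper turns that expected failure into a passing step. A minimal stand-in for that pattern (the real NOT in autotest_common.sh also inspects the exit code, as the es checks above show):

    NOT() {
        if "$@"; then
            return 1        # command unexpectedly succeeded
        fi
        return 0            # command failed, which is what the caller wanted
    }
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    NOT "$rpc" bdev_lvol_get_lvstores -u b23134d5-b8de-43be-a089-32d584d61baa && echo "lvstore is gone, as expected"
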
00:09:33.761   18:54:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:09:34.019  aio_bdev
00:09:34.019   18:54:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 2f2240b1-42e6-4dc0-85b9-feb46fb1081c
00:09:34.019   18:54:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=2f2240b1-42e6-4dc0-85b9-feb46fb1081c
00:09:34.019   18:54:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:09:34.019   18:54:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i
00:09:34.019   18:54:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:09:34.019   18:54:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:09:34.019   18:54:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine
00:09:34.278   18:54:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2f2240b1-42e6-4dc0-85b9-feb46fb1081c -t 2000
00:09:34.536  [
00:09:34.536    {
00:09:34.536      "aliases": [
00:09:34.536        "lvs/lvol"
00:09:34.536      ],
00:09:34.536      "assigned_rate_limits": {
00:09:34.536        "r_mbytes_per_sec": 0,
00:09:34.536        "rw_ios_per_sec": 0,
00:09:34.536        "rw_mbytes_per_sec": 0,
00:09:34.536        "w_mbytes_per_sec": 0
00:09:34.536      },
00:09:34.536      "block_size": 4096,
00:09:34.536      "claimed": false,
00:09:34.536      "driver_specific": {
00:09:34.536        "lvol": {
00:09:34.536          "base_bdev": "aio_bdev",
00:09:34.536          "clone": false,
00:09:34.536          "esnap_clone": false,
00:09:34.536          "lvol_store_uuid": "b23134d5-b8de-43be-a089-32d584d61baa",
00:09:34.536          "num_allocated_clusters": 38,
00:09:34.536          "snapshot": false,
00:09:34.536          "thin_provision": false
00:09:34.536        }
00:09:34.536      },
00:09:34.536      "name": "2f2240b1-42e6-4dc0-85b9-feb46fb1081c",
00:09:34.536      "num_blocks": 38912,
00:09:34.536      "product_name": "Logical Volume",
00:09:34.536      "supported_io_types": {
00:09:34.536        "abort": false,
00:09:34.536        "compare": false,
00:09:34.536        "compare_and_write": false,
00:09:34.536        "copy": false,
00:09:34.536        "flush": false,
00:09:34.536        "get_zone_info": false,
00:09:34.536        "nvme_admin": false,
00:09:34.536        "nvme_io": false,
00:09:34.536        "nvme_io_md": false,
00:09:34.536        "nvme_iov_md": false,
00:09:34.536        "read": true,
00:09:34.536        "reset": true,
00:09:34.536        "seek_data": true,
00:09:34.536        "seek_hole": true,
00:09:34.536        "unmap": true,
00:09:34.536        "write": true,
00:09:34.536        "write_zeroes": true,
00:09:34.536        "zcopy": false,
00:09:34.536        "zone_append": false,
00:09:34.536        "zone_management": false
00:09:34.536      },
00:09:34.536      "uuid": "2f2240b1-42e6-4dc0-85b9-feb46fb1081c",
00:09:34.536      "zoned": false
00:09:34.536    }
00:09:34.536  ]
00:09:34.536   18:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0
00:09:34.536    18:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b23134d5-b8de-43be-a089-32d584d61baa
00:09:34.536    18:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters'
00:09:34.795   18:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 ))
00:09:34.795    18:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b23134d5-b8de-43be-a089-32d584d61baa
00:09:34.795    18:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters'
00:09:35.054   18:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 ))
00:09:35.054   18:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 2f2240b1-42e6-4dc0-85b9-feb46fb1081c
00:09:35.312   18:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b23134d5-b8de-43be-a089-32d584d61baa
00:09:35.571   18:54:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:09:35.829   18:54:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
00:09:36.088  ************************************
00:09:36.088  END TEST lvs_grow_clean
00:09:36.088  ************************************
00:09:36.088  
00:09:36.088  real	0m17.770s
00:09:36.088  user	0m16.950s
00:09:36.088  sys	0m2.144s
00:09:36.088   18:54:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:36.088   18:54:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x
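That closes lvs_grow_clean: the test re-created the AIO bdev on the same backing file, confirmed the lvol and lvstore came back with free_clusters=61 and total_data_clusters=99, and then tore everything down (bdev_lvol_delete, bdev_lvol_delete_lvstore, bdev_aio_delete, rm -f). A condensed sketch of that persistence re-check, assuming the same paths and UUID as above:

  # Re-create the AIO bdev and verify the grown lvstore metadata survived.
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  AIO_FILE=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
  LVS_UUID=b23134d5-b8de-43be-a089-32d584d61baa   # from this run
  "$RPC" bdev_aio_create "$AIO_FILE" aio_bdev 4096
  "$RPC" bdev_wait_for_examine
  free=$("$RPC" bdev_lvol_get_lvstores -u "$LVS_UUID" | jq -r '.[0].free_clusters')
  total=$("$RPC" bdev_lvol_get_lvstores -u "$LVS_UUID" | jq -r '.[0].total_data_clusters')
  (( free == 61 && total == 99 )) || exit 1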
00:09:36.346   18:54:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty
00:09:36.346   18:54:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:09:36.346   18:54:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:36.346   18:54:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:09:36.346  ************************************
00:09:36.346  START TEST lvs_grow_dirty
00:09:36.346  ************************************
00:09:36.346   18:54:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty
00:09:36.346   18:54:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol
00:09:36.346   18:54:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters
00:09:36.346   18:54:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid
00:09:36.346   18:54:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200
00:09:36.346   18:54:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400
00:09:36.346   18:54:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150
00:09:36.346   18:54:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
00:09:36.346   18:54:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
00:09:36.346    18:54:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:09:36.605   18:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev
00:09:36.605    18:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
00:09:36.863   18:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=077ac129-6e7f-42bd-a2b7-b589a4e87d1e
00:09:36.863    18:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 077ac129-6e7f-42bd-a2b7-b589a4e87d1e
00:09:36.863    18:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters'
00:09:37.121   18:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49
00:09:37.121   18:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 ))
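The dirty variant starts from the same fixture: a 200 MiB file-backed AIO bdev carrying an lvstore with 4 MiB clusters, which comes out to 49 data clusters here (the rest of the file holds lvstore metadata at the requested --md-pages-per-cluster-ratio of 300). A sketch of that setup; only paths and sizes shown in the log are assumed:

  # Re-create the fixture: 200 MiB backing file, AIO bdev, lvstore with 4 MiB clusters.
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  AIO_FILE=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
  rm -f "$AIO_FILE"
  truncate -s 200M "$AIO_FILE"
  "$RPC" bdev_aio_create "$AIO_FILE" aio_bdev 4096
  lvs=$("$RPC" bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  "$RPC" bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # expect 49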
00:09:37.121    18:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 077ac129-6e7f-42bd-a2b7-b589a4e87d1e lvol 150
00:09:37.380   18:54:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=63db3081-ab5d-4b09-bb89-c6144fd21beb
00:09:37.380   18:54:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
00:09:37.380   18:54:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev
00:09:37.638  [2024-12-13 18:54:09.221996] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400
00:09:37.638  [2024-12-13 18:54:09.222079] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1
00:09:37.638  true
00:09:37.638    18:54:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 077ac129-6e7f-42bd-a2b7-b589a4e87d1e
00:09:37.638    18:54:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters'
00:09:37.897   18:54:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 ))
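Growing the pool is a two-step operation. The backing file is extended with truncate and bdev_aio_rescan lets the AIO bdev pick up the new size (the NOTICE above shows the block count going from 51200 to 102400), but total_data_clusters stays at 49 until bdev_lvol_grow_lvstore is issued later in the run. A sketch of this first step, with paths and sizes taken from the log:

  # Resize the backing file; the lvstore itself is grown separately later.
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  AIO_FILE=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
  LVS=077ac129-6e7f-42bd-a2b7-b589a4e87d1e        # lvstore UUID from this run
  truncate -s 400M "$AIO_FILE"        # 51200 -> 102400 blocks of 4096 bytes
  "$RPC" bdev_aio_rescan aio_bdev     # AIO bdev notices the new size
  "$RPC" bdev_lvol_get_lvstores -u "$LVS" | jq -r '.[0].total_data_clusters'   # still 49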
00:09:37.897   18:54:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:09:38.155   18:54:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 63db3081-ab5d-4b09-bb89-c6144fd21beb
00:09:38.414   18:54:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
00:09:38.414  [2024-12-13 18:54:10.202552] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:09:38.414   18:54:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
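With the lvol in place, the target exports it over NVMe/TCP: a subsystem is created, the lvol is added as a namespace, and data plus discovery listeners are opened on 10.0.0.3:4420. A sketch mirroring the RPCs above; the NQN, address, and lvol UUID are the ones from this run:

  # Export the lvol bdev over NVMe/TCP.
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode0
  LVOL=63db3081-ab5d-4b09-bb89-c6144fd21beb       # lvol UUID from this run
  "$RPC" nvmf_create_subsystem "$NQN" -a -s SPDK0
  "$RPC" nvmf_subsystem_add_ns "$NQN" "$LVOL"
  "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.3 -s 4420
  "$RPC" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420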
00:09:38.980   18:54:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=81484
00:09:38.980   18:54:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z
00:09:38.980   18:54:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:09:38.980   18:54:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 81484 /var/tmp/bdevperf.sock
00:09:38.980   18:54:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 81484 ']'
00:09:38.980   18:54:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:09:38.980   18:54:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:38.980  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:09:38.980   18:54:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:09:38.980   18:54:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:38.980   18:54:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:09:38.980  [2024-12-13 18:54:10.551231] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:09:38.980  [2024-12-13 18:54:10.551333] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81484 ]
00:09:38.980  [2024-12-13 18:54:10.700854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:38.980  [2024-12-13 18:54:10.743024] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:09:39.916   18:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:39.916   18:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0
00:09:39.916   18:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
00:09:40.174  Nvme0n1
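bdevperf runs as a second SPDK application with its own RPC socket (/var/tmp/bdevperf.sock); the exported namespace is attached through that socket, and the attach returns the Nvme0n1 bdev the workload then targets. A sketch of the attach, with all parameters taken from the command above:

  # Attach the NVMe/TCP namespace inside the bdevperf application.
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$RPC" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
      -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0   # prints Nvme0n1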
00:09:40.174   18:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000
00:09:40.433  [
00:09:40.433    {
00:09:40.433      "aliases": [
00:09:40.433        "63db3081-ab5d-4b09-bb89-c6144fd21beb"
00:09:40.433      ],
00:09:40.433      "assigned_rate_limits": {
00:09:40.433        "r_mbytes_per_sec": 0,
00:09:40.433        "rw_ios_per_sec": 0,
00:09:40.433        "rw_mbytes_per_sec": 0,
00:09:40.433        "w_mbytes_per_sec": 0
00:09:40.433      },
00:09:40.433      "block_size": 4096,
00:09:40.433      "claimed": false,
00:09:40.433      "driver_specific": {
00:09:40.433        "mp_policy": "active_passive",
00:09:40.433        "nvme": [
00:09:40.433          {
00:09:40.433            "ctrlr_data": {
00:09:40.433              "ana_reporting": false,
00:09:40.433              "cntlid": 1,
00:09:40.433              "firmware_revision": "25.01",
00:09:40.433              "model_number": "SPDK bdev Controller",
00:09:40.433              "multi_ctrlr": true,
00:09:40.433              "oacs": {
00:09:40.433                "firmware": 0,
00:09:40.433                "format": 0,
00:09:40.433                "ns_manage": 0,
00:09:40.433                "security": 0
00:09:40.433              },
00:09:40.433              "serial_number": "SPDK0",
00:09:40.433              "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:09:40.433              "vendor_id": "0x8086"
00:09:40.433            },
00:09:40.433            "ns_data": {
00:09:40.433              "can_share": true,
00:09:40.433              "id": 1
00:09:40.433            },
00:09:40.433            "trid": {
00:09:40.433              "adrfam": "IPv4",
00:09:40.433              "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:09:40.433              "traddr": "10.0.0.3",
00:09:40.433              "trsvcid": "4420",
00:09:40.433              "trtype": "TCP"
00:09:40.433            },
00:09:40.433            "vs": {
00:09:40.433              "nvme_version": "1.3"
00:09:40.433            }
00:09:40.433          }
00:09:40.433        ]
00:09:40.433      },
00:09:40.433      "memory_domains": [
00:09:40.433        {
00:09:40.433          "dma_device_id": "system",
00:09:40.433          "dma_device_type": 1
00:09:40.433        }
00:09:40.433      ],
00:09:40.433      "name": "Nvme0n1",
00:09:40.433      "num_blocks": 38912,
00:09:40.433      "numa_id": -1,
00:09:40.433      "product_name": "NVMe disk",
00:09:40.433      "supported_io_types": {
00:09:40.433        "abort": true,
00:09:40.433        "compare": true,
00:09:40.433        "compare_and_write": true,
00:09:40.433        "copy": true,
00:09:40.433        "flush": true,
00:09:40.433        "get_zone_info": false,
00:09:40.433        "nvme_admin": true,
00:09:40.433        "nvme_io": true,
00:09:40.433        "nvme_io_md": false,
00:09:40.433        "nvme_iov_md": false,
00:09:40.433        "read": true,
00:09:40.433        "reset": true,
00:09:40.433        "seek_data": false,
00:09:40.433        "seek_hole": false,
00:09:40.433        "unmap": true,
00:09:40.433        "write": true,
00:09:40.433        "write_zeroes": true,
00:09:40.433        "zcopy": false,
00:09:40.433        "zone_append": false,
00:09:40.433        "zone_management": false
00:09:40.433      },
00:09:40.433      "uuid": "63db3081-ab5d-4b09-bb89-c6144fd21beb",
00:09:40.433      "zoned": false
00:09:40.433    }
00:09:40.433  ]
00:09:40.433   18:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=81538
00:09:40.433   18:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:09:40.433   18:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2
00:09:40.433  Running I/O for 10 seconds...
00:09:41.368                                                                                                  Latency(us)
00:09:41.368  
[2024-12-13T18:54:13.192Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:09:41.368  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:41.368  	 Nvme0n1             :       1.00    7289.00      28.47       0.00     0.00       0.00       0.00       0.00
00:09:41.368  
[2024-12-13T18:54:13.192Z]  ===================================================================================================================
00:09:41.368  
[2024-12-13T18:54:13.192Z]  Total                       :               7289.00      28.47       0.00     0.00       0.00       0.00       0.00
00:09:41.368  
00:09:42.302   18:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 077ac129-6e7f-42bd-a2b7-b589a4e87d1e
00:09:42.560  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:42.560  	 Nvme0n1             :       2.00    7275.00      28.42       0.00     0.00       0.00       0.00       0.00
00:09:42.560  
[2024-12-13T18:54:14.384Z]  ===================================================================================================================
00:09:42.560  
[2024-12-13T18:54:14.384Z]  Total                       :               7275.00      28.42       0.00     0.00       0.00       0.00       0.00
00:09:42.560  
00:09:42.818  true
00:09:42.818    18:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 077ac129-6e7f-42bd-a2b7-b589a4e87d1e
00:09:42.818    18:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters'
00:09:43.076   18:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99
00:09:43.076   18:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 ))
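While the 10-second randwrite workload is still running, the lvstore is grown with bdev_lvol_grow_lvstore and the test confirms total_data_clusters moved from 49 to 99 without disturbing the I/O. A sketch of the grow-and-verify step, reusing the lvstore UUID from this run:

  # Grow the lvstore online and verify the new cluster count.
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  LVS=077ac129-6e7f-42bd-a2b7-b589a4e87d1e        # from this run
  "$RPC" bdev_lvol_grow_lvstore -u "$LVS"
  clusters=$("$RPC" bdev_lvol_get_lvstores -u "$LVS" | jq -r '.[0].total_data_clusters')
  (( clusters == 99 )) || exit 1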
00:09:43.076   18:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 81538
00:09:43.642  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:43.642  	 Nvme0n1             :       3.00    7231.33      28.25       0.00     0.00       0.00       0.00       0.00
00:09:43.642  
[2024-12-13T18:54:15.466Z]  ===================================================================================================================
00:09:43.642  
[2024-12-13T18:54:15.466Z]  Total                       :               7231.33      28.25       0.00     0.00       0.00       0.00       0.00
00:09:43.642  
00:09:44.576  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:44.576  	 Nvme0n1             :       4.00    7190.75      28.09       0.00     0.00       0.00       0.00       0.00
00:09:44.576  
[2024-12-13T18:54:16.400Z]  ===================================================================================================================
00:09:44.576  
[2024-12-13T18:54:16.400Z]  Total                       :               7190.75      28.09       0.00     0.00       0.00       0.00       0.00
00:09:44.576  
00:09:45.511  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:45.511  	 Nvme0n1             :       5.00    7184.20      28.06       0.00     0.00       0.00       0.00       0.00
00:09:45.511  
[2024-12-13T18:54:17.335Z]  ===================================================================================================================
00:09:45.511  
[2024-12-13T18:54:17.335Z]  Total                       :               7184.20      28.06       0.00     0.00       0.00       0.00       0.00
00:09:45.511  
00:09:46.495  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:46.495  	 Nvme0n1             :       6.00    6967.67      27.22       0.00     0.00       0.00       0.00       0.00
00:09:46.495  
[2024-12-13T18:54:18.319Z]  ===================================================================================================================
00:09:46.495  
[2024-12-13T18:54:18.319Z]  Total                       :               6967.67      27.22       0.00     0.00       0.00       0.00       0.00
00:09:46.495  
00:09:47.430  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:47.430  	 Nvme0n1             :       7.00    6976.43      27.25       0.00     0.00       0.00       0.00       0.00
00:09:47.430  
[2024-12-13T18:54:19.254Z]  ===================================================================================================================
00:09:47.430  
[2024-12-13T18:54:19.254Z]  Total                       :               6976.43      27.25       0.00     0.00       0.00       0.00       0.00
00:09:47.430  
00:09:48.365  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:48.365  	 Nvme0n1             :       8.00    6975.38      27.25       0.00     0.00       0.00       0.00       0.00
00:09:48.365  
[2024-12-13T18:54:20.189Z]  ===================================================================================================================
00:09:48.365  
[2024-12-13T18:54:20.189Z]  Total                       :               6975.38      27.25       0.00     0.00       0.00       0.00       0.00
00:09:48.365  
00:09:49.740  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:49.740  	 Nvme0n1             :       9.00    6987.56      27.30       0.00     0.00       0.00       0.00       0.00
00:09:49.740  
[2024-12-13T18:54:21.564Z]  ===================================================================================================================
00:09:49.740  
[2024-12-13T18:54:21.564Z]  Total                       :               6987.56      27.30       0.00     0.00       0.00       0.00       0.00
00:09:49.740  
00:09:50.674  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:50.674  	 Nvme0n1             :      10.00    6993.90      27.32       0.00     0.00       0.00       0.00       0.00
00:09:50.674  
[2024-12-13T18:54:22.498Z]  ===================================================================================================================
00:09:50.674  
[2024-12-13T18:54:22.498Z]  Total                       :               6993.90      27.32       0.00     0.00       0.00       0.00       0.00
00:09:50.674  
00:09:50.674  
00:09:50.674                                                                                                  Latency(us)
00:09:50.674  
[2024-12-13T18:54:22.498Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:09:50.674  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:50.674  	 Nvme0n1             :      10.01    6996.34      27.33       0.00     0.00   18289.92    7804.74  187790.43
00:09:50.674  
[2024-12-13T18:54:22.498Z]  ===================================================================================================================
00:09:50.674  
[2024-12-13T18:54:22.498Z]  Total                       :               6996.34      27.33       0.00     0.00   18289.92    7804.74  187790.43
00:09:50.674  {
00:09:50.674    "results": [
00:09:50.674      {
00:09:50.674        "job": "Nvme0n1",
00:09:50.674        "core_mask": "0x2",
00:09:50.674        "workload": "randwrite",
00:09:50.674        "status": "finished",
00:09:50.674        "queue_depth": 128,
00:09:50.674        "io_size": 4096,
00:09:50.674        "runtime": 10.014813,
00:09:50.674        "iops": 6996.336326998817,
00:09:50.674        "mibps": 27.329438777339128,
00:09:50.674        "io_failed": 0,
00:09:50.674        "io_timeout": 0,
00:09:50.674        "avg_latency_us": 18289.918510308962,
00:09:50.674        "min_latency_us": 7804.741818181818,
00:09:50.674        "max_latency_us": 187790.42909090908
00:09:50.674      }
00:09:50.674    ],
00:09:50.674    "core_count": 1
00:09:50.674  }
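bdevperf finishes by emitting the cumulative run as JSON (the block directly above). The per-job fields can be pulled out with jq for further checks; a sketch, assuming the JSON has been saved to a hypothetical results.json rather than read from the log:

  # Summarize the bdevperf results (results.json is an assumed filename).
  jq -r '.results[] | "\(.job): \(.iops) IOPS, avg latency \(.avg_latency_us) us"' results.json
  jq -r '.core_count' results.json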
00:09:50.674   18:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 81484
00:09:50.674   18:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 81484 ']'
00:09:50.674   18:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 81484
00:09:50.674    18:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname
00:09:50.674   18:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:50.674    18:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81484
00:09:50.675   18:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:09:50.675   18:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:09:50.675  killing process with pid 81484
00:09:50.675  Received shutdown signal, test time was about 10.000000 seconds
00:09:50.675  
00:09:50.675                                                                                                  Latency(us)
00:09:50.675  
[2024-12-13T18:54:22.499Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:09:50.675  
[2024-12-13T18:54:22.499Z]  ===================================================================================================================
00:09:50.675  
[2024-12-13T18:54:22.499Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:09:50.675   18:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81484'
00:09:50.675   18:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 81484
00:09:50.675   18:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 81484
00:09:50.675   18:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420
00:09:50.933   18:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:09:51.191    18:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 077ac129-6e7f-42bd-a2b7-b589a4e87d1e
00:09:51.191    18:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters'
00:09:51.449   18:54:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61
00:09:51.449   18:54:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]]
00:09:51.449   18:54:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 80904
00:09:51.449   18:54:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 80904
00:09:51.449  /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 80904 Killed                  "${NVMF_APP[@]}" "$@"
00:09:51.449   18:54:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true
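This is the step that makes the test "dirty": the nvmf target that owns the lvstore (pid 80904 here) is killed with SIGKILL instead of being shut down cleanly, so the blobstore on the AIO file is left marked as in use, and a fresh nvmf_tgt is started on core 0 right after. A rough sketch of that step; $nvmf_app_pid is a hypothetical variable standing in for the pid the harness tracks:

  # Simulate an unclean shutdown of the target that owns the lvstore.
  kill -9 "$nvmf_app_pid"          # pid 80904 in this run
  wait "$nvmf_app_pid" || true     # wait reports the kill status, hence '|| true'
  # The harness then starts a new target (nvmfappstart -m 0x1).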
00:09:51.449   18:54:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1
00:09:51.449   18:54:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:09:51.449   18:54:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable
00:09:51.449   18:54:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:09:51.449   18:54:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=81701
00:09:51.449   18:54:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
00:09:51.449   18:54:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 81701
00:09:51.449   18:54:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 81701 ']'
00:09:51.449   18:54:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:51.449   18:54:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:51.449  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:51.449   18:54:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:51.449   18:54:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:51.449   18:54:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:09:51.449  [2024-12-13 18:54:23.253065] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:09:51.449  [2024-12-13 18:54:23.253179] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:09:51.707  [2024-12-13 18:54:23.390603] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:51.707  [2024-12-13 18:54:23.421099] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:09:51.707  [2024-12-13 18:54:23.421169] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:09:51.707  [2024-12-13 18:54:23.421193] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:09:51.707  [2024-12-13 18:54:23.421201] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:09:51.707  [2024-12-13 18:54:23.421208] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:09:51.707  [2024-12-13 18:54:23.421580] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:09:51.966   18:54:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:51.966   18:54:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0
00:09:51.966   18:54:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:09:51.966   18:54:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable
00:09:51.966   18:54:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:09:51.966   18:54:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:09:51.966    18:54:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:09:52.224  [2024-12-13 18:54:23.876153] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore
00:09:52.224  [2024-12-13 18:54:23.876532] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0
00:09:52.224  [2024-12-13 18:54:23.876788] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1
00:09:52.224   18:54:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev
00:09:52.224   18:54:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 63db3081-ab5d-4b09-bb89-c6144fd21beb
00:09:52.224   18:54:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=63db3081-ab5d-4b09-bb89-c6144fd21beb
00:09:52.224   18:54:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:09:52.224   18:54:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i
00:09:52.224   18:54:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:09:52.224   18:54:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:09:52.224   18:54:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine
00:09:52.483   18:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 63db3081-ab5d-4b09-bb89-c6144fd21beb -t 2000
00:09:52.741  [
00:09:52.741    {
00:09:52.741      "aliases": [
00:09:52.741        "lvs/lvol"
00:09:52.741      ],
00:09:52.741      "assigned_rate_limits": {
00:09:52.741        "r_mbytes_per_sec": 0,
00:09:52.741        "rw_ios_per_sec": 0,
00:09:52.741        "rw_mbytes_per_sec": 0,
00:09:52.741        "w_mbytes_per_sec": 0
00:09:52.741      },
00:09:52.741      "block_size": 4096,
00:09:52.741      "claimed": false,
00:09:52.741      "driver_specific": {
00:09:52.741        "lvol": {
00:09:52.741          "base_bdev": "aio_bdev",
00:09:52.741          "clone": false,
00:09:52.741          "esnap_clone": false,
00:09:52.741          "lvol_store_uuid": "077ac129-6e7f-42bd-a2b7-b589a4e87d1e",
00:09:52.741          "num_allocated_clusters": 38,
00:09:52.741          "snapshot": false,
00:09:52.741          "thin_provision": false
00:09:52.741        }
00:09:52.741      },
00:09:52.741      "name": "63db3081-ab5d-4b09-bb89-c6144fd21beb",
00:09:52.741      "num_blocks": 38912,
00:09:52.741      "product_name": "Logical Volume",
00:09:52.741      "supported_io_types": {
00:09:52.741        "abort": false,
00:09:52.741        "compare": false,
00:09:52.741        "compare_and_write": false,
00:09:52.741        "copy": false,
00:09:52.741        "flush": false,
00:09:52.741        "get_zone_info": false,
00:09:52.741        "nvme_admin": false,
00:09:52.741        "nvme_io": false,
00:09:52.741        "nvme_io_md": false,
00:09:52.741        "nvme_iov_md": false,
00:09:52.741        "read": true,
00:09:52.741        "reset": true,
00:09:52.741        "seek_data": true,
00:09:52.741        "seek_hole": true,
00:09:52.741        "unmap": true,
00:09:52.741        "write": true,
00:09:52.741        "write_zeroes": true,
00:09:52.741        "zcopy": false,
00:09:52.741        "zone_append": false,
00:09:52.741        "zone_management": false
00:09:52.741      },
00:09:52.741      "uuid": "63db3081-ab5d-4b09-bb89-c6144fd21beb",
00:09:52.741      "zoned": false
00:09:52.741    }
00:09:52.741  ]
00:09:52.741   18:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0
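Re-creating the AIO bdev after the unclean shutdown triggers blobstore recovery (the bs_recover / "Recover: blob" notices further up), and the harness then waits for the lvol bdev to reappear: bdev_wait_for_examine waits for bdev examination to finish, and bdev_get_bdevs polls for the bdev by name with a timeout. A sketch of that wait, mirroring the waitforbdev helper seen in the trace:

  # Wait for the recovered lvol bdev to show up again.
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  BDEV=63db3081-ab5d-4b09-bb89-c6144fd21beb       # lvol bdev name from this run
  "$RPC" bdev_wait_for_examine
  "$RPC" bdev_get_bdevs -b "$BDEV" -t 2000 >/dev/null   # -t waits up to 2000 ms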
00:09:52.741    18:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 077ac129-6e7f-42bd-a2b7-b589a4e87d1e
00:09:52.741    18:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters'
00:09:52.999   18:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 ))
00:09:52.999    18:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 077ac129-6e7f-42bd-a2b7-b589a4e87d1e
00:09:52.999    18:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters'
00:09:52.999   18:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 ))
00:09:52.999   18:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:09:53.257  [2024-12-13 18:54:25.061618] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs
00:09:53.514   18:54:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 077ac129-6e7f-42bd-a2b7-b589a4e87d1e
00:09:53.514   18:54:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0
00:09:53.514   18:54:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 077ac129-6e7f-42bd-a2b7-b589a4e87d1e
00:09:53.514   18:54:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:09:53.514   18:54:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:53.514    18:54:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:09:53.514   18:54:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:53.514    18:54:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:09:53.514   18:54:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:53.514   18:54:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:09:53.514   18:54:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]]
00:09:53.514   18:54:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 077ac129-6e7f-42bd-a2b7-b589a4e87d1e
00:09:53.514  2024/12/13 18:54:25 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:077ac129-6e7f-42bd-a2b7-b589a4e87d1e], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device
00:09:53.514  request:
00:09:53.514  {
00:09:53.514    "method": "bdev_lvol_get_lvstores",
00:09:53.514    "params": {
00:09:53.514      "uuid": "077ac129-6e7f-42bd-a2b7-b589a4e87d1e"
00:09:53.514    }
00:09:53.514  }
00:09:53.514  Got JSON-RPC error response
00:09:53.514  GoRPCClient: error on JSON-RPC call
00:09:53.772   18:54:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1
00:09:53.772   18:54:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:09:53.772   18:54:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:09:53.772   18:54:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:09:53.772   18:54:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:09:53.772  aio_bdev
00:09:53.772   18:54:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 63db3081-ab5d-4b09-bb89-c6144fd21beb
00:09:53.772   18:54:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=63db3081-ab5d-4b09-bb89-c6144fd21beb
00:09:53.772   18:54:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:09:53.772   18:54:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i
00:09:53.772   18:54:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:09:53.772   18:54:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:09:53.772   18:54:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine
00:09:54.030   18:54:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 63db3081-ab5d-4b09-bb89-c6144fd21beb -t 2000
00:09:54.288  [
00:09:54.288    {
00:09:54.288      "aliases": [
00:09:54.288        "lvs/lvol"
00:09:54.288      ],
00:09:54.288      "assigned_rate_limits": {
00:09:54.288        "r_mbytes_per_sec": 0,
00:09:54.288        "rw_ios_per_sec": 0,
00:09:54.288        "rw_mbytes_per_sec": 0,
00:09:54.288        "w_mbytes_per_sec": 0
00:09:54.288      },
00:09:54.288      "block_size": 4096,
00:09:54.288      "claimed": false,
00:09:54.288      "driver_specific": {
00:09:54.288        "lvol": {
00:09:54.288          "base_bdev": "aio_bdev",
00:09:54.288          "clone": false,
00:09:54.288          "esnap_clone": false,
00:09:54.288          "lvol_store_uuid": "077ac129-6e7f-42bd-a2b7-b589a4e87d1e",
00:09:54.288          "num_allocated_clusters": 38,
00:09:54.288          "snapshot": false,
00:09:54.288          "thin_provision": false
00:09:54.288        }
00:09:54.288      },
00:09:54.288      "name": "63db3081-ab5d-4b09-bb89-c6144fd21beb",
00:09:54.288      "num_blocks": 38912,
00:09:54.288      "product_name": "Logical Volume",
00:09:54.288      "supported_io_types": {
00:09:54.288        "abort": false,
00:09:54.288        "compare": false,
00:09:54.288        "compare_and_write": false,
00:09:54.288        "copy": false,
00:09:54.288        "flush": false,
00:09:54.288        "get_zone_info": false,
00:09:54.288        "nvme_admin": false,
00:09:54.288        "nvme_io": false,
00:09:54.288        "nvme_io_md": false,
00:09:54.288        "nvme_iov_md": false,
00:09:54.288        "read": true,
00:09:54.288        "reset": true,
00:09:54.288        "seek_data": true,
00:09:54.288        "seek_hole": true,
00:09:54.288        "unmap": true,
00:09:54.288        "write": true,
00:09:54.288        "write_zeroes": true,
00:09:54.288        "zcopy": false,
00:09:54.288        "zone_append": false,
00:09:54.288        "zone_management": false
00:09:54.288      },
00:09:54.288      "uuid": "63db3081-ab5d-4b09-bb89-c6144fd21beb",
00:09:54.288      "zoned": false
00:09:54.288    }
00:09:54.288  ]
00:09:54.288   18:54:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0
00:09:54.288    18:54:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 077ac129-6e7f-42bd-a2b7-b589a4e87d1e
00:09:54.288    18:54:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters'
00:09:54.546   18:54:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 ))
00:09:54.546    18:54:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 077ac129-6e7f-42bd-a2b7-b589a4e87d1e
00:09:54.546    18:54:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters'
00:09:54.805   18:54:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 ))
00:09:54.805   18:54:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 63db3081-ab5d-4b09-bb89-c6144fd21beb
00:09:55.063   18:54:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 077ac129-6e7f-42bd-a2b7-b589a4e87d1e
00:09:55.321   18:54:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:09:55.579   18:54:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
00:09:55.837  
00:09:55.837  real	0m19.692s
00:09:55.837  user	0m39.279s
00:09:55.837  sys	0m9.668s
00:09:55.837   18:54:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:55.837  ************************************
00:09:55.837  END TEST lvs_grow_dirty
00:09:55.837  ************************************
00:09:55.837   18:54:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:09:56.096   18:54:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0
00:09:56.096   18:54:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id
00:09:56.096   18:54:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0
00:09:56.096   18:54:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']'
00:09:56.096    18:54:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:09:56.096   18:54:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0
00:09:56.096   18:54:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]]
00:09:56.096   18:54:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files
00:09:56.096   18:54:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:09:56.096  nvmf_trace.0
00:09:56.096   18:54:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0
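On exit the harness archives the nvmf trace shared-memory file (nvmf_trace.0) from /dev/shm into the output directory so the run can be analyzed offline with spdk_trace, as the startup notice earlier suggested. A sketch of that capture, with paths taken from the commands above:

  # Archive the trace shm file for offline analysis.
  shm=$(find /dev/shm -name '*.0' -printf '%f\n')
  tar -C /dev/shm/ -czf "/home/vagrant/spdk_repo/spdk/../output/${shm}_shm.tar.gz" "$shm"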
00:09:56.096   18:54:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini
00:09:56.096   18:54:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup
00:09:56.096   18:54:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync
00:09:56.663   18:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:09:56.663   18:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e
00:09:56.663   18:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20}
00:09:56.663   18:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:09:56.663  rmmod nvme_tcp
00:09:56.663  rmmod nvme_fabrics
00:09:56.663  rmmod nvme_keyring
00:09:56.663   18:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:09:56.663   18:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e
00:09:56.663   18:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0
00:09:56.663   18:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 81701 ']'
00:09:56.663   18:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 81701
00:09:56.663   18:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 81701 ']'
00:09:56.663   18:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 81701
00:09:56.921    18:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname
00:09:56.921   18:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:56.921    18:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81701
00:09:56.921   18:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:56.921   18:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:56.921  killing process with pid 81701
00:09:56.921   18:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81701'
00:09:56.921   18:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 81701
00:09:56.921   18:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 81701
00:09:56.921   18:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:09:56.921   18:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:09:56.921   18:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:09:56.921   18:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr
00:09:56.921   18:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save
00:09:56.921   18:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:09:56.921   18:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore
00:09:56.921   18:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:09:56.921   18:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:09:56.921   18:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:09:56.921   18:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:09:56.922   18:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:09:57.180   18:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:09:57.180   18:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:09:57.180   18:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:09:57.180   18:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:09:57.180   18:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:09:57.180   18:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:09:57.180   18:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:09:57.180   18:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:09:57.180   18:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:09:57.180   18:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:09:57.180   18:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@246 -- # remove_spdk_ns
00:09:57.180   18:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:57.180   18:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:09:57.180    18:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:57.180   18:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0
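nvmftestfini then dismantles the fixture: the nvme-tcp, nvme-fabrics, and nvme-keyring modules are unloaded, the SPDK iptables rules are restored, and the veth pairs, bridge, and nvmf_tgt_ns_spdk namespace that carried the 10.0.0.x test network are removed. A condensed sketch of the interface cleanup shown above:

  # Tear down the veth/bridge/netns test network (condensed from the log).
  for ifc in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$ifc" nomaster
      ip link set "$ifc" down
  done
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if
  ip link delete nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2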
00:09:57.180  
00:09:57.180  real	0m40.161s
00:09:57.180  user	1m2.294s
00:09:57.180  sys	0m13.152s
00:09:57.180   18:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:57.180   18:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:09:57.180  ************************************
00:09:57.180  END TEST nvmf_lvs_grow
00:09:57.180  ************************************
00:09:57.180   18:54:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp
00:09:57.180   18:54:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:09:57.180   18:54:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:57.180   18:54:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:09:57.180  ************************************
00:09:57.180  START TEST nvmf_bdev_io_wait
00:09:57.180  ************************************
00:09:57.180   18:54:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp
00:09:57.440  * Looking for test storage...
00:09:57.440  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:09:57.440    18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:09:57.440     18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version
00:09:57.440     18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:09:57.440    18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:09:57.440    18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:57.440    18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:57.440    18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:57.440    18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-:
00:09:57.440    18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1
00:09:57.440    18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-:
00:09:57.440    18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2
00:09:57.440    18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<'
00:09:57.440    18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2
00:09:57.440    18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1
00:09:57.440    18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:57.440    18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in
00:09:57.440    18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1
00:09:57.440    18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:57.440    18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:57.440     18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1
00:09:57.440     18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1
00:09:57.440     18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:57.440     18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1
00:09:57.440    18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1
00:09:57.440     18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2
00:09:57.440     18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2
00:09:57.440     18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:57.440     18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2
00:09:57.440    18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2
00:09:57.440    18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:57.440    18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:57.440    18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0
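The block above is scripts/common.sh checking whether the installed lcov is older than 2.x before choosing coverage flags: "lt 1.15 2" splits both versions on ".", "-" and ":" and compares them field by field. A rough standalone equivalent, assuming purely numeric fields, which is all this check needs (version_lt is an illustrative name, not the library's own helper):

# Hedged sketch of the field-by-field comparison driven by "lt 1.15 2" above.
version_lt() {
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    local v a b
    for (( v = 0; v < len; v++ )); do
        a=${ver1[v]:-0}; b=${ver2[v]:-0}
        if (( a < b )); then return 0; fi      # strictly smaller -> "less than"
        if (( a > b )); then return 1; fi      # strictly greater -> not "less than"
    done
    return 1                                   # all fields equal -> not "less than"
}
version_lt 1.15 2 && echo "lcov < 2: use the 1.x-style --rc coverage options"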
00:09:57.441    18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:57.441    18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:09:57.441  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:57.441  		--rc genhtml_branch_coverage=1
00:09:57.441  		--rc genhtml_function_coverage=1
00:09:57.441  		--rc genhtml_legend=1
00:09:57.441  		--rc geninfo_all_blocks=1
00:09:57.441  		--rc geninfo_unexecuted_blocks=1
00:09:57.441  		
00:09:57.441  		'
00:09:57.441    18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:09:57.441  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:57.441  		--rc genhtml_branch_coverage=1
00:09:57.441  		--rc genhtml_function_coverage=1
00:09:57.441  		--rc genhtml_legend=1
00:09:57.441  		--rc geninfo_all_blocks=1
00:09:57.441  		--rc geninfo_unexecuted_blocks=1
00:09:57.441  		
00:09:57.441  		'
00:09:57.441    18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:09:57.441  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:57.441  		--rc genhtml_branch_coverage=1
00:09:57.441  		--rc genhtml_function_coverage=1
00:09:57.441  		--rc genhtml_legend=1
00:09:57.441  		--rc geninfo_all_blocks=1
00:09:57.441  		--rc geninfo_unexecuted_blocks=1
00:09:57.441  		
00:09:57.441  		'
00:09:57.441    18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:09:57.441  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:57.441  		--rc genhtml_branch_coverage=1
00:09:57.441  		--rc genhtml_function_coverage=1
00:09:57.441  		--rc genhtml_legend=1
00:09:57.441  		--rc geninfo_all_blocks=1
00:09:57.441  		--rc geninfo_unexecuted_blocks=1
00:09:57.441  		
00:09:57.441  		'
00:09:57.441   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:09:57.441     18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s
00:09:57.441    18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:09:57.441    18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:09:57.441    18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:09:57.441    18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:09:57.441    18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:09:57.441    18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:09:57.441    18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:09:57.441    18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:09:57.441    18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:09:57.441     18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:09:57.441    18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:09:57.441    18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:09:57.441    18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:09:57.441    18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:09:57.441    18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:09:57.441    18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:09:57.441    18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:09:57.441     18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob
00:09:57.441     18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:09:57.441     18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:09:57.441     18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:09:57.441      18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:57.441      18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:57.441      18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:57.441      18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH
00:09:57.441      18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:57.441    18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0
00:09:57.441    18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:09:57.441    18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:09:57.441    18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:09:57.441    18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:09:57.441    18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:09:57.441    18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:09:57.441  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:09:57.441    18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:09:57.441    18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:09:57.441    18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0
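Sourcing test/nvmf/common.sh above mostly pins down the test environment: the three NVMe-oF TCP ports, the subsystem serial, a per-run host NQN generated with nvme gen-hostnqn, and the target application arguments. A trimmed sketch of what that amounts to (variable names and values are the ones visible in the trace; how NVME_HOSTID is derived and the NVMF_APP definition are assumptions):

# Hedged sketch of the environment test/nvmf/common.sh establishes above.
NVMF_PORT=4420                                       # primary NVMe/TCP listener port
NVMF_SECOND_PORT=4421
NVMF_THIRD_PORT=4422
NVMF_SERIAL=SPDKISFASTANDAWESOME
NVME_HOSTNQN=$(nvme gen-hostnqn)                     # e.g. nqn.2014-08.org.nvmexpress:uuid:<random>
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}                  # assumption: host ID reuses the UUID suffix
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
NET_TYPE=virt                                        # veth-based topology, not physical NICs
NVMF_APP_SHM_ID=0
NVMF_APP=(/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt)   # assumption: binary path as traced later
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)          # build_nvmf_app_args: shm id + full tracepoint mask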
00:09:57.441   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64
00:09:57.441   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:09:57.441   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit
00:09:57.441   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:09:57.441   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:09:57.441   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs
00:09:57.441   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no
00:09:57.441   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns
00:09:57.441   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:57.441   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:09:57.441    18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:57.441   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:09:57.441   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:09:57.441   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:09:57.441   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:09:57.441   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:09:57.441   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init
00:09:57.441   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:09:57.441   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:09:57.441   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:09:57.441   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:09:57.441   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:09:57.441   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:09:57.441   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:09:57.442   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:09:57.442   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:09:57.442   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:09:57.442   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:09:57.442   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:09:57.442   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:09:57.442   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:09:57.442   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:09:57.442   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:09:57.442   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:09:57.442  Cannot find device "nvmf_init_br"
00:09:57.442   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true
00:09:57.442   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:09:57.442  Cannot find device "nvmf_init_br2"
00:09:57.442   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true
00:09:57.442   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:09:57.442  Cannot find device "nvmf_tgt_br"
00:09:57.442   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true
00:09:57.442   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:09:57.442  Cannot find device "nvmf_tgt_br2"
00:09:57.442   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true
00:09:57.442   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:09:57.442  Cannot find device "nvmf_init_br"
00:09:57.442   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true
00:09:57.442   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:09:57.700  Cannot find device "nvmf_init_br2"
00:09:57.700   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true
00:09:57.700   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:09:57.700  Cannot find device "nvmf_tgt_br"
00:09:57.700   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true
00:09:57.701   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:09:57.701  Cannot find device "nvmf_tgt_br2"
00:09:57.701   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true
00:09:57.701   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:09:57.701  Cannot find device "nvmf_br"
00:09:57.701   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true
00:09:57.701   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:09:57.701  Cannot find device "nvmf_init_if"
00:09:57.701   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true
00:09:57.701   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:09:57.701  Cannot find device "nvmf_init_if2"
00:09:57.701   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true
00:09:57.701   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:09:57.701  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:09:57.701   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true
00:09:57.701   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:09:57.701  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:09:57.701   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true
00:09:57.701   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:09:57.701   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:09:57.701   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:09:57.701   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:09:57.701   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:09:57.701   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:09:57.701   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:09:57.701   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:09:57.701   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:09:57.701   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:09:57.701   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:09:57.701   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:09:57.701   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:09:57.701   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:09:57.701   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:09:57.701   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:09:57.701   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:09:57.701   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:09:57.701   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:09:57.701   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:09:57.701   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:09:57.701   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:09:57.701   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:09:57.701   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:09:57.701   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:09:57.701   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:09:57.701   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:09:57.701   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:09:57.701   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:09:57.701   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:09:57.701   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:09:57.701   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
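The nvmf_veth_init run above builds the whole virtual topology: a network namespace for the target, two veth pairs for the initiator side and two for the target side, a bridge tying the four bridge-side ends together, and iptables ACCEPT rules for port 4420. Condensed into a hedged, commented sketch (addresses and names are exactly those in the trace; the iptables comments are shortened here, whereas the script embeds the full rule text so the later iptr cleanup can strip them; run as root):

# Hedged sketch of the topology nvmf_veth_init builds above.
ip netns add nvmf_tgt_ns_spdk                                  # the target runs in its own namespace

# veth pairs: *_if is the endpoint that gets an IP, *_br is the end plugged into the bridge.
ip link add nvmf_init_if  type veth peer name nvmf_init_br     # initiator side (10.0.0.1)
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2    # initiator side (10.0.0.2)
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br      # target side    (10.0.0.3)
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2     # target side    (10.0.0.4)

# Move the target endpoints into the namespace and assign the /24 addresses.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# Bring everything up and bridge the four *_br ends so the two sides can talk.
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

# Let NVMe/TCP traffic to port 4420 through, tagged so cleanup can strip the rules later.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment SPDK_NVMF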
00:09:57.701   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:09:57.701  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:09:57.701  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms
00:09:57.701  
00:09:57.701  --- 10.0.0.3 ping statistics ---
00:09:57.701  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:57.701  rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms
00:09:57.701   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:09:57.701  PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:09:57.701  64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.070 ms
00:09:57.701  
00:09:57.701  --- 10.0.0.4 ping statistics ---
00:09:57.701  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:57.701  rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms
00:09:57.701   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:09:57.960  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:09:57.960  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms
00:09:57.960  
00:09:57.960  --- 10.0.0.1 ping statistics ---
00:09:57.960  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:57.960  rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms
00:09:57.960   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:09:57.960  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:09:57.960  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms
00:09:57.960  
00:09:57.960  --- 10.0.0.2 ping statistics ---
00:09:57.960  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:57.960  rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms
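The four pings above just confirm the topology is wired correctly in both directions: the host reaches the namespaced target addresses, and the namespace reaches the host-side initiator addresses. The same check as a tiny, purely illustrative loop:

# Hedged sketch of the bidirectional reachability check above.
for addr in 10.0.0.3 10.0.0.4; do
    ping -c 1 "$addr" >/dev/null && echo "host -> $addr OK"
done
for addr in 10.0.0.1 10.0.0.2; do
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 "$addr" >/dev/null && echo "netns -> $addr OK"
done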
00:09:57.960   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:09:57.960   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0
00:09:57.960   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:09:57.960   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:09:57.960   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:09:57.960   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:09:57.960   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:09:57.960   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:09:57.960   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:09:57.960   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc
00:09:57.960   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:09:57.960   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable
00:09:57.960   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:09:57.960   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=82163
00:09:57.960   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
00:09:57.960   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 82163
00:09:57.960   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 82163 ']'
00:09:57.960   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:57.960   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:57.960   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:57.960  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:57.960   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:57.960   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:09:57.960  [2024-12-13 18:54:29.630039] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:09:57.960  [2024-12-13 18:54:29.630689] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:09:57.960  [2024-12-13 18:54:29.778570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:09:58.219  [2024-12-13 18:54:29.814490] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:09:58.219  [2024-12-13 18:54:29.814568] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:09:58.219  [2024-12-13 18:54:29.814594] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:09:58.219  [2024-12-13 18:54:29.814602] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:09:58.219  [2024-12-13 18:54:29.814609] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:09:58.219  [2024-12-13 18:54:29.815826] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:09:58.219  [2024-12-13 18:54:29.815943] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:09:58.219  [2024-12-13 18:54:29.816067] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:09:58.219  [2024-12-13 18:54:29.816067] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:09:58.219   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:58.219   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0
00:09:58.219   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:09:58.219   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable
00:09:58.219   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:09:58.219   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
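nvmfappstart above launches the SPDK target inside the namespace with --wait-for-rpc, then waitforlisten blocks until the RPC socket at /var/tmp/spdk.sock answers. A hedged sketch of that startup dance (binary path and flags are the ones in the trace; the rpc.py path is assumed from the repo layout, and the polling loop is a simplification of what waitforlisten really does):

# Hedged sketch of nvmfappstart + waitforlisten above.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
nvmfpid=$!

# Poll the RPC socket until the target responds (simplified stand-in for waitforlisten).
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for (( i = 0; i < 100; i++ )); do
    if "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; then
        break
    fi
    sleep 0.1
done
echo "nvmf_tgt is up as pid $nvmfpid"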
00:09:58.219   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1
00:09:58.219   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:58.219   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:09:58.219   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:58.219   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init
00:09:58.219   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:58.219   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:09:58.219   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:58.219   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:09:58.219   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:58.219   18:54:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:09:58.219  [2024-12-13 18:54:30.005714] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:09:58.219   18:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:58.219   18:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:09:58.219   18:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:58.219   18:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:09:58.219  Malloc0
00:09:58.219   18:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:58.219   18:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:09:58.219   18:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:58.219   18:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:09:58.478   18:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:58.478   18:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:09:58.478   18:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:58.478   18:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:09:58.478   18:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:58.478   18:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:09:58.478   18:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:58.478   18:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:09:58.478  [2024-12-13 18:54:30.062298] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:09:58.478   18:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
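With the target up, the rpc_cmd calls above configure it step by step: bdev options, framework init (needed because the app was started with --wait-for-rpc), the TCP transport, a 64 MiB / 512-byte Malloc bdev, the cnode1 subsystem, its namespace, and a listener on 10.0.0.3:4420. The same sequence issued directly through scripts/rpc.py, as a hedged sketch (rpc_cmd in the test is roughly equivalent, though it reuses a persistent RPC session; the flag meanings in the comments are annotations, not part of the trace):

# Hedged sketch of the RPC sequence traced above.
rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
rpc bdev_set_options -p 5 -c 1                       # tiny bdev_io pool/cache, so I/O has to wait (the point of this test)
rpc framework_start_init                             # finish the startup deferred by --wait-for-rpc
rpc nvmf_create_transport -t tcp -o -u 8192          # TCP transport; -o/-u flags copied verbatim from the trace
rpc bdev_malloc_create 64 512 -b Malloc0             # 64 MiB RAM disk with 512-byte blocks
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420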
00:09:58.478   18:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=82202
00:09:58.478    18:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json
00:09:58.478   18:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256
00:09:58.478   18:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=82204
00:09:58.478    18:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=()
00:09:58.478    18:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config
00:09:58.478    18:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:09:58.478    18:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:09:58.478  {
00:09:58.478    "params": {
00:09:58.478      "name": "Nvme$subsystem",
00:09:58.478      "trtype": "$TEST_TRANSPORT",
00:09:58.478      "traddr": "$NVMF_FIRST_TARGET_IP",
00:09:58.478      "adrfam": "ipv4",
00:09:58.478      "trsvcid": "$NVMF_PORT",
00:09:58.478      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:09:58.478      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:09:58.478      "hdgst": ${hdgst:-false},
00:09:58.478      "ddgst": ${ddgst:-false}
00:09:58.478    },
00:09:58.478    "method": "bdev_nvme_attach_controller"
00:09:58.478  }
00:09:58.478  EOF
00:09:58.478  )")
00:09:58.478    18:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json
00:09:58.479   18:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256
00:09:58.479    18:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=()
00:09:58.479   18:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=82206
00:09:58.479    18:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config
00:09:58.479    18:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:09:58.479    18:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:09:58.479  {
00:09:58.479    "params": {
00:09:58.479      "name": "Nvme$subsystem",
00:09:58.479      "trtype": "$TEST_TRANSPORT",
00:09:58.479      "traddr": "$NVMF_FIRST_TARGET_IP",
00:09:58.479      "adrfam": "ipv4",
00:09:58.479      "trsvcid": "$NVMF_PORT",
00:09:58.479      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:09:58.479      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:09:58.479      "hdgst": ${hdgst:-false},
00:09:58.479      "ddgst": ${ddgst:-false}
00:09:58.479    },
00:09:58.479    "method": "bdev_nvme_attach_controller"
00:09:58.479  }
00:09:58.479  EOF
00:09:58.479  )")
00:09:58.479   18:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256
00:09:58.479     18:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat
00:09:58.479   18:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=82209
00:09:58.479   18:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync
00:09:58.479     18:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat
00:09:58.479    18:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json
00:09:58.479    18:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=()
00:09:58.479    18:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config
00:09:58.479    18:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:09:58.479   18:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256
00:09:58.479    18:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:09:58.479  {
00:09:58.479    "params": {
00:09:58.479      "name": "Nvme$subsystem",
00:09:58.479      "trtype": "$TEST_TRANSPORT",
00:09:58.479      "traddr": "$NVMF_FIRST_TARGET_IP",
00:09:58.479      "adrfam": "ipv4",
00:09:58.479      "trsvcid": "$NVMF_PORT",
00:09:58.479      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:09:58.479      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:09:58.479      "hdgst": ${hdgst:-false},
00:09:58.479      "ddgst": ${ddgst:-false}
00:09:58.479    },
00:09:58.479    "method": "bdev_nvme_attach_controller"
00:09:58.479  }
00:09:58.479  EOF
00:09:58.479  )")
00:09:58.479    18:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq .
00:09:58.479     18:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat
00:09:58.479    18:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq .
00:09:58.479     18:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=,
00:09:58.479     18:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:09:58.479    "params": {
00:09:58.479      "name": "Nvme1",
00:09:58.479      "trtype": "tcp",
00:09:58.479      "traddr": "10.0.0.3",
00:09:58.479      "adrfam": "ipv4",
00:09:58.479      "trsvcid": "4420",
00:09:58.479      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:09:58.479      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:09:58.479      "hdgst": false,
00:09:58.479      "ddgst": false
00:09:58.479    },
00:09:58.479    "method": "bdev_nvme_attach_controller"
00:09:58.479  }'
00:09:58.479     18:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=,
00:09:58.479     18:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:09:58.479    "params": {
00:09:58.479      "name": "Nvme1",
00:09:58.479      "trtype": "tcp",
00:09:58.479      "traddr": "10.0.0.3",
00:09:58.479      "adrfam": "ipv4",
00:09:58.479      "trsvcid": "4420",
00:09:58.479      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:09:58.479      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:09:58.479      "hdgst": false,
00:09:58.479      "ddgst": false
00:09:58.479    },
00:09:58.479    "method": "bdev_nvme_attach_controller"
00:09:58.479  }'
00:09:58.479    18:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json
00:09:58.479    18:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=()
00:09:58.479    18:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config
00:09:58.479    18:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:09:58.479    18:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:09:58.479  {
00:09:58.479    "params": {
00:09:58.479      "name": "Nvme$subsystem",
00:09:58.479      "trtype": "$TEST_TRANSPORT",
00:09:58.479      "traddr": "$NVMF_FIRST_TARGET_IP",
00:09:58.479      "adrfam": "ipv4",
00:09:58.479      "trsvcid": "$NVMF_PORT",
00:09:58.479      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:09:58.479      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:09:58.479      "hdgst": ${hdgst:-false},
00:09:58.479      "ddgst": ${ddgst:-false}
00:09:58.479    },
00:09:58.479    "method": "bdev_nvme_attach_controller"
00:09:58.479  }
00:09:58.479  EOF
00:09:58.479  )")
00:09:58.479    18:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq .
00:09:58.479     18:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=,
00:09:58.479     18:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:09:58.479    "params": {
00:09:58.479      "name": "Nvme1",
00:09:58.479      "trtype": "tcp",
00:09:58.479      "traddr": "10.0.0.3",
00:09:58.479      "adrfam": "ipv4",
00:09:58.479      "trsvcid": "4420",
00:09:58.479      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:09:58.479      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:09:58.479      "hdgst": false,
00:09:58.479      "ddgst": false
00:09:58.479    },
00:09:58.479    "method": "bdev_nvme_attach_controller"
00:09:58.479  }'
00:09:58.479     18:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat
00:09:58.479    18:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq .
00:09:58.479     18:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=,
00:09:58.479     18:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:09:58.479    "params": {
00:09:58.479      "name": "Nvme1",
00:09:58.479      "trtype": "tcp",
00:09:58.479      "traddr": "10.0.0.3",
00:09:58.479      "adrfam": "ipv4",
00:09:58.479      "trsvcid": "4420",
00:09:58.479      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:09:58.479      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:09:58.479      "hdgst": false,
00:09:58.479      "ddgst": false
00:09:58.479    },
00:09:58.479    "method": "bdev_nvme_attach_controller"
00:09:58.479  }'
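Each of the four bdevperf instances above receives its NVMe-oF connection as a JSON config on /dev/fd/63, generated by gen_nvmf_target_json from the printf'd template; only the core mask (-m), instance id (-i) and workload (-w write/read/flush/unmap) differ between them. A hedged sketch of one such launch, the "write" one, with the config written to a regular file instead of a process-substitution fd. The outer subsystems/bdev/config wrapper is an assumption about what gen_nvmf_target_json adds around the printed params; only the bdev_nvme_attach_controller entry is shown verbatim in the trace, and /tmp/nvme1.json is an illustrative path:

# Hedged sketch of one bdevperf launch from the four traced above.
cat > /tmp/nvme1.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 0x10 -i 1 --json /tmp/nvme1.json \
    -q 128 -o 4096 -w write -t 1 -s 256   # queue depth 128, 4 KiB I/Os, 1 s run, 256 MiB of memory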
00:09:58.479  [2024-12-13 18:54:30.126653] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:09:58.479  [2024-12-13 18:54:30.126725] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ]
00:09:58.479   18:54:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 82202
00:09:58.479  [2024-12-13 18:54:30.137944] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:09:58.479  [2024-12-13 18:54:30.138025] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ]
00:09:58.479  [2024-12-13 18:54:30.154855] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:09:58.479  [2024-12-13 18:54:30.154943] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ]
00:09:58.479  [2024-12-13 18:54:30.169346] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:09:58.479  [2024-12-13 18:54:30.169648] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ]
00:09:58.738  [2024-12-13 18:54:30.340543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:58.738  [2024-12-13 18:54:30.376365] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6
00:09:58.738  [2024-12-13 18:54:30.422248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:58.738  [2024-12-13 18:54:30.462693] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4
00:09:58.739  [2024-12-13 18:54:30.496770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:58.739  [2024-12-13 18:54:30.534344] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5
00:09:58.997  Running I/O for 1 seconds...
00:09:58.997  [2024-12-13 18:54:30.575709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:58.997  Running I/O for 1 seconds...
00:09:58.997  [2024-12-13 18:54:30.613000] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7
00:09:58.997  Running I/O for 1 seconds...
00:09:58.997  Running I/O for 1 seconds...
00:09:59.933     194952.00 IOPS,   761.53 MiB/s
00:09:59.933                                                                                                  Latency(us)
00:09:59.933  
[2024-12-13T18:54:31.757Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:09:59.933  Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:09:59.933  	 Nvme1n1             :       1.00  194598.24     760.15       0.00     0.00     654.51     277.41    1809.69
00:09:59.933  
[2024-12-13T18:54:31.757Z]  ===================================================================================================================
00:09:59.933  
[2024-12-13T18:54:31.757Z]  Total                       :             194598.24     760.15       0.00     0.00     654.51     277.41    1809.69
00:09:59.933      10378.00 IOPS,    40.54 MiB/s
00:09:59.933                                                                                                  Latency(us)
00:09:59.933  
[2024-12-13T18:54:31.757Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:09:59.933  Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:09:59.933  	 Nvme1n1             :       1.01   10431.38      40.75       0.00     0.00   12219.71    6523.81   18350.08
00:09:59.933  
[2024-12-13T18:54:31.757Z]  ===================================================================================================================
00:09:59.933  
[2024-12-13T18:54:31.757Z]  Total                       :              10431.38      40.75       0.00     0.00   12219.71    6523.81   18350.08
00:09:59.933       7721.00 IOPS,    30.16 MiB/s
00:09:59.933                                                                                                  Latency(us)
00:09:59.933  
[2024-12-13T18:54:31.757Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:09:59.933  Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:09:59.933  	 Nvme1n1             :       1.01    7771.74      30.36       0.00     0.00   16379.20    8817.57   28955.00
00:09:59.933  
[2024-12-13T18:54:31.757Z]  ===================================================================================================================
00:09:59.933  
[2024-12-13T18:54:31.757Z]  Total                       :               7771.74      30.36       0.00     0.00   16379.20    8817.57   28955.00
00:10:00.193       8954.00 IOPS,    34.98 MiB/s
[2024-12-13T18:54:32.017Z]  18:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 82204
00:10:00.193  
00:10:00.193                                                                                                  Latency(us)
00:10:00.193  
[2024-12-13T18:54:32.017Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:10:00.193  Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:10:00.193  	 Nvme1n1             :       1.01    9037.45      35.30       0.00     0.00   14108.47    3813.00   21805.61
00:10:00.193  
[2024-12-13T18:54:32.017Z]  ===================================================================================================================
00:10:00.193  
[2024-12-13T18:54:32.017Z]  Total                       :               9037.45      35.30       0.00     0.00   14108.47    3813.00   21805.61
00:10:00.193   18:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 82206
00:10:00.193   18:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 82209
00:10:00.193   18:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:10:00.193   18:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:00.193   18:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:10:00.193   18:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:00.193   18:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:10:00.193   18:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini
00:10:00.193   18:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup
00:10:00.193   18:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync
00:10:00.193   18:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:10:00.193   18:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e
00:10:00.193   18:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20}
00:10:00.193   18:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:10:00.193  rmmod nvme_tcp
00:10:00.193  rmmod nvme_fabrics
00:10:00.193  rmmod nvme_keyring
00:10:00.193   18:54:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:10:00.193   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e
00:10:00.193   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0
00:10:00.193   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 82163 ']'
00:10:00.193   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 82163
00:10:00.193   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 82163 ']'
00:10:00.193   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 82163
00:10:00.193    18:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname
00:10:00.193   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:00.193    18:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82163
00:10:00.452  killing process with pid 82163
00:10:00.452   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:00.452   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:00.452   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82163'
00:10:00.452   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 82163
00:10:00.452   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 82163
00:10:00.452   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:10:00.452   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:10:00.452   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:10:00.452   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr
00:10:00.452   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save
00:10:00.452   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:10:00.452   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore
00:10:00.452   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:10:00.452   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:10:00.452   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:10:00.452   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:10:00.452   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:10:00.452   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:10:00.711   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:10:00.711   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:10:00.711   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:10:00.711   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:10:00.711   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:10:00.711   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:10:00.711   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:10:00.711   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:10:00.711   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:10:00.711   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns
00:10:00.711   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:00.711   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:00.711    18:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:00.711   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0
00:10:00.711  
00:10:00.711  real	0m3.478s
00:10:00.711  user	0m13.946s
00:10:00.711  sys	0m2.171s
00:10:00.711   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:00.711  ************************************
00:10:00.711   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:10:00.711  END TEST nvmf_bdev_io_wait
00:10:00.711  ************************************
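The nvmftestfini teardown traced above always runs the same sequence: stop the nvmf_tgt process, unload the kernel initiator modules, strip only the firewall rules the test tagged, and dismantle the veth/bridge/namespace topology. A condensed, hedged sketch of that sequence follows; the function name is illustrative and $nvmfpid stands for the PID the trace killed (82163 here):

cleanup_nvmf_veth() {
    kill "$nvmfpid" && wait "$nvmfpid"                 # stop nvmf_tgt (started by this shell)
    modprobe -r nvme-tcp nvme-fabrics || true          # rmmod nvme_tcp/nvme_fabrics/nvme_keyring
    # drop only the rules tagged SPDK_NVMF, leaving the rest of the host firewall intact
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster
        ip link set "$dev" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns delete nvmf_tgt_ns_spdk                   # removes nvmf_tgt_if/nvmf_tgt_if2 with it
}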
00:10:00.711   18:54:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp
00:10:00.711   18:54:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:10:00.711   18:54:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:00.711   18:54:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:10:00.711  ************************************
00:10:00.711  START TEST nvmf_queue_depth
00:10:00.711  ************************************
00:10:00.711   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp
00:10:00.971  * Looking for test storage...
00:10:00.971  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:10:00.971    18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:10:00.971     18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:10:00.971     18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version
00:10:00.972    18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:10:00.972    18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:10:00.972    18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l
00:10:00.972    18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l
00:10:00.972    18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-:
00:10:00.972    18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1
00:10:00.972    18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-:
00:10:00.972    18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2
00:10:00.972    18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<'
00:10:00.972    18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2
00:10:00.972    18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1
00:10:00.972    18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:10:00.972    18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in
00:10:00.972    18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1
00:10:00.972    18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 ))
00:10:00.972    18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:10:00.972     18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1
00:10:00.972     18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1
00:10:00.972     18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:00.972     18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1
00:10:00.972    18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1
00:10:00.972     18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2
00:10:00.972     18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2
00:10:00.972     18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:10:00.972     18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2
00:10:00.972    18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2
00:10:00.972    18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:10:00.972    18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:10:00.972    18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0
00:10:00.972    18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:10:00.972    18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:10:00.972  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:00.972  		--rc genhtml_branch_coverage=1
00:10:00.972  		--rc genhtml_function_coverage=1
00:10:00.972  		--rc genhtml_legend=1
00:10:00.972  		--rc geninfo_all_blocks=1
00:10:00.972  		--rc geninfo_unexecuted_blocks=1
00:10:00.972  		
00:10:00.972  		'
00:10:00.972    18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:10:00.972  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:00.972  		--rc genhtml_branch_coverage=1
00:10:00.972  		--rc genhtml_function_coverage=1
00:10:00.972  		--rc genhtml_legend=1
00:10:00.972  		--rc geninfo_all_blocks=1
00:10:00.972  		--rc geninfo_unexecuted_blocks=1
00:10:00.972  		
00:10:00.972  		'
00:10:00.972    18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:10:00.972  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:00.972  		--rc genhtml_branch_coverage=1
00:10:00.972  		--rc genhtml_function_coverage=1
00:10:00.972  		--rc genhtml_legend=1
00:10:00.972  		--rc geninfo_all_blocks=1
00:10:00.972  		--rc geninfo_unexecuted_blocks=1
00:10:00.972  		
00:10:00.972  		'
00:10:00.972    18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:10:00.972  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:00.972  		--rc genhtml_branch_coverage=1
00:10:00.972  		--rc genhtml_function_coverage=1
00:10:00.972  		--rc genhtml_legend=1
00:10:00.972  		--rc geninfo_all_blocks=1
00:10:00.972  		--rc geninfo_unexecuted_blocks=1
00:10:00.972  		
00:10:00.972  		'
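The block above is the coverage preamble from autotest_common.sh: it reads the installed lcov version, runs the element-wise cmp_versions helper from scripts/common.sh (split on ".-:", compare component by component, as the trace of lt 1.15 2 shows), and then exports the "--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1" options. A minimal, hedged sketch of that comparison; this is not the repository code and the helper name is illustrative:

# returns success when $1 < $2, comparing dot/dash/colon-separated components numerically
ver_lt() {
    local IFS='.-:'
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for ((i = 0; i < n; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1    # equal is not "less than"
}
ver_lt 1.15 2 && LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'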
00:10:00.972   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:10:00.972     18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s
00:10:00.972    18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:10:00.972    18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:10:00.972    18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:10:00.972    18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:10:00.972    18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:10:00.972    18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:10:00.972    18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:10:00.972    18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:10:00.972    18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:10:00.972     18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:10:00.972    18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:10:00.972    18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:10:00.972    18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:10:00.972    18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:10:00.972    18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:10:00.972    18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:10:00.972    18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:10:00.972     18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob
00:10:00.972     18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:10:00.972     18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:10:00.972     18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:10:00.972      18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:00.972      18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:00.972      18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:00.972      18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH
00:10:00.972      18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:00.972    18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0
00:10:00.972    18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:10:00.972    18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:10:00.972    18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:10:00.972    18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:10:00.972    18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:10:00.973    18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:10:00.973  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:10:00.973    18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:10:00.973    18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:10:00.973    18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0
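The lines above come from sourcing test/nvmf/common.sh: build_nvmf_app_args assembles the NVMF_APP array (adding "-i $NVMF_APP_SHM_ID -e 0xFFFF"), and the "integer expression expected" message at common.sh line 33 is a benign warning from testing an empty variable with -eq. A hedged illustration of the failing test and the usual guard; the flag and option names below are placeholders, not the ones common.sh actually checks:

[ '' -eq 1 ]                                                 # reproduces "integer expression expected"
[ "${SOME_FLAG:-0}" -eq 1 ] && NVMF_APP+=(--some-option)     # guarded form, placeholder names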
00:10:00.973   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64
00:10:00.973   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512
00:10:00.973   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:10:00.973   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit
00:10:00.973   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:10:00.973   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:10:00.973   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs
00:10:00.973   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no
00:10:00.973   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns
00:10:00.973   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:00.973   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:00.973    18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:00.973   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:10:00.973   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:10:00.973   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:10:00.973   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:10:00.973   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:10:00.973   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init
00:10:00.973   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:10:00.973   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:10:00.973   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:10:00.973   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:10:00.973   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:10:00.973   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:10:00.973   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:10:00.973   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:10:00.973   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:10:00.973   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:10:00.973   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:10:00.973   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:10:00.973   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:10:00.973   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:10:00.973   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:10:00.973   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:10:00.973   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:10:00.973  Cannot find device "nvmf_init_br"
00:10:00.973   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true
00:10:00.973   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:10:00.973  Cannot find device "nvmf_init_br2"
00:10:00.973   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true
00:10:00.973   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:10:00.973  Cannot find device "nvmf_tgt_br"
00:10:00.973   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true
00:10:00.973   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:10:00.973  Cannot find device "nvmf_tgt_br2"
00:10:00.973   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true
00:10:00.973   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:10:00.973  Cannot find device "nvmf_init_br"
00:10:00.973   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true
00:10:00.973   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:10:00.973  Cannot find device "nvmf_init_br2"
00:10:00.973   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true
00:10:00.973   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:10:01.232  Cannot find device "nvmf_tgt_br"
00:10:01.232   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true
00:10:01.232   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:10:01.232  Cannot find device "nvmf_tgt_br2"
00:10:01.233   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true
00:10:01.233   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:10:01.233  Cannot find device "nvmf_br"
00:10:01.233   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true
00:10:01.233   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:10:01.233  Cannot find device "nvmf_init_if"
00:10:01.233   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true
00:10:01.233   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:10:01.233  Cannot find device "nvmf_init_if2"
00:10:01.233   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true
00:10:01.233   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:10:01.233  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:10:01.233   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true
00:10:01.233   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:10:01.233  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:10:01.233   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true
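Every teardown command in the block above is paired with "|| true" on the same script line, so the "Cannot find device" and "Cannot open network namespace" messages are expected: the fixture clears any leftovers from a previous run before building a fresh topology, and a missing device is simply not an error. The pattern, as a hedged one-liner:

ip link delete nvmf_br type bridge || true    # idempotent pre-clean; ignore "Cannot find device"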
00:10:01.233   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:10:01.233   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:10:01.233   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:10:01.233   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:10:01.233   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:10:01.233   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:10:01.233   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:10:01.233   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:10:01.233   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:10:01.233   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:10:01.233   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:10:01.233   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:10:01.233   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:10:01.233   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:10:01.233   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:10:01.233   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:10:01.233   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:10:01.233   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:10:01.233   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:10:01.233   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:10:01.233   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:10:01.233   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:10:01.233   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:10:01.233   18:54:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:10:01.233   18:54:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:10:01.233   18:54:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:10:01.233   18:54:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:10:01.233   18:54:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:10:01.233   18:54:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:10:01.233   18:54:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:10:01.233   18:54:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:10:01.233   18:54:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
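The setup just traced builds the test network: a dedicated namespace (nvmf_tgt_ns_spdk) for the SPDK target, veth pairs whose target ends are moved into that namespace, a bridge (nvmf_br) joining all the host-side ends, 10.0.0.1/2 on the initiator interfaces and 10.0.0.3/4 on the target interfaces, and iptables ACCEPT rules for TCP port 4420 tagged with an SPDK_NVMF comment so teardown can strip exactly these rules. A condensed, hedged sketch showing one interface per side (the log creates two of each; error handling and the full comment text omitted):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side pair
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
         -m comment --comment SPDK_NVMF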
00:10:01.492   18:54:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:10:01.492  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:10:01.492  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms
00:10:01.492  
00:10:01.492  --- 10.0.0.3 ping statistics ---
00:10:01.492  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:01.492  rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms
00:10:01.492   18:54:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:10:01.492  PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:10:01.492  64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms
00:10:01.492  
00:10:01.492  --- 10.0.0.4 ping statistics ---
00:10:01.492  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:01.492  rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms
00:10:01.492   18:54:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:10:01.492  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:10:01.492  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms
00:10:01.492  
00:10:01.492  --- 10.0.0.1 ping statistics ---
00:10:01.492  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:01.492  rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms
00:10:01.492   18:54:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:10:01.492  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:10:01.492  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms
00:10:01.492  
00:10:01.492  --- 10.0.0.2 ping statistics ---
00:10:01.492  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:01.492  rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms
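The four pings above validate both directions across the bridge before any NVMe traffic is attempted: host to the target namespace (10.0.0.3, 10.0.0.4) and namespace back to the host (10.0.0.1, 10.0.0.2). The same check as a loop, as a hedged sketch:

for ip in 10.0.0.3 10.0.0.4; do ping -c 1 "$ip"; done                                  # host -> netns
for ip in 10.0.0.1 10.0.0.2; do ip netns exec nvmf_tgt_ns_spdk ping -c 1 "$ip"; done   # netns -> host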
00:10:01.492   18:54:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:10:01.492   18:54:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0
00:10:01.492   18:54:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:10:01.492   18:54:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:10:01.492   18:54:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:10:01.492   18:54:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:10:01.492   18:54:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:10:01.492   18:54:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:10:01.492   18:54:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:10:01.492   18:54:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2
00:10:01.492   18:54:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:10:01.493   18:54:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable
00:10:01.493   18:54:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:10:01.493   18:54:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=82467
00:10:01.493   18:54:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:10:01.493   18:54:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 82467
00:10:01.493   18:54:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 82467 ']'
00:10:01.493   18:54:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:01.493   18:54:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:01.493   18:54:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:01.493  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:01.493   18:54:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:01.493   18:54:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:10:01.493  [2024-12-13 18:54:33.166307] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:10:01.493  [2024-12-13 18:54:33.166421] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:10:01.752  [2024-12-13 18:54:33.323416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:01.752  [2024-12-13 18:54:33.361184] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:10:01.752  [2024-12-13 18:54:33.361263] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:10:01.752  [2024-12-13 18:54:33.361278] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:10:01.752  [2024-12-13 18:54:33.361289] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:10:01.752  [2024-12-13 18:54:33.361298] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:10:01.752  [2024-12-13 18:54:33.361717] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
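nvmfappstart then launches the target inside the namespace and waits for its RPC socket: the trace shows nvmf_tgt started with "-i 0 -e 0xFFFF -m 0x2" (shared-memory id 0, all tracepoint groups, core mask 0x2, hence "Reactor started on core 1") and waitforlisten blocking until pid 82467 answers on /var/tmp/spdk.sock. A hedged sketch of the same launch-and-wait step; the polling loop here merely stands in for waitforlisten's own retry logic:

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    sleep 0.5    # keep polling until the RPC server is listening
done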
00:10:02.378   18:54:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:02.378   18:54:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0
00:10:02.378   18:54:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:10:02.378   18:54:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable
00:10:02.378   18:54:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:10:02.378   18:54:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:10:02.378   18:54:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:10:02.378   18:54:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:02.378   18:54:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:10:02.378  [2024-12-13 18:54:34.117814] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:10:02.378   18:54:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:02.378   18:54:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:10:02.378   18:54:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:02.378   18:54:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:10:02.378  Malloc0
00:10:02.378   18:54:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:02.378   18:54:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:10:02.379   18:54:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:02.379   18:54:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:10:02.661   18:54:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:02.661   18:54:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:10:02.661   18:54:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:02.661   18:54:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:10:02.661   18:54:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:02.661   18:54:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:10:02.661   18:54:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:02.661   18:54:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:10:02.661  [2024-12-13 18:54:34.177472] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
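With the target up, the test provisions it over RPC: a TCP transport with the options shown, a 64 MiB malloc bdev with 512-byte blocks, subsystem cnode1 with that bdev as a namespace, and a TCP listener on 10.0.0.3:4420. The rpc_cmd calls in the trace wrap scripts/rpc.py; a hedged sketch of the same sequence against the default /var/tmp/spdk.sock:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420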
00:10:02.661   18:54:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:02.661   18:54:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=82517
00:10:02.661  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:10:02.661   18:54:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10
00:10:02.661   18:54:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:10:02.661   18:54:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 82517 /var/tmp/bdevperf.sock
00:10:02.661   18:54:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 82517 ']'
00:10:02.661   18:54:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:10:02.661   18:54:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:02.661   18:54:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:10:02.661   18:54:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:02.661   18:54:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:10:02.661  [2024-12-13 18:54:34.246758] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:10:02.661  [2024-12-13 18:54:34.246867] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82517 ]
00:10:02.661  [2024-12-13 18:54:34.399675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:02.661  [2024-12-13 18:54:34.440543] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:10:02.920   18:54:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:02.920   18:54:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0
00:10:02.920   18:54:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:10:02.920   18:54:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:02.920   18:54:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:10:02.920  NVMe0n1
00:10:02.920   18:54:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:02.920   18:54:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:10:03.178  Running I/O for 10 seconds...
00:10:05.050       9226.00 IOPS,    36.04 MiB/s
[2024-12-13T18:54:37.808Z]      9780.50 IOPS,    38.21 MiB/s
[2024-12-13T18:54:39.190Z]     10172.00 IOPS,    39.73 MiB/s
[2024-12-13T18:54:40.126Z]     10249.50 IOPS,    40.04 MiB/s
[2024-12-13T18:54:41.059Z]     10439.80 IOPS,    40.78 MiB/s
[2024-12-13T18:54:41.993Z]     10556.50 IOPS,    41.24 MiB/s
[2024-12-13T18:54:42.927Z]     10605.29 IOPS,    41.43 MiB/s
[2024-12-13T18:54:43.861Z]     10672.00 IOPS,    41.69 MiB/s
[2024-12-13T18:54:44.796Z]     10725.00 IOPS,    41.89 MiB/s
[2024-12-13T18:54:45.054Z]     10772.60 IOPS,    42.08 MiB/s
00:10:13.230                                                                                                  Latency(us)
00:10:13.230  
[2024-12-13T18:54:45.054Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:10:13.230  Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:10:13.230  	 Verification LBA range: start 0x0 length 0x4000
00:10:13.230  	 NVMe0n1             :      10.07   10793.06      42.16       0.00     0.00   94466.21   17396.83   71493.82
00:10:13.230  
[2024-12-13T18:54:45.054Z]  ===================================================================================================================
00:10:13.230  
[2024-12-13T18:54:45.054Z]  Total                       :              10793.06      42.16       0.00     0.00   94466.21   17396.83   71493.82
00:10:13.230  {
00:10:13.230    "results": [
00:10:13.230      {
00:10:13.230        "job": "NVMe0n1",
00:10:13.230        "core_mask": "0x1",
00:10:13.230        "workload": "verify",
00:10:13.230        "status": "finished",
00:10:13.230        "verify_range": {
00:10:13.230          "start": 0,
00:10:13.231          "length": 16384
00:10:13.231        },
00:10:13.231        "queue_depth": 1024,
00:10:13.231        "io_size": 4096,
00:10:13.231        "runtime": 10.067674,
00:10:13.231        "iops": 10793.059052170342,
00:10:13.231        "mibps": 42.1603869225404,
00:10:13.231        "io_failed": 0,
00:10:13.231        "io_timeout": 0,
00:10:13.231        "avg_latency_us": 94466.21456831129,
00:10:13.231        "min_latency_us": 17396.82909090909,
00:10:13.231        "max_latency_us": 71493.81818181818
00:10:13.231      }
00:10:13.231    ],
00:10:13.231    "core_count": 1
00:10:13.231  }
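The measurement itself runs in a second SPDK application: bdevperf is started with its own RPC socket (-z keeps it waiting for RPC configuration), an NVMe-oF controller is attached to it over TCP pointing at the subsystem just created, and bdevperf.py perform_tests drives 4096-byte verify I/O at queue depth 1024 for 10 seconds, returning the JSON summary printed above (about 10,793 IOPS / 42.2 MiB/s in this run). A hedged sketch of that step, using the sockets and NQN from this log:

bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
$bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests    # prints the JSON summary shown above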
00:10:13.231   18:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 82517
00:10:13.231   18:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 82517 ']'
00:10:13.231   18:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 82517
00:10:13.231    18:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname
00:10:13.231   18:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:13.231    18:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82517
00:10:13.231   18:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:13.231   18:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:13.231  killing process with pid 82517
00:10:13.231   18:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82517'
00:10:13.231   18:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 82517
00:10:13.231  Received shutdown signal, test time was about 10.000000 seconds
00:10:13.231  
00:10:13.231                                                                                                  Latency(us)
00:10:13.231  
[2024-12-13T18:54:45.055Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:10:13.231  
[2024-12-13T18:54:45.055Z]  ===================================================================================================================
00:10:13.231  
[2024-12-13T18:54:45.055Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:10:13.231   18:54:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 82517
00:10:13.497   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:10:13.497   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini
00:10:13.497   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup
00:10:13.497   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync
00:10:13.497   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:10:13.497   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e
00:10:13.497   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20}
00:10:13.497   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:10:13.497  rmmod nvme_tcp
00:10:13.497  rmmod nvme_fabrics
00:10:13.497  rmmod nvme_keyring
00:10:13.497   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:10:13.497   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e
00:10:13.497   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0
00:10:13.497   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 82467 ']'
00:10:13.497   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 82467
00:10:13.497   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 82467 ']'
00:10:13.497   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 82467
00:10:13.497    18:54:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname
00:10:13.497   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:13.497    18:54:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82467
00:10:13.497   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:10:13.497   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:10:13.497   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82467'
00:10:13.497  killing process with pid 82467
00:10:13.497   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 82467
00:10:13.497   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 82467
00:10:13.757   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:10:13.757   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:10:13.757   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:10:13.757   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr
00:10:13.757   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save
00:10:13.757   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:10:13.757   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore
00:10:13.757   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:10:13.757   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:10:13.757   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:10:13.757   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:10:13.757   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:10:13.757   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:10:13.757   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:10:13.757   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:10:13.757   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:10:13.757   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:10:13.757   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:10:13.757   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:10:13.757   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:10:14.015   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:10:14.015   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:10:14.015   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns
00:10:14.015   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:14.015   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:14.015    18:54:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:14.015   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0
00:10:14.015  
00:10:14.015  real	0m13.151s
00:10:14.015  user	0m22.017s
00:10:14.015  sys	0m2.083s
00:10:14.015   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:14.015   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:10:14.015  ************************************
00:10:14.015  END TEST nvmf_queue_depth
00:10:14.015  ************************************
00:10:14.015   18:54:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp
00:10:14.015   18:54:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:10:14.015   18:54:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:14.015   18:54:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:10:14.015  ************************************
00:10:14.015  START TEST nvmf_target_multipath
00:10:14.015  ************************************
00:10:14.015   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp
00:10:14.015  * Looking for test storage...
00:10:14.015  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:10:14.015    18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:10:14.015     18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version
00:10:14.015     18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:10:14.275    18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:10:14.275    18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:10:14.275    18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l
00:10:14.275    18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l
00:10:14.275    18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-:
00:10:14.275    18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1
00:10:14.275    18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-:
00:10:14.275    18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2
00:10:14.275    18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<'
00:10:14.275    18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2
00:10:14.275    18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1
00:10:14.275    18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:10:14.276    18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in
00:10:14.276    18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1
00:10:14.276    18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 ))
00:10:14.276    18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:10:14.276     18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1
00:10:14.276     18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1
00:10:14.276     18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:14.276     18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1
00:10:14.276    18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1
00:10:14.276     18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2
00:10:14.276     18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2
00:10:14.276     18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:10:14.276     18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2
00:10:14.276    18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2
00:10:14.276    18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:10:14.276    18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:10:14.276    18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0
00:10:14.276    18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:10:14.276    18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:10:14.276  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:14.276  		--rc genhtml_branch_coverage=1
00:10:14.276  		--rc genhtml_function_coverage=1
00:10:14.276  		--rc genhtml_legend=1
00:10:14.276  		--rc geninfo_all_blocks=1
00:10:14.276  		--rc geninfo_unexecuted_blocks=1
00:10:14.276  		
00:10:14.276  		'
00:10:14.276    18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:10:14.276  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:14.276  		--rc genhtml_branch_coverage=1
00:10:14.276  		--rc genhtml_function_coverage=1
00:10:14.276  		--rc genhtml_legend=1
00:10:14.276  		--rc geninfo_all_blocks=1
00:10:14.276  		--rc geninfo_unexecuted_blocks=1
00:10:14.276  		
00:10:14.276  		'
00:10:14.276    18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:10:14.276  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:14.276  		--rc genhtml_branch_coverage=1
00:10:14.276  		--rc genhtml_function_coverage=1
00:10:14.276  		--rc genhtml_legend=1
00:10:14.276  		--rc geninfo_all_blocks=1
00:10:14.276  		--rc geninfo_unexecuted_blocks=1
00:10:14.276  		
00:10:14.276  		'
00:10:14.276    18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:10:14.276  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:14.276  		--rc genhtml_branch_coverage=1
00:10:14.276  		--rc genhtml_function_coverage=1
00:10:14.276  		--rc genhtml_legend=1
00:10:14.276  		--rc geninfo_all_blocks=1
00:10:14.276  		--rc geninfo_unexecuted_blocks=1
00:10:14.276  		
00:10:14.276  		'
00:10:14.276   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:10:14.276     18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s
00:10:14.276    18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:10:14.276    18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:10:14.276    18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:10:14.276    18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:10:14.276    18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:10:14.276    18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:10:14.276    18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:10:14.276    18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:10:14.276    18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:10:14.276     18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:10:14.276    18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:10:14.276    18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:10:14.276    18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:10:14.276    18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:10:14.276    18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:10:14.276    18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:10:14.276    18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:10:14.276     18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob
00:10:14.276     18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:10:14.276     18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:10:14.276     18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:10:14.276      18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:14.276      18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:14.276      18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:14.276      18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH
00:10:14.276      18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:14.276    18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0
00:10:14.276    18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:10:14.276    18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:10:14.276    18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:10:14.276    18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:10:14.276    18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:10:14.276    18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:10:14.276  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:10:14.276    18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:10:14.276    18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:10:14.276    18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0
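The "integer expression expected" message above is nvmf/common.sh line 33 running '[' '' -eq 1 ']' while the variable it guards is empty; the test simply fails and the script continues, so the warning is harmless here. A defensive sketch that would silence it (the variable name is hypothetical, it is not visible in this trace):
  # default the flag to 0 before the numeric comparison (illustrative only)
  if [ "${SOME_FLAG:-0}" -eq 1 ]; then
      : # whatever common.sh appends to NVMF_APP when the flag is set
  fi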
00:10:14.276   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64
00:10:14.276   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:10:14.276   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1
00:10:14.276   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:10:14.276   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit
00:10:14.276   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:10:14.276   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:10:14.276   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs
00:10:14.276   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no
00:10:14.276   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns
00:10:14.276   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:14.276   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:14.276    18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:14.276   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:10:14.276   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:10:14.276   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:10:14.277   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:10:14.277   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:10:14.277   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init
00:10:14.277   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:10:14.277   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:10:14.277   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:10:14.277   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:10:14.277   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:10:14.277   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:10:14.277   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:10:14.277   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:10:14.277   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:10:14.277   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:10:14.277   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:10:14.277   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:10:14.277   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:10:14.277   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:10:14.277   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:10:14.277   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:10:14.277   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:10:14.277  Cannot find device "nvmf_init_br"
00:10:14.277   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true
00:10:14.277   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:10:14.277  Cannot find device "nvmf_init_br2"
00:10:14.277   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true
00:10:14.277   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:10:14.277  Cannot find device "nvmf_tgt_br"
00:10:14.277   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true
00:10:14.277   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:10:14.277  Cannot find device "nvmf_tgt_br2"
00:10:14.277   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true
00:10:14.277   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:10:14.277  Cannot find device "nvmf_init_br"
00:10:14.277   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true
00:10:14.277   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:10:14.277  Cannot find device "nvmf_init_br2"
00:10:14.277   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true
00:10:14.277   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:10:14.277  Cannot find device "nvmf_tgt_br"
00:10:14.277   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true
00:10:14.277   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:10:14.277  Cannot find device "nvmf_tgt_br2"
00:10:14.277   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true
00:10:14.277   18:54:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:10:14.277  Cannot find device "nvmf_br"
00:10:14.277   18:54:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true
00:10:14.277   18:54:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:10:14.277  Cannot find device "nvmf_init_if"
00:10:14.277   18:54:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # true
00:10:14.277   18:54:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:10:14.277  Cannot find device "nvmf_init_if2"
00:10:14.277   18:54:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true
00:10:14.277   18:54:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:10:14.277  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:10:14.277   18:54:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true
00:10:14.277   18:54:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:10:14.277  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:10:14.277   18:54:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true
00:10:14.277   18:54:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:10:14.277   18:54:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:10:14.277   18:54:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:10:14.277   18:54:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:10:14.277   18:54:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:10:14.277   18:54:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:10:14.277   18:54:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:10:14.536   18:54:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:10:14.536   18:54:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:10:14.536   18:54:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:10:14.536   18:54:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:10:14.536   18:54:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:10:14.536   18:54:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:10:14.536   18:54:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:10:14.536   18:54:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:10:14.536   18:54:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:10:14.536   18:54:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:10:14.536   18:54:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:10:14.536   18:54:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:10:14.536   18:54:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:10:14.536   18:54:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:10:14.536   18:54:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:10:14.536   18:54:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:10:14.536   18:54:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:10:14.536   18:54:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:10:14.536   18:54:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:10:14.536   18:54:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:10:14.536   18:54:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:10:14.536   18:54:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:10:14.536   18:54:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:10:14.536   18:54:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:10:14.536   18:54:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
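At this point nvmf_veth_init has built the virtual test network: four veth pairs, with the target-side ends (nvmf_tgt_if, nvmf_tgt_if2) moved into the nvmf_tgt_ns_spdk namespace and addressed 10.0.0.3/24 and 10.0.0.4/24, the initiator-side ends (nvmf_init_if, nvmf_init_if2) left in the default namespace with 10.0.0.1/24 and 10.0.0.2/24, and all peer ends enslaved to the nvmf_br bridge; the iptables rules then accept TCP port 4420 on the initiator interfaces and allow forwarding across the bridge. A condensed sketch of the same topology, one pair per side, mirroring the traced commands (link-up steps omitted):
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br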
00:10:14.536   18:54:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:10:14.536  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:10:14.536  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.096 ms
00:10:14.536  
00:10:14.536  --- 10.0.0.3 ping statistics ---
00:10:14.536  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:14.536  rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms
00:10:14.536   18:54:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:10:14.536  PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:10:14.536  64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms
00:10:14.536  
00:10:14.536  --- 10.0.0.4 ping statistics ---
00:10:14.536  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:14.536  rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms
00:10:14.536   18:54:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:10:14.536  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:10:14.536  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms
00:10:14.536  
00:10:14.536  --- 10.0.0.1 ping statistics ---
00:10:14.536  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:14.536  rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms
00:10:14.536   18:54:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:10:14.536  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:10:14.536  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.034 ms
00:10:14.536  
00:10:14.536  --- 10.0.0.2 ping statistics ---
00:10:14.537  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:14.537  rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms
00:10:14.537   18:54:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:10:14.537   18:54:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0
00:10:14.537   18:54:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:10:14.537   18:54:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:10:14.537   18:54:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:10:14.537   18:54:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:10:14.537   18:54:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:10:14.537   18:54:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:10:14.537   18:54:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:10:14.537   18:54:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']'
00:10:14.537   18:54:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']'
00:10:14.537   18:54:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF
00:10:14.537   18:54:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:10:14.537   18:54:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable
00:10:14.537   18:54:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x
00:10:14.537   18:54:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=82893
00:10:14.537   18:54:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:10:14.537   18:54:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 82893
00:10:14.537   18:54:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 82893 ']'
00:10:14.537   18:54:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:14.537   18:54:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:14.537  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:14.537   18:54:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:14.537   18:54:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:14.537   18:54:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x
00:10:14.795  [2024-12-13 18:54:46.388904] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:10:14.795  [2024-12-13 18:54:46.389002] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:10:14.795  [2024-12-13 18:54:46.537758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:10:14.795  [2024-12-13 18:54:46.575352] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:10:14.795  [2024-12-13 18:54:46.575423] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:10:14.795  [2024-12-13 18:54:46.575434] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:10:14.795  [2024-12-13 18:54:46.575441] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:10:14.795  [2024-12-13 18:54:46.575448] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:10:14.795  [2024-12-13 18:54:46.576700] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:10:14.795  [2024-12-13 18:54:46.577201] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:10:14.796  [2024-12-13 18:54:46.577462] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:10:14.796  [2024-12-13 18:54:46.577473] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:10:15.054   18:54:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:15.054   18:54:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0
00:10:15.054   18:54:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:10:15.054   18:54:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable
00:10:15.054   18:54:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x
00:10:15.054   18:54:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
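nvmfappstart launches the SPDK target inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF, traced at common.sh@508 above) and waitforlisten blocks until the RPC socket answers (max_retries=100 per the trace). A minimal sketch of that wait, assuming the default /var/tmp/spdk.sock socket:
  # poll the SPDK RPC socket until the target is ready (sketch; the real helper retries ~100 times)
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done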
00:10:15.054   18:54:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:10:15.312  [2024-12-13 18:54:47.046814] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:10:15.312   18:54:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:10:15.569  Malloc0
00:10:15.569   18:54:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r
00:10:15.827   18:54:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:10:16.085   18:54:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:10:16.343  [2024-12-13 18:54:48.127977] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:10:16.343   18:54:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420
00:10:16.601  [2024-12-13 18:54:48.388244] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 ***
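The target is now configured for multipath: a TCP transport with 8192 shared buffers, a 64 MiB / 512 B Malloc0 bdev, subsystem nqn.2016-06.io.spdk:cnode1 with ANA reporting enabled (the -r flag), and two listeners on 10.0.0.3:4420 and 10.0.0.4:4420. Condensed from the rpc.py calls traced above:
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420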
00:10:16.601   18:54:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid=8f716199-a3ae-4f70-9a3a-0556e5b7497a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G
00:10:16.859   18:54:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid=8f716199-a3ae-4f70-9a3a-0556e5b7497a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G
00:10:17.117   18:54:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME
00:10:17.117   18:54:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0
00:10:17.117   18:54:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:10:17.117   18:54:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:10:17.117   18:54:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2
00:10:19.023   18:54:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:10:19.297    18:54:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:10:19.297    18:54:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:10:19.297   18:54:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:10:19.297   18:54:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:10:19.297   18:54:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0
00:10:19.297    18:54:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME
00:10:19.297    18:54:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s
00:10:19.297    18:54:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/*
00:10:19.297    18:54:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]]
00:10:19.297    18:54:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]]
00:10:19.297    18:54:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0
00:10:19.297    18:54:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0
00:10:19.297   18:54:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0
00:10:19.297   18:54:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*)
00:10:19.297   18:54:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}")
00:10:19.297   18:54:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 ))
00:10:19.297   18:54:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1
00:10:19.297   18:54:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1
00:10:19.297   18:54:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized
00:10:19.297   18:54:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized
00:10:19.297   18:54:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20
00:10:19.297   18:54:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state
00:10:19.298   18:54:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]]
00:10:19.298   18:54:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]]
00:10:19.298   18:54:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized
00:10:19.298   18:54:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized
00:10:19.298   18:54:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20
00:10:19.298   18:54:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state
00:10:19.298   18:54:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]]
00:10:19.298   18:54:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]]
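check_ana_state (target/multipath.sh@18-26, traced above) simply polls the per-path ANA state that the kernel exposes in sysfs until it matches the expected value or a ~20 s timeout expires. An equivalent standalone sketch of the same loop:
  # wait for a multipath path to reach the expected ANA state (mirrors check_ana_state)
  check_ana_state() {
      local path=$1 want=$2 timeout=20
      while [[ ! -e /sys/block/$path/ana_state || $(< /sys/block/$path/ana_state) != "$want" ]]; do
          (( timeout-- == 0 )) && return 1
          sleep 1
      done
  }
  check_ana_state nvme0c0n1 optimized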
00:10:19.298   18:54:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa
00:10:19.298   18:54:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=83023
00:10:19.298   18:54:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1
00:10:19.298   18:54:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v
00:10:19.298  [global]
00:10:19.298  thread=1
00:10:19.298  invalidate=1
00:10:19.298  rw=randrw
00:10:19.298  time_based=1
00:10:19.298  runtime=6
00:10:19.298  ioengine=libaio
00:10:19.298  direct=1
00:10:19.298  bs=4096
00:10:19.298  iodepth=128
00:10:19.298  norandommap=0
00:10:19.298  numjobs=1
00:10:19.298  
00:10:19.298  verify_dump=1
00:10:19.298  verify_backlog=512
00:10:19.298  verify_state_save=0
00:10:19.298  do_verify=1
00:10:19.298  verify=crc32c-intel
00:10:19.298  [job0]
00:10:19.298  filename=/dev/nvme0n1
00:10:19.298  Could not set queue depth (nvme0n1)
00:10:19.298  job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:10:19.298  fio-3.35
00:10:19.298  Starting 1 thread
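The job file above (generated by scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v) drives 4 KiB mixed random I/O at queue depth 128 through libaio against the multipath device /dev/nvme0n1 for 6 seconds, with crc32c data verification; the "Could not set queue depth" warning is typically harmless and just means fio could not raise the block device's queue setting. Roughly the same workload as a one-line fio invocation (sketch):
  fio --name=job0 --filename=/dev/nvme0n1 --rw=randrw --bs=4096 --iodepth=128 \
      --ioengine=libaio --direct=1 --time_based --runtime=6 \
      --verify=crc32c-intel --verify_backlog=512 --do_verify=1 --numjobs=1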
00:10:20.234   18:54:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible
00:10:20.493   18:54:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized
00:10:20.752   18:54:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible
00:10:20.752   18:54:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible
00:10:20.752   18:54:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20
00:10:20.752   18:54:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state
00:10:20.752   18:54:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]]
00:10:20.752   18:54:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]]
00:10:20.752   18:54:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized
00:10:20.752   18:54:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized
00:10:20.752   18:54:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20
00:10:20.752   18:54:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state
00:10:20.752   18:54:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]]
00:10:20.752   18:54:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]]
00:10:20.752   18:54:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s
00:10:21.686   18:54:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 ))
00:10:21.686   18:54:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]]
00:10:21.686   18:54:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]]
00:10:21.686   18:54:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
00:10:22.253   18:54:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible
00:10:22.512   18:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized
00:10:22.512   18:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized
00:10:22.512   18:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20
00:10:22.512   18:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state
00:10:22.512   18:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]]
00:10:22.512   18:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]]
00:10:22.512   18:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible
00:10:22.512   18:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible
00:10:22.512   18:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20
00:10:22.512   18:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state
00:10:22.512   18:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]]
00:10:22.512   18:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]]
00:10:22.512   18:54:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s
00:10:23.453   18:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 ))
00:10:23.453   18:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]]
00:10:23.453   18:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]]
00:10:23.453   18:54:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 83023
00:10:25.989  
00:10:25.989  job0: (groupid=0, jobs=1): err= 0: pid=83044: Fri Dec 13 18:54:57 2024
00:10:25.989    read: IOPS=10.6k, BW=41.4MiB/s (43.4MB/s)(249MiB/6006msec)
00:10:25.989      slat (usec): min=2, max=6547, avg=56.04, stdev=253.49
00:10:25.989      clat (usec): min=1398, max=18933, avg=8288.70, stdev=1320.03
00:10:25.989       lat (usec): min=2455, max=18978, avg=8344.74, stdev=1334.86
00:10:25.989      clat percentiles (usec):
00:10:25.989       |  1.00th=[ 4948],  5.00th=[ 6456], 10.00th=[ 7046], 20.00th=[ 7439],
00:10:25.989       | 30.00th=[ 7635], 40.00th=[ 7832], 50.00th=[ 8094], 60.00th=[ 8455],
00:10:25.989       | 70.00th=[ 8717], 80.00th=[ 9241], 90.00th=[ 9896], 95.00th=[10552],
00:10:25.989       | 99.00th=[12256], 99.50th=[13173], 99.90th=[14877], 99.95th=[15795],
00:10:25.989       | 99.99th=[15926]
00:10:25.989     bw (  KiB/s): min=   48, max=29160, per=51.55%, avg=21866.18, stdev=8691.30, samples=11
00:10:25.990     iops        : min=   12, max= 7290, avg=5466.55, stdev=2172.83, samples=11
00:10:25.990    write: IOPS=6403, BW=25.0MiB/s (26.2MB/s)(130MiB/5196msec); 0 zone resets
00:10:25.990      slat (usec): min=3, max=1921, avg=63.43, stdev=166.77
00:10:25.990      clat (usec): min=1316, max=14657, avg=7031.23, stdev=997.61
00:10:25.990       lat (usec): min=1341, max=14694, avg=7094.66, stdev=999.31
00:10:25.990      clat percentiles (usec):
00:10:25.990       |  1.00th=[ 3949],  5.00th=[ 5342], 10.00th=[ 6063], 20.00th=[ 6456],
00:10:25.990       | 30.00th=[ 6652], 40.00th=[ 6915], 50.00th=[ 7111], 60.00th=[ 7242],
00:10:25.990       | 70.00th=[ 7439], 80.00th=[ 7635], 90.00th=[ 7963], 95.00th=[ 8291],
00:10:25.990       | 99.00th=[10028], 99.50th=[10683], 99.90th=[12518], 99.95th=[13304],
00:10:25.990       | 99.99th=[14484]
00:10:25.990     bw (  KiB/s): min=   48, max=29336, per=85.68%, avg=21945.45, stdev=8715.04, samples=11
00:10:25.990     iops        : min=   12, max= 7334, avg=5486.36, stdev=2178.76, samples=11
00:10:25.990    lat (msec)   : 2=0.01%, 4=0.42%, 10=93.37%, 20=6.20%
00:10:25.990    cpu          : usr=5.61%, sys=21.00%, ctx=6280, majf=0, minf=127
00:10:25.990    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7%
00:10:25.990       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:10:25.990       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:10:25.990       issued rwts: total=63694,33271,0,0 short=0,0,0,0 dropped=0,0,0,0
00:10:25.990       latency   : target=0, window=0, percentile=100.00%, depth=128
00:10:25.990  
00:10:25.990  Run status group 0 (all jobs):
00:10:25.990     READ: bw=41.4MiB/s (43.4MB/s), 41.4MiB/s-41.4MiB/s (43.4MB/s-43.4MB/s), io=249MiB (261MB), run=6006-6006msec
00:10:25.990    WRITE: bw=25.0MiB/s (26.2MB/s), 25.0MiB/s-25.0MiB/s (26.2MB/s-26.2MB/s), io=130MiB (136MB), run=5196-5196msec
00:10:25.990  
00:10:25.990  Disk stats (read/write):
00:10:25.990    nvme0n1: ios=62761/32604, merge=0/0, ticks=489208/214944, in_queue=704152, util=98.65%
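The first (numa-policy) run completes cleanly: err=0 and 98.65% device utilization, and the issued I/O counts line up with the reported transfer sizes. The very low minimum bandwidth samples (48 KiB/s against a ~29 MiB/s maximum) most likely correspond to the window in which the active path had been set inaccessible before I/O failed over to the other listener. Quick consistency check on the read side:
  # 63694 read I/Os × 4096 B ≈ 261 MB, matching "io=249MiB (261MB)" above
  echo $(( 63694 * 4096 ))    # 260890624 bytes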
00:10:25.990   18:54:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized
00:10:25.990   18:54:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized
00:10:25.990   18:54:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized
00:10:25.990   18:54:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized
00:10:25.990   18:54:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20
00:10:25.990   18:54:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state
00:10:25.990   18:54:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]]
00:10:25.990   18:54:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]]
00:10:25.990   18:54:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized
00:10:25.990   18:54:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized
00:10:25.990   18:54:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20
00:10:25.990   18:54:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state
00:10:25.990   18:54:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]]
00:10:25.990   18:54:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]]
00:10:25.990   18:54:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s
00:10:27.370   18:54:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 ))
00:10:27.370   18:54:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]]
00:10:27.370   18:54:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]]
00:10:27.370   18:54:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin
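For the second run the script switches the kernel's native NVMe multipath I/O policy from numa to round-robin (multipath.sh@113 above), so that with both listeners optimized I/O alternates across the two TCP paths. The echoed value is presumably written to the subsystem's iopolicy attribute; an illustrative sketch of that knob (the subsystem name is taken from the trace, the write itself is not shown in it):
  echo round-robin > /sys/class/nvme-subsystem/nvme-subsys0/iopolicy
  cat /sys/class/nvme-subsystem/nvme-subsys0/iopolicy   # numa and round-robin are the supported values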
00:10:27.370   18:54:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=83175
00:10:27.370   18:54:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v
00:10:27.370   18:54:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1
00:10:27.370  [global]
00:10:27.370  thread=1
00:10:27.370  invalidate=1
00:10:27.370  rw=randrw
00:10:27.370  time_based=1
00:10:27.370  runtime=6
00:10:27.370  ioengine=libaio
00:10:27.370  direct=1
00:10:27.370  bs=4096
00:10:27.370  iodepth=128
00:10:27.370  norandommap=0
00:10:27.370  numjobs=1
00:10:27.370  
00:10:27.370  verify_dump=1
00:10:27.370  verify_backlog=512
00:10:27.370  verify_state_save=0
00:10:27.370  do_verify=1
00:10:27.370  verify=crc32c-intel
00:10:27.370  [job0]
00:10:27.370  filename=/dev/nvme0n1
00:10:27.370  Could not set queue depth (nvme0n1)
00:10:27.370  job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:10:27.370  fio-3.35
00:10:27.370  Starting 1 thread
00:10:28.306   18:54:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible
00:10:28.564   18:55:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized
00:10:28.821   18:55:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible
00:10:28.822   18:55:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible
00:10:28.822   18:55:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20
00:10:28.822   18:55:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state
00:10:28.822   18:55:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]]
00:10:28.822   18:55:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]]
00:10:28.822   18:55:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized
00:10:28.822   18:55:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized
00:10:28.822   18:55:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20
00:10:28.822   18:55:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state
00:10:28.822   18:55:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]]
00:10:28.822   18:55:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]]
00:10:28.822   18:55:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s
00:10:29.755   18:55:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 ))
00:10:29.755   18:55:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]]
00:10:29.755   18:55:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]]
00:10:29.755   18:55:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
00:10:30.014   18:55:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible
00:10:30.273   18:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized
00:10:30.273   18:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized
00:10:30.273   18:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20
00:10:30.273   18:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state
00:10:30.273   18:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]]
00:10:30.273   18:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]]
00:10:30.273   18:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible
00:10:30.273   18:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible
00:10:30.273   18:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20
00:10:30.273   18:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state
00:10:30.273   18:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]]
00:10:30.273   18:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]]
00:10:30.273   18:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s
00:10:31.209   18:55:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 ))
00:10:31.209   18:55:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]]
00:10:31.209   18:55:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]]
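The multipath.sh@18-26 trace above gives the full shape of check_ana_state: it polls a sysfs attribute until it reports the expected ANA state or a 20-retry budget runs out. Reconstructed from those traced lines as a standalone helper (the timeout error message is an assumption, not something the log shows):

# Poll /sys/block/<ctrl-path>/ana_state until it matches the expected value.
# Reconstructed from the xtrace above; the failure message is assumed.
check_ana_state() {
    local path=$1 ana_state=$2
    local timeout=20
    local ana_state_f=/sys/block/$path/ana_state

    # keep polling while the file is missing or still holds a different state
    while [[ ! -e $ana_state_f || $(<"$ana_state_f") != "$ana_state" ]]; do
        if (( timeout-- == 0 )); then
            echo "timed out waiting for $path to report $ana_state" >&2
            return 1
        fi
        sleep 1s
    done
}

# Calls matching the run above:
#   check_ana_state nvme0c0n1 non-optimized
#   check_ana_state nvme0c1n1 inaccessible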
00:10:31.209   18:55:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 83175
00:10:33.742  
00:10:33.742  job0: (groupid=0, jobs=1): err= 0: pid=83202: Fri Dec 13 18:55:05 2024
00:10:33.742    read: IOPS=11.6k, BW=45.4MiB/s (47.6MB/s)(273MiB/6006msec)
00:10:33.742      slat (usec): min=4, max=5733, avg=44.37, stdev=221.14
00:10:33.742      clat (usec): min=407, max=15142, avg=7606.75, stdev=1667.91
00:10:33.742       lat (usec): min=481, max=15151, avg=7651.12, stdev=1686.72
00:10:33.742      clat percentiles (usec):
00:10:33.742       |  1.00th=[ 3130],  5.00th=[ 4555], 10.00th=[ 5276], 20.00th=[ 6259],
00:10:33.742       | 30.00th=[ 7177], 40.00th=[ 7570], 50.00th=[ 7767], 60.00th=[ 8029],
00:10:33.742       | 70.00th=[ 8356], 80.00th=[ 8848], 90.00th=[ 9372], 95.00th=[ 9896],
00:10:33.742       | 99.00th=[11994], 99.50th=[12387], 99.90th=[13435], 99.95th=[14091],
00:10:33.742       | 99.99th=[15008]
00:10:33.742     bw (  KiB/s): min= 4632, max=40087, per=53.07%, avg=24670.91, stdev=10415.02, samples=11
00:10:33.742     iops        : min= 1158, max=10021, avg=6167.55, stdev=2603.60, samples=11
00:10:33.742    write: IOPS=7062, BW=27.6MiB/s (28.9MB/s)(145MiB/5257msec); 0 zone resets
00:10:33.742      slat (usec): min=11, max=1986, avg=53.03, stdev=141.27
00:10:33.742      clat (usec): min=419, max=13748, avg=6224.51, stdev=1646.99
00:10:33.742       lat (usec): min=454, max=13771, avg=6277.55, stdev=1661.10
00:10:33.742      clat percentiles (usec):
00:10:33.742       |  1.00th=[ 2507],  5.00th=[ 3359], 10.00th=[ 3785], 20.00th=[ 4490],
00:10:33.742       | 30.00th=[ 5276], 40.00th=[ 6259], 50.00th=[ 6718], 60.00th=[ 7046],
00:10:33.742       | 70.00th=[ 7308], 80.00th=[ 7570], 90.00th=[ 7898], 95.00th=[ 8160],
00:10:33.742       | 99.00th=[ 9765], 99.50th=[10552], 99.90th=[12911], 99.95th=[13042],
00:10:33.742       | 99.99th=[13435]
00:10:33.742     bw (  KiB/s): min= 4752, max=40646, per=87.50%, avg=24718.82, stdev=10222.39, samples=11
00:10:33.742     iops        : min= 1188, max=10161, avg=6179.55, stdev=2555.47, samples=11
00:10:33.742    lat (usec)   : 500=0.01%, 750=0.03%, 1000=0.03%
00:10:33.742    lat (msec)   : 2=0.22%, 4=5.84%, 10=90.58%, 20=3.29%
00:10:33.742    cpu          : usr=5.78%, sys=23.38%, ctx=6882, majf=0, minf=114
00:10:33.742    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7%
00:10:33.742       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:10:33.742       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:10:33.742       issued rwts: total=69795,37126,0,0 short=0,0,0,0 dropped=0,0,0,0
00:10:33.742       latency   : target=0, window=0, percentile=100.00%, depth=128
00:10:33.742  
00:10:33.742  Run status group 0 (all jobs):
00:10:33.742     READ: bw=45.4MiB/s (47.6MB/s), 45.4MiB/s-45.4MiB/s (47.6MB/s-47.6MB/s), io=273MiB (286MB), run=6006-6006msec
00:10:33.742    WRITE: bw=27.6MiB/s (28.9MB/s), 27.6MiB/s-27.6MiB/s (28.9MB/s-28.9MB/s), io=145MiB (152MB), run=5257-5257msec
00:10:33.742  
00:10:33.742  Disk stats (read/write):
00:10:33.742    nvme0n1: ios=69034/36349, merge=0/0, ticks=491898/209862, in_queue=701760, util=98.62%
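The fio summary above (job0: roughly 4 KiB random read/write with verification against the multipath namespace, started in the background earlier and collected by the "wait 83175" above) does not include the command line that produced it. Purely as an illustration, with every option value assumed rather than taken from the test script, a job of this general shape could be launched as:

# Hypothetical fio invocation approximating the job summarized above.
# None of these option values come from the log; they are assumptions
# chosen to match the reported block size, queue depth and read/write mix.
fio --name=job0 \
    --filename=/dev/nvme0n1 \
    --direct=1 \
    --rw=randrw --rwmixread=65 \
    --bs=4k --iodepth=128 \
    --time_based --runtime=6 \
    --verify=crc32c \
    --group_reporting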
00:10:33.742   18:55:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:10:33.742  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s)
00:10:33.742   18:55:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:10:33.742   18:55:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0
00:10:33.743   18:55:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:10:33.743   18:55:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:10:33.743   18:55:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:10:33.743   18:55:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:10:33.743   18:55:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0
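waitforserial_disconnect (autotest_common.sh@1223-1235 above) returns once lsblk no longer lists a block device with the given serial; the trace only shows the final, successful pass. A sketch of the whole helper, where the retry cap and sleep interval are assumptions:

# Wait until no block device advertises the given NVMe serial number.
# Shape follows the traced lines; the 15-iteration cap and 1 s sleep are assumed.
waitforserial_disconnect() {
    local serial=$1
    local i=0
    while lsblk -o NAME,SERIAL | grep -q -w "$serial"; do
        if (( ++i > 15 )); then
            echo "device with serial $serial never went away" >&2
            return 1
        fi
        sleep 1
    done
    # mirror the second, flat-listing grep seen in the trace
    ! lsblk -l -o NAME,SERIAL | grep -q -w "$serial"
}

# Used above right after 'nvme disconnect -n nqn.2016-06.io.spdk:cnode1':
#   waitforserial_disconnect SPDKISFASTANDAWESOME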
00:10:33.743   18:55:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:10:34.001   18:55:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state
00:10:34.001   18:55:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state
00:10:34.001   18:55:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT
00:10:34.001   18:55:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini
00:10:34.001   18:55:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup
00:10:34.001   18:55:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync
00:10:34.001   18:55:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:10:34.001   18:55:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e
00:10:34.001   18:55:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20}
00:10:34.001   18:55:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:10:34.001  rmmod nvme_tcp
00:10:34.001  rmmod nvme_fabrics
00:10:34.001  rmmod nvme_keyring
00:10:34.001   18:55:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:10:34.001   18:55:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e
00:10:34.001   18:55:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0
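nvmftestfini's nvmfcleanup step (nvmf/common.sh@121-129) syncs and then tries to unload the kernel NVMe/TCP modules with errexit temporarily disabled, since the unload can fail while controllers are still tearing down. Condensed into one function following the traced flow; the retry/sleep handling of a failed modprobe is an assumption:

# Unload nvme-tcp and nvme-fabrics after a TCP test run.
# Mirrors the traced flow; the retry-on-failure details are assumed.
nvmfcleanup() {
    sync
    set +e                          # module removal may fail transiently
    for i in {1..20}; do
        modprobe -v -r nvme-tcp || { sleep 1; continue; }
        modprobe -v -r nvme-fabrics && break
        sleep 1
    done
    set -e
    return 0
}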
00:10:34.001   18:55:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n 82893 ']'
00:10:34.001   18:55:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 82893
00:10:34.001   18:55:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 82893 ']'
00:10:34.001   18:55:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 82893
00:10:34.001    18:55:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname
00:10:34.001   18:55:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:34.001    18:55:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82893
00:10:34.001  killing process with pid 82893
00:10:34.001   18:55:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:34.001   18:55:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:34.001   18:55:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82893'
00:10:34.001   18:55:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 82893
00:10:34.001   18:55:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 82893
00:10:34.260   18:55:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:10:34.260   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:10:34.260   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:10:34.260   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr
00:10:34.260   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save
00:10:34.260   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:10:34.260   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore
00:10:34.260   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:10:34.260   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:10:34.260   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:10:34.260   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:10:34.260   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:10:34.260   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:10:34.260   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:10:34.519   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:10:34.519   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:10:34.519   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:10:34.519   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:10:34.519   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:10:34.519   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:10:34.519   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:10:34.519   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:10:34.519   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns
00:10:34.519   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:34.519   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:34.519    18:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:34.519   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0
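The nvmf_veth_fini / remove_spdk_ns sequence above undoes the virtual topology used by the test: detach the veth ends from the bridge, bring them down, delete the bridge and the initiator-side interfaces, delete the target-side interfaces inside the namespace, then drop the namespace. Collected into one function in the order the trace shows (the '|| true' guard and the explicit 'ip netns delete' standing in for remove_spdk_ns are assumptions):

# Tear down the veth/bridge topology built for the NVMe-oF TCP tests.
# Command sequence follows the trace; the final guard is an assumption.
nvmf_veth_fini() {
    ip link set nvmf_init_br nomaster
    ip link set nvmf_init_br2 nomaster
    ip link set nvmf_tgt_br nomaster
    ip link set nvmf_tgt_br2 nomaster
    ip link set nvmf_init_br down
    ip link set nvmf_init_br2 down
    ip link set nvmf_tgt_br down
    ip link set nvmf_tgt_br2 down
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk || true   # remove_spdk_ns equivalent
}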
00:10:34.519  
00:10:34.519  real	0m20.525s
00:10:34.519  user	1m18.898s
00:10:34.519  sys	0m7.283s
00:10:34.519   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:34.519   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x
00:10:34.519  ************************************
00:10:34.519  END TEST nvmf_target_multipath
00:10:34.519  ************************************
00:10:34.519   18:55:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp
00:10:34.519   18:55:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:10:34.519   18:55:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:34.519   18:55:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:10:34.519  ************************************
00:10:34.519  START TEST nvmf_zcopy
00:10:34.519  ************************************
00:10:34.519   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp
00:10:34.780  * Looking for test storage...
00:10:34.780  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:10:34.780    18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:10:34.780     18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:10:34.780     18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version
00:10:34.780    18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:10:34.780    18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:10:34.780    18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l
00:10:34.780    18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l
00:10:34.780    18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-:
00:10:34.780    18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1
00:10:34.780    18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-:
00:10:34.780    18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2
00:10:34.780    18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<'
00:10:34.780    18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2
00:10:34.780    18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1
00:10:34.780    18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:10:34.780    18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in
00:10:34.780    18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1
00:10:34.780    18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 ))
00:10:34.780    18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:10:34.780     18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1
00:10:34.780     18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1
00:10:34.780     18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:34.780     18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1
00:10:34.780    18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1
00:10:34.780     18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2
00:10:34.780     18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2
00:10:34.780     18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:10:34.780     18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2
00:10:34.780    18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2
00:10:34.780    18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:10:34.780    18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:10:34.780    18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0
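The scripts/common.sh@333-368 trace is the version gate for lcov: 'lt 1.15 2' splits both versions on '.', '-' and ':' and compares them component by component. A compact re-creation of just that less-than comparison (numeric components assumed, as in the trace):

# Compare two dotted version strings component by component, the way the
# traced cmp_versions/decimal helpers do. Only the '<' case is sketched.
version_lt() {
    local IFS=.-:
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1   # equal versions are not "less than"
}

# In the trace: lt 1.15 2 succeeds (1 < 2 in the first component), so the
# installed lcov is treated as pre-2.0 and the --rc branch/function coverage
# options that follow get exported into LCOV_OPTS and LCOV.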
00:10:34.780    18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:10:34.780    18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:10:34.780  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:34.780  		--rc genhtml_branch_coverage=1
00:10:34.780  		--rc genhtml_function_coverage=1
00:10:34.780  		--rc genhtml_legend=1
00:10:34.780  		--rc geninfo_all_blocks=1
00:10:34.780  		--rc geninfo_unexecuted_blocks=1
00:10:34.780  		
00:10:34.780  		'
00:10:34.780    18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:10:34.780  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:34.780  		--rc genhtml_branch_coverage=1
00:10:34.780  		--rc genhtml_function_coverage=1
00:10:34.780  		--rc genhtml_legend=1
00:10:34.780  		--rc geninfo_all_blocks=1
00:10:34.780  		--rc geninfo_unexecuted_blocks=1
00:10:34.780  		
00:10:34.780  		'
00:10:34.780    18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:10:34.780  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:34.780  		--rc genhtml_branch_coverage=1
00:10:34.780  		--rc genhtml_function_coverage=1
00:10:34.780  		--rc genhtml_legend=1
00:10:34.780  		--rc geninfo_all_blocks=1
00:10:34.780  		--rc geninfo_unexecuted_blocks=1
00:10:34.780  		
00:10:34.780  		'
00:10:34.780    18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:10:34.780  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:34.780  		--rc genhtml_branch_coverage=1
00:10:34.780  		--rc genhtml_function_coverage=1
00:10:34.780  		--rc genhtml_legend=1
00:10:34.780  		--rc geninfo_all_blocks=1
00:10:34.780  		--rc geninfo_unexecuted_blocks=1
00:10:34.780  		
00:10:34.780  		'
00:10:34.780   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:10:34.780     18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s
00:10:34.780    18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:10:34.780    18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:10:34.780    18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:10:34.780    18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:10:34.780    18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:10:34.780    18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:10:34.780    18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:10:34.780    18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:10:34.780    18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:10:34.780     18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:10:34.780    18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:10:34.780    18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:10:34.780    18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:10:34.780    18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:10:34.780    18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:10:34.780    18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:10:34.780    18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:10:34.780     18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob
00:10:34.780     18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:10:34.780     18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:10:34.780     18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:10:34.780      18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:34.781      18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:34.781      18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:34.781      18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH
00:10:34.781      18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:34.781    18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0
00:10:34.781    18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:10:34.781    18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:10:34.781    18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:10:34.781    18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:10:34.781    18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:10:34.781    18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:10:34.781  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:10:34.781    18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:10:34.781    18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:10:34.781    18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0
00:10:34.781   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit
00:10:34.781   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:10:34.781   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:10:34.781   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs
00:10:34.781   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no
00:10:34.781   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns
00:10:34.781   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:34.781   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:34.781    18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:34.781   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:10:34.781   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:10:34.781   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:10:34.781   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:10:34.781   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:10:34.781   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init
00:10:34.781   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:10:34.781   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:10:34.781   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:10:34.781   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:10:34.781   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:10:34.781   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:10:34.781   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:10:34.781   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:10:34.781   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:10:34.781   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:10:34.781   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:10:34.781   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:10:34.781   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:10:34.781   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:10:34.781   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:10:34.781   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:10:34.781   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:10:34.781  Cannot find device "nvmf_init_br"
00:10:34.781   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true
00:10:34.781   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:10:34.781  Cannot find device "nvmf_init_br2"
00:10:34.781   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true
00:10:34.781   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:10:34.781  Cannot find device "nvmf_tgt_br"
00:10:34.781   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true
00:10:34.781   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:10:34.781  Cannot find device "nvmf_tgt_br2"
00:10:34.781   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true
00:10:34.781   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:10:34.781  Cannot find device "nvmf_init_br"
00:10:34.781   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true
00:10:34.781   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:10:34.781  Cannot find device "nvmf_init_br2"
00:10:34.781   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true
00:10:34.781   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:10:34.781  Cannot find device "nvmf_tgt_br"
00:10:34.781   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true
00:10:34.781   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:10:35.040  Cannot find device "nvmf_tgt_br2"
00:10:35.040   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true
00:10:35.040   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:10:35.040  Cannot find device "nvmf_br"
00:10:35.040   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true
00:10:35.040   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:10:35.040  Cannot find device "nvmf_init_if"
00:10:35.040   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true
00:10:35.040   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:10:35.040  Cannot find device "nvmf_init_if2"
00:10:35.040   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true
00:10:35.040   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:10:35.040  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:10:35.040   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true
00:10:35.040   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:10:35.040  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:10:35.040   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true
00:10:35.040   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:10:35.040   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:10:35.040   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:10:35.040   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:10:35.040   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:10:35.040   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:10:35.040   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:10:35.040   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:10:35.040   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:10:35.040   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:10:35.040   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:10:35.040   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:10:35.040   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:10:35.040   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:10:35.040   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:10:35.040   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:10:35.040   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:10:35.040   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:10:35.040   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:10:35.041   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:10:35.041   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:10:35.041   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:10:35.041   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:10:35.041   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:10:35.041   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:10:35.300   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:10:35.300   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:10:35.300   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:10:35.300   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:10:35.300   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:10:35.300   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:10:35.300   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:10:35.300   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:10:35.300  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:10:35.300  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms
00:10:35.300  
00:10:35.300  --- 10.0.0.3 ping statistics ---
00:10:35.300  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:35.300  rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms
00:10:35.300   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:10:35.300  PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:10:35.300  64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms
00:10:35.300  
00:10:35.300  --- 10.0.0.4 ping statistics ---
00:10:35.300  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:35.300  rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms
00:10:35.300   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:10:35.300  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:10:35.300  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms
00:10:35.300  
00:10:35.300  --- 10.0.0.1 ping statistics ---
00:10:35.300  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:35.300  rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms
00:10:35.300   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:10:35.300  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:10:35.300  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms
00:10:35.300  
00:10:35.300  --- 10.0.0.2 ping statistics ---
00:10:35.300  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:35.300  rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms
00:10:35.300   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:10:35.300   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0
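Since none of the interfaces probed above exist yet (all the "Cannot find device" lines), nvmf_veth_init builds the topology from scratch: a namespace for the target, two veth pairs per side, 10.0.0.1/2 on the initiator ends, 10.0.0.3/4 inside the namespace, everything bridged over nvmf_br, ACCEPT rules for port 4420, and a ping sweep in both directions. Condensed from the traced commands (the SPDK_NVMF comment tags on the iptables rules and the exact bring-up order are simplified here):

# Build the veth/bridge/netns topology used by the NVMe-oF TCP tests
# (condensed from the traced nvmf_veth_init; comments added).
nvmf_veth_init() {
    ip netns add nvmf_tgt_ns_spdk

    # veth pairs: *_if ends carry addresses, *_br ends get enslaved to the bridge
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

    # move the target-side ends into the namespace and assign addresses
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # bring everything up and bridge the four *_br ends together
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
               nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done

    # allow NVMe/TCP traffic to port 4420 and sanity-check connectivity
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
}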
00:10:35.300   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:10:35.300   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:10:35.300   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:10:35.300   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:10:35.300   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:10:35.300   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:10:35.300   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:10:35.300   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2
00:10:35.300   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:10:35.300   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable
00:10:35.300   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:35.300   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=83531
00:10:35.300   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:10:35.300   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 83531
00:10:35.300   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 83531 ']'
00:10:35.300   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:35.300   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:35.300  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:35.300   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:35.300   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:35.300   18:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:35.300  [2024-12-13 18:55:06.998017] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:10:35.300  [2024-12-13 18:55:06.998130] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:10:35.560  [2024-12-13 18:55:07.147383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:35.560  [2024-12-13 18:55:07.187211] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:10:35.560  [2024-12-13 18:55:07.187302] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:10:35.560  [2024-12-13 18:55:07.187314] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:10:35.560  [2024-12-13 18:55:07.187322] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:10:35.560  [2024-12-13 18:55:07.187329] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:10:35.560  [2024-12-13 18:55:07.187696] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:10:35.560   18:55:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:35.560   18:55:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0
00:10:35.560   18:55:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:10:35.560   18:55:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable
00:10:35.560   18:55:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:35.560   18:55:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
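nvmfappstart then launches nvmf_tgt inside that namespace (core mask 0x2, all tracepoint groups) and waits for the RPC socket at /var/tmp/spdk.sock before arming the cleanup trap. A sketch of that startup path; the body of waitforlisten below is an assumption, only its name, arguments, retry limit and the "Waiting for process..." message come from the trace:

# Start the nvmf_tgt app in the test namespace and wait for its RPC socket.
# The polling loop inside waitforlisten is assumed; the rest follows the trace.
nvmfappstart() {
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF "$@" &
    nvmfpid=$!
    waitforlisten "$nvmfpid"
    trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
}

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for (( i = 0; i < max_retries; i++ )); do
        kill -0 "$pid" || return 1       # the target died before coming up
        [[ -S $rpc_addr ]] && return 0   # RPC socket is present, assume ready
        sleep 0.1
    done
    return 1
}

# In the run above: nvmfappstart -m 0x2   ->   nvmfpid=83531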
00:10:35.560   18:55:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']'
00:10:35.560   18:55:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
00:10:35.560   18:55:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:35.560   18:55:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:35.560  [2024-12-13 18:55:07.366652] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:10:35.560   18:55:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:35.560   18:55:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:10:35.560   18:55:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:35.560   18:55:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:35.560   18:55:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:35.560   18:55:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:10:35.560   18:55:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:35.560   18:55:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:35.819  [2024-12-13 18:55:07.382739] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:10:35.819   18:55:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:35.819   18:55:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
00:10:35.819   18:55:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:35.819   18:55:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:35.819   18:55:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:35.819   18:55:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0
00:10:35.819   18:55:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:35.819   18:55:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:35.819  malloc0
00:10:35.819   18:55:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:35.819   18:55:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:10:35.819   18:55:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:35.819   18:55:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:35.819   18:55:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
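Taken together, the rpc_cmd calls above provision the zero-copy target end to end: a TCP transport with zero-copy enabled, subsystem cnode1 capped at 10 namespaces, data and discovery listeners on 10.0.0.3:4420, a 32 MB malloc bdev with 4096-byte blocks, and that bdev attached as namespace 1. The functionally equivalent sequence issued straight through rpc.py (rpc_cmd is a thin wrapper around the same RPCs) would be:

# Provision the zcopy test target with plain rpc.py calls, mirroring the
# rpc_cmd sequence traced above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -c 0 --zcopy          # TCP transport, zero-copy on
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
     -a -s SPDK00000000000001 -m 10                        # allow any host, max 10 namespaces
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
     -t tcp -a 10.0.0.3 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
$rpc bdev_malloc_create 32 4096 -b malloc0                  # 32 MB bdev, 4096-byte blocks
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1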
00:10:35.819   18:55:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
00:10:35.819    18:55:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json
00:10:35.819    18:55:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:10:35.819    18:55:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:10:35.820    18:55:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:10:35.820    18:55:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:10:35.820  {
00:10:35.820    "params": {
00:10:35.820      "name": "Nvme$subsystem",
00:10:35.820      "trtype": "$TEST_TRANSPORT",
00:10:35.820      "traddr": "$NVMF_FIRST_TARGET_IP",
00:10:35.820      "adrfam": "ipv4",
00:10:35.820      "trsvcid": "$NVMF_PORT",
00:10:35.820      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:10:35.820      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:10:35.820      "hdgst": ${hdgst:-false},
00:10:35.820      "ddgst": ${ddgst:-false}
00:10:35.820    },
00:10:35.820    "method": "bdev_nvme_attach_controller"
00:10:35.820  }
00:10:35.820  EOF
00:10:35.820  )")
00:10:35.820     18:55:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat
00:10:35.820    18:55:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq .
00:10:35.820     18:55:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
00:10:35.820     18:55:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:10:35.820    "params": {
00:10:35.820      "name": "Nvme1",
00:10:35.820      "trtype": "tcp",
00:10:35.820      "traddr": "10.0.0.3",
00:10:35.820      "adrfam": "ipv4",
00:10:35.820      "trsvcid": "4420",
00:10:35.820      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:10:35.820      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:10:35.820      "hdgst": false,
00:10:35.820      "ddgst": false
00:10:35.820    },
00:10:35.820    "method": "bdev_nvme_attach_controller"
00:10:35.820  }'
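gen_nvmf_target_json builds the bdevperf configuration on the fly: for each requested subsystem it expands the here-document template shown above into a bdev_nvme_attach_controller entry (the expanded result is the JSON printed right after), then joins the fragments for jq. A trimmed re-creation of the generator; the final jq wrapping into a full "subsystems" document is simplified away here, and the NVMF_* variables are assumed to come from nvmf/common.sh as in the trace:

# Emit one bdev_nvme_attach_controller fragment per subsystem id passed in
# (defaults to "1"). Simplified from the traced helper: the jq step that
# wraps the fragments into a complete bdevperf config is reduced to a join.
gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,
    printf '%s\n' "${config[*]}"
}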
00:10:35.820  [2024-12-13 18:55:07.487877] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:10:35.820  [2024-12-13 18:55:07.487974] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83574 ]
00:10:36.079  [2024-12-13 18:55:07.643183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:36.079  [2024-12-13 18:55:07.680943] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:10:36.079  Running I/O for 10 seconds...
00:10:38.413       7098.00 IOPS,    55.45 MiB/s
[2024-12-13T18:55:11.201Z]      7165.00 IOPS,    55.98 MiB/s
[2024-12-13T18:55:12.138Z]      7208.33 IOPS,    56.32 MiB/s
[2024-12-13T18:55:13.074Z]      7240.25 IOPS,    56.56 MiB/s
[2024-12-13T18:55:14.010Z]      7263.40 IOPS,    56.75 MiB/s
[2024-12-13T18:55:14.946Z]      7287.33 IOPS,    56.93 MiB/s
[2024-12-13T18:55:15.880Z]      7306.86 IOPS,    57.08 MiB/s
[2024-12-13T18:55:17.257Z]      7324.25 IOPS,    57.22 MiB/s
[2024-12-13T18:55:18.193Z]      7333.22 IOPS,    57.29 MiB/s
[2024-12-13T18:55:18.193Z]      7341.00 IOPS,    57.35 MiB/s
00:10:46.369                                                                                                  Latency(us)
00:10:46.369  
[2024-12-13T18:55:18.193Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:10:46.369  Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:10:46.369  	 Verification LBA range: start 0x0 length 0x1000
00:10:46.369  	 Nvme1n1             :      10.01    7343.31      57.37       0.00     0.00   17374.90    2249.08   26452.71
00:10:46.369  
[2024-12-13T18:55:18.193Z]  ===================================================================================================================
00:10:46.369  
[2024-12-13T18:55:18.193Z]  Total                       :               7343.31      57.37       0.00     0.00   17374.90    2249.08   26452.71
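That generated config reaches bdevperf as /dev/fd/62, i.e. via process substitution (the substitution itself is inferred; the rest of the command line is verbatim from the trace). The 10-second verify run summarized above can therefore be reproduced as:

# 10 s of 8 KiB 'verify' I/O at queue depth 128 against the attached
# controller, with the target description supplied over process substitution
# (which is what appears as --json /dev/fd/62 in the trace).
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json <(gen_nvmf_target_json) \
    -t 10 -q 128 -w verify -o 8192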
00:10:46.369   18:55:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=83691
00:10:46.369   18:55:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:10:46.369   18:55:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:46.369    18:55:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:10:46.369   18:55:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:10:46.369    18:55:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:10:46.369    18:55:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:10:46.369    18:55:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:10:46.369    18:55:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:10:46.369  {
00:10:46.369    "params": {
00:10:46.369      "name": "Nvme$subsystem",
00:10:46.369      "trtype": "$TEST_TRANSPORT",
00:10:46.369      "traddr": "$NVMF_FIRST_TARGET_IP",
00:10:46.369      "adrfam": "ipv4",
00:10:46.369      "trsvcid": "$NVMF_PORT",
00:10:46.369      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:10:46.369      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:10:46.369      "hdgst": ${hdgst:-false},
00:10:46.369      "ddgst": ${ddgst:-false}
00:10:46.369    },
00:10:46.369    "method": "bdev_nvme_attach_controller"
00:10:46.369  }
00:10:46.369  EOF
00:10:46.369  )")
00:10:46.369     18:55:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat
00:10:46.369  [2024-12-13 18:55:18.079801] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:46.369  [2024-12-13 18:55:18.079858] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:46.369    18:55:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq .
00:10:46.369     18:55:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
00:10:46.369     18:55:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:10:46.369    "params": {
00:10:46.369      "name": "Nvme1",
00:10:46.369      "trtype": "tcp",
00:10:46.369      "traddr": "10.0.0.3",
00:10:46.369      "adrfam": "ipv4",
00:10:46.369      "trsvcid": "4420",
00:10:46.369      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:10:46.369      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:10:46.369      "hdgst": false,
00:10:46.369      "ddgst": false
00:10:46.369    },
00:10:46.369    "method": "bdev_nvme_attach_controller"
00:10:46.369  }'
00:10:46.369  2024/12/13 18:55:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:46.369  [2024-12-13 18:55:18.091776] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:46.369  [2024-12-13 18:55:18.091829] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:46.369  2024/12/13 18:55:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:46.369  [2024-12-13 18:55:18.103772] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:46.369  [2024-12-13 18:55:18.103815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:46.369  2024/12/13 18:55:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:46.369  [2024-12-13 18:55:18.115769] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:46.369  [2024-12-13 18:55:18.115813] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:46.369  2024/12/13 18:55:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:46.369  [2024-12-13 18:55:18.127790] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:46.369  [2024-12-13 18:55:18.127835] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:46.369  2024/12/13 18:55:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:46.369  [2024-12-13 18:55:18.139790] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:46.369  [2024-12-13 18:55:18.139831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:46.369  [2024-12-13 18:55:18.139822] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:10:46.369  [2024-12-13 18:55:18.139901] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83691 ]
00:10:46.369  2024/12/13 18:55:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:46.369  [2024-12-13 18:55:18.151791] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:46.369  [2024-12-13 18:55:18.151831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:46.369  2024/12/13 18:55:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:46.369  [2024-12-13 18:55:18.163774] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:46.369  [2024-12-13 18:55:18.163816] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:46.369  2024/12/13 18:55:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:46.369  [2024-12-13 18:55:18.175784] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:46.369  [2024-12-13 18:55:18.175827] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:46.369  2024/12/13 18:55:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:46.369  [2024-12-13 18:55:18.187779] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:46.369  [2024-12-13 18:55:18.187832] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:46.629  2024/12/13 18:55:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:46.629  [2024-12-13 18:55:18.199782] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:46.629  [2024-12-13 18:55:18.199824] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:46.629  2024/12/13 18:55:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:46.629  [2024-12-13 18:55:18.211788] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:46.629  [2024-12-13 18:55:18.211830] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:46.629  2024/12/13 18:55:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:46.629  [2024-12-13 18:55:18.223792] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:46.629  [2024-12-13 18:55:18.223834] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:46.629  2024/12/13 18:55:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:46.629  [2024-12-13 18:55:18.235802] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:46.629  [2024-12-13 18:55:18.235846] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:46.629  2024/12/13 18:55:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:46.629  [2024-12-13 18:55:18.247799] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:46.629  [2024-12-13 18:55:18.247841] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:46.629  2024/12/13 18:55:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:46.629  [2024-12-13 18:55:18.259800] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:46.629  [2024-12-13 18:55:18.259842] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:46.629  2024/12/13 18:55:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:46.629  [2024-12-13 18:55:18.271803] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:46.629  [2024-12-13 18:55:18.271844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:46.629  2024/12/13 18:55:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:46.629  [2024-12-13 18:55:18.283802] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:46.629  [2024-12-13 18:55:18.283843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:46.629  2024/12/13 18:55:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:46.629  [2024-12-13 18:55:18.288965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:46.629  [2024-12-13 18:55:18.295805] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:46.629  [2024-12-13 18:55:18.295844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:46.629  2024/12/13 18:55:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:46.629  [2024-12-13 18:55:18.307810] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:46.629  [2024-12-13 18:55:18.307855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:46.629  2024/12/13 18:55:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:46.629  [2024-12-13 18:55:18.319809] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:46.629  [2024-12-13 18:55:18.319853] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:46.629  [2024-12-13 18:55:18.323441] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:10:46.629  2024/12/13 18:55:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:46.629  [2024-12-13 18:55:18.331823] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:46.629  [2024-12-13 18:55:18.331866] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:46.629  2024/12/13 18:55:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:46.629  [2024-12-13 18:55:18.343840] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:46.629  [2024-12-13 18:55:18.343882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:46.629  2024/12/13 18:55:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:46.629  [2024-12-13 18:55:18.355847] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:46.629  [2024-12-13 18:55:18.355890] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:46.629  2024/12/13 18:55:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:46.629  [2024-12-13 18:55:18.367855] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:46.629  [2024-12-13 18:55:18.367898] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:46.629  2024/12/13 18:55:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:46.629  [2024-12-13 18:55:18.379857] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:46.629  [2024-12-13 18:55:18.379904] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:46.629  2024/12/13 18:55:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:46.629  [2024-12-13 18:55:18.391851] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:46.629  [2024-12-13 18:55:18.391894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:46.629  2024/12/13 18:55:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:46.629  [2024-12-13 18:55:18.403880] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:46.629  [2024-12-13 18:55:18.403925] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:46.630  2024/12/13 18:55:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:46.630  [2024-12-13 18:55:18.415882] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:46.630  [2024-12-13 18:55:18.415928] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:46.630  2024/12/13 18:55:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:46.630  [2024-12-13 18:55:18.427878] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:46.630  [2024-12-13 18:55:18.427920] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:46.630  2024/12/13 18:55:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:46.630  [2024-12-13 18:55:18.439882] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:46.630  [2024-12-13 18:55:18.439928] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:46.630  2024/12/13 18:55:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:46.889  [2024-12-13 18:55:18.451945] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:46.889  [2024-12-13 18:55:18.451996] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:46.889  2024/12/13 18:55:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:46.889  [2024-12-13 18:55:18.463917] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:46.889  [2024-12-13 18:55:18.463962] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:46.889  2024/12/13 18:55:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:46.889  [2024-12-13 18:55:18.476010] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:46.889  [2024-12-13 18:55:18.476057] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:46.889  2024/12/13 18:55:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:46.889  [2024-12-13 18:55:18.488016] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:46.889  [2024-12-13 18:55:18.488063] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:46.889  2024/12/13 18:55:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:46.889  [2024-12-13 18:55:18.500008] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:46.889  [2024-12-13 18:55:18.500051] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:46.889  2024/12/13 18:55:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:46.889  [2024-12-13 18:55:18.512024] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:46.889  [2024-12-13 18:55:18.512072] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:46.889  Running I/O for 5 seconds...
00:10:46.889  2024/12/13 18:55:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:46.889  [2024-12-13 18:55:18.528648] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:46.889  [2024-12-13 18:55:18.528696] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:46.889  2024/12/13 18:55:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:46.889  [2024-12-13 18:55:18.545453] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:46.889  [2024-12-13 18:55:18.545507] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:46.889  2024/12/13 18:55:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:46.889  [2024-12-13 18:55:18.561753] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:46.889  [2024-12-13 18:55:18.561804] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:46.889  2024/12/13 18:55:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:46.889  [2024-12-13 18:55:18.579481] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:46.889  [2024-12-13 18:55:18.579531] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:46.889  2024/12/13 18:55:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:46.889  [2024-12-13 18:55:18.593713] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:46.889  [2024-12-13 18:55:18.593778] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:46.889  2024/12/13 18:55:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:46.889  [2024-12-13 18:55:18.609044] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:46.889  [2024-12-13 18:55:18.609094] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:46.889  2024/12/13 18:55:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:46.889  [2024-12-13 18:55:18.626045] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:46.889  [2024-12-13 18:55:18.626095] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:46.889  2024/12/13 18:55:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:46.889  [2024-12-13 18:55:18.642643] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:46.889  [2024-12-13 18:55:18.642694] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:46.889  2024/12/13 18:55:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:46.889  [2024-12-13 18:55:18.659309] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:46.889  [2024-12-13 18:55:18.659361] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:46.889  2024/12/13 18:55:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:46.889  [2024-12-13 18:55:18.676016] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:46.889  [2024-12-13 18:55:18.676066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:46.889  2024/12/13 18:55:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:46.889  [2024-12-13 18:55:18.691932] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:46.889  [2024-12-13 18:55:18.691982] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:46.889  2024/12/13 18:55:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:46.889  [2024-12-13 18:55:18.703511] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:46.889  [2024-12-13 18:55:18.703563] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:46.889  2024/12/13 18:55:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:47.149  [2024-12-13 18:55:18.719047] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:47.149  [2024-12-13 18:55:18.719098] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:47.149  2024/12/13 18:55:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:47.149  [2024-12-13 18:55:18.736468] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:47.149  [2024-12-13 18:55:18.736515] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:47.149  2024/12/13 18:55:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:47.149  [2024-12-13 18:55:18.752567] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:47.149  [2024-12-13 18:55:18.752618] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:47.149  2024/12/13 18:55:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:47.149  [2024-12-13 18:55:18.764522] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:47.149  [2024-12-13 18:55:18.764572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:47.149  2024/12/13 18:55:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:47.149  [2024-12-13 18:55:18.780961] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:47.149  [2024-12-13 18:55:18.781010] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:47.149  2024/12/13 18:55:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:47.149  [2024-12-13 18:55:18.798169] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:47.149  [2024-12-13 18:55:18.798244] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:47.149  2024/12/13 18:55:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:47.149  [2024-12-13 18:55:18.814306] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:47.149  [2024-12-13 18:55:18.814353] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:47.149  2024/12/13 18:55:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:47.149  [2024-12-13 18:55:18.825318] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:47.149  [2024-12-13 18:55:18.825364] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:47.149  2024/12/13 18:55:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:47.149  [2024-12-13 18:55:18.840470] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:47.149  [2024-12-13 18:55:18.840519] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:47.149  2024/12/13 18:55:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:47.149  [2024-12-13 18:55:18.851469] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:47.149  [2024-12-13 18:55:18.851519] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:47.149  2024/12/13 18:55:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:47.149  [2024-12-13 18:55:18.867741] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:47.149  [2024-12-13 18:55:18.867791] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:47.149  2024/12/13 18:55:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:47.149  [2024-12-13 18:55:18.884434] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:47.149  [2024-12-13 18:55:18.884486] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:47.149  2024/12/13 18:55:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:47.149  [2024-12-13 18:55:18.900242] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:47.149  [2024-12-13 18:55:18.900302] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:47.149  2024/12/13 18:55:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:47.149  [2024-12-13 18:55:18.917287] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:47.149  [2024-12-13 18:55:18.917329] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:47.149  2024/12/13 18:55:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:47.149  [2024-12-13 18:55:18.933622] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:47.149  [2024-12-13 18:55:18.933657] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:47.149  2024/12/13 18:55:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:47.149  [2024-12-13 18:55:18.950073] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:47.149  [2024-12-13 18:55:18.950106] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:47.149  2024/12/13 18:55:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:47.149  [2024-12-13 18:55:18.966900] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:47.149  [2024-12-13 18:55:18.966944] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:47.149  2024/12/13 18:55:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:47.408  [2024-12-13 18:55:18.981776] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:47.408  [2024-12-13 18:55:18.981828] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:47.408  2024/12/13 18:55:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:47.408  [2024-12-13 18:55:18.998595] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:47.408  [2024-12-13 18:55:18.998646] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:47.408  2024/12/13 18:55:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:47.408  [2024-12-13 18:55:19.014808] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:47.408  [2024-12-13 18:55:19.014859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:47.408  2024/12/13 18:55:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:47.408  [2024-12-13 18:55:19.031166] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:47.408  [2024-12-13 18:55:19.031245] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:47.408  2024/12/13 18:55:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:47.408  [2024-12-13 18:55:19.048161] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:47.408  [2024-12-13 18:55:19.048209] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:47.408  2024/12/13 18:55:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:47.409  [2024-12-13 18:55:19.065540] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:47.409  [2024-12-13 18:55:19.065592] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:47.409  2024/12/13 18:55:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:47.409  [2024-12-13 18:55:19.081343] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:47.409  [2024-12-13 18:55:19.081400] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:47.409  2024/12/13 18:55:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:47.409  [2024-12-13 18:55:19.099573] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:47.409  [2024-12-13 18:55:19.099625] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:47.409  2024/12/13 18:55:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:47.409  [2024-12-13 18:55:19.114463] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:47.409  [2024-12-13 18:55:19.114512] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:47.409  2024/12/13 18:55:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:47.409  [2024-12-13 18:55:19.125337] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:47.409  [2024-12-13 18:55:19.125386] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:47.409  2024/12/13 18:55:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:47.409  [2024-12-13 18:55:19.141307] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:47.409  [2024-12-13 18:55:19.141372] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:47.409  2024/12/13 18:55:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:47.409  [2024-12-13 18:55:19.157378] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:47.409  [2024-12-13 18:55:19.157454] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:47.409  2024/12/13 18:55:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:47.409  [2024-12-13 18:55:19.175107] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:47.409  [2024-12-13 18:55:19.175155] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:47.409  2024/12/13 18:55:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:47.409  [2024-12-13 18:55:19.189929] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:47.409  [2024-12-13 18:55:19.189979] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:47.409  2024/12/13 18:55:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:47.409  [2024-12-13 18:55:19.205808] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:47.409  [2024-12-13 18:55:19.205858] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:47.409  2024/12/13 18:55:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:47.409  [2024-12-13 18:55:19.222204] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:47.409  [2024-12-13 18:55:19.222280] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:47.409  2024/12/13 18:55:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:47.668  [2024-12-13 18:55:19.239250] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:47.668  [2024-12-13 18:55:19.239298] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:47.668  2024/12/13 18:55:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:47.668  [2024-12-13 18:55:19.255442] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:47.668  [2024-12-13 18:55:19.255491] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:47.668  2024/12/13 18:55:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:47.668  [2024-12-13 18:55:19.272354] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:47.668  [2024-12-13 18:55:19.272404] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:47.668  2024/12/13 18:55:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:47.668  [2024-12-13 18:55:19.288691] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:47.668  [2024-12-13 18:55:19.288739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:47.668  2024/12/13 18:55:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:47.668  [2024-12-13 18:55:19.306055] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:47.668  [2024-12-13 18:55:19.306105] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:47.668  2024/12/13 18:55:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:47.668  [2024-12-13 18:55:19.321578] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:47.668  [2024-12-13 18:55:19.321629] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:47.669  2024/12/13 18:55:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:47.669  [2024-12-13 18:55:19.332365] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:47.669  [2024-12-13 18:55:19.332412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:47.669  2024/12/13 18:55:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:47.669  [2024-12-13 18:55:19.347833] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:47.669  [2024-12-13 18:55:19.347881] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:47.669  2024/12/13 18:55:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:47.669  [2024-12-13 18:55:19.364653] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:47.669  [2024-12-13 18:55:19.364719] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:47.669  2024/12/13 18:55:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:47.669  [2024-12-13 18:55:19.380086] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:47.669  [2024-12-13 18:55:19.380133] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:47.669  2024/12/13 18:55:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:47.669  [2024-12-13 18:55:19.394956] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:47.669  [2024-12-13 18:55:19.395003] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:47.669  2024/12/13 18:55:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:47.669  [2024-12-13 18:55:19.410894] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:47.669  [2024-12-13 18:55:19.410944] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:47.669  2024/12/13 18:55:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:47.669  [2024-12-13 18:55:19.426828] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:47.669  [2024-12-13 18:55:19.426878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:47.669  2024/12/13 18:55:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:47.669  [2024-12-13 18:55:19.437754] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:47.669  [2024-12-13 18:55:19.437805] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:47.669  2024/12/13 18:55:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:47.669  [2024-12-13 18:55:19.453915] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:47.669  [2024-12-13 18:55:19.453965] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:47.669  2024/12/13 18:55:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:47.669  [2024-12-13 18:55:19.470741] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:47.669  [2024-12-13 18:55:19.470792] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:47.669  2024/12/13 18:55:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:47.669  [2024-12-13 18:55:19.487492] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:47.669  [2024-12-13 18:55:19.487542] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:47.928  2024/12/13 18:55:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:47.928  [2024-12-13 18:55:19.504236] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:47.928  [2024-12-13 18:55:19.504286] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:47.928  2024/12/13 18:55:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:47.928      13145.00 IOPS,   102.70 MiB/s
00:10:47.928  [2024-12-13 18:55:19.521031] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:47.928  [2024-12-13 18:55:19.521081] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:47.928  2024/12/13 18:55:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:47.928  [2024-12-13 18:55:19.537782] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:47.928  [2024-12-13 18:55:19.537833] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:47.928  2024/12/13 18:55:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:47.928  [2024-12-13 18:55:19.554565] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:47.928  [2024-12-13 18:55:19.554631] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:47.928  2024/12/13 18:55:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:47.928  [2024-12-13 18:55:19.571511] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:47.928  [2024-12-13 18:55:19.571563] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:47.928  2024/12/13 18:55:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:47.928  [2024-12-13 18:55:19.587705] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:47.928  [2024-12-13 18:55:19.587756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:47.928  2024/12/13 18:55:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:47.928  [2024-12-13 18:55:19.604600] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:47.928  [2024-12-13 18:55:19.604662] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:47.928  2024/12/13 18:55:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:47.928  [2024-12-13 18:55:19.620510] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:47.928  [2024-12-13 18:55:19.620560] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:47.928  2024/12/13 18:55:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:47.928  [2024-12-13 18:55:19.637601] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:47.928  [2024-12-13 18:55:19.637656] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:47.928  2024/12/13 18:55:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:47.928  [2024-12-13 18:55:19.653737] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:47.928  [2024-12-13 18:55:19.653786] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:47.928  2024/12/13 18:55:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:47.928  [2024-12-13 18:55:19.671120] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:47.928  [2024-12-13 18:55:19.671171] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:47.928  2024/12/13 18:55:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:47.928  [2024-12-13 18:55:19.687118] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:47.928  [2024-12-13 18:55:19.687167] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:47.928  2024/12/13 18:55:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:47.928  [2024-12-13 18:55:19.704675] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:47.928  [2024-12-13 18:55:19.704725] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:47.929  2024/12/13 18:55:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:47.929  [2024-12-13 18:55:19.721248] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:47.929  [2024-12-13 18:55:19.721296] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:47.929  2024/12/13 18:55:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:47.929  [2024-12-13 18:55:19.738173] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:47.929  [2024-12-13 18:55:19.738247] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:47.929  2024/12/13 18:55:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:48.188  [2024-12-13 18:55:19.754676] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:48.188  [2024-12-13 18:55:19.754727] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:48.188  2024/12/13 18:55:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:48.188  [2024-12-13 18:55:19.771994] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:48.188  [2024-12-13 18:55:19.772042] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:48.188  2024/12/13 18:55:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:48.188  [2024-12-13 18:55:19.787745] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:48.188  [2024-12-13 18:55:19.787796] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:48.188  2024/12/13 18:55:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:48.188  [2024-12-13 18:55:19.804798] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:48.188  [2024-12-13 18:55:19.804848] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:48.188  2024/12/13 18:55:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:48.188  [2024-12-13 18:55:19.820732] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:48.188  [2024-12-13 18:55:19.820781] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:48.188  2024/12/13 18:55:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:48.188  [2024-12-13 18:55:19.831955] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:48.188  [2024-12-13 18:55:19.832004] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:48.188  2024/12/13 18:55:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:48.188  [2024-12-13 18:55:19.847066] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:48.188  [2024-12-13 18:55:19.847115] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:48.188  2024/12/13 18:55:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:48.188  [2024-12-13 18:55:19.858794] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:48.188  [2024-12-13 18:55:19.858841] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:48.188  2024/12/13 18:55:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:48.188  [2024-12-13 18:55:19.873908] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:48.188  [2024-12-13 18:55:19.873956] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:48.188  2024/12/13 18:55:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:48.188  [2024-12-13 18:55:19.889990] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:48.188  [2024-12-13 18:55:19.890040] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:48.188  2024/12/13 18:55:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:48.188  [2024-12-13 18:55:19.906726] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:48.188  [2024-12-13 18:55:19.906775] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:48.188  2024/12/13 18:55:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:48.188  [2024-12-13 18:55:19.922927] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:48.188  [2024-12-13 18:55:19.922993] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:48.188  2024/12/13 18:55:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:48.188  [2024-12-13 18:55:19.940434] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:48.188  [2024-12-13 18:55:19.940482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:48.189  2024/12/13 18:55:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:48.189  [2024-12-13 18:55:19.956551] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:48.189  [2024-12-13 18:55:19.956597] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:48.189  2024/12/13 18:55:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:48.189  [2024-12-13 18:55:19.974755] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:48.189  [2024-12-13 18:55:19.974804] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:48.189  2024/12/13 18:55:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:48.189  [2024-12-13 18:55:19.989825] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:48.189  [2024-12-13 18:55:19.989858] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:48.189  2024/12/13 18:55:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:48.189  [2024-12-13 18:55:20.000452] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:48.189  [2024-12-13 18:55:20.000485] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:48.189  2024/12/13 18:55:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:48.448  [2024-12-13 18:55:20.014929] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:48.448  [2024-12-13 18:55:20.014976] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:48.448  2024/12/13 18:55:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:48.448  [2024-12-13 18:55:20.031049] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:48.448  [2024-12-13 18:55:20.031099] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:48.448  2024/12/13 18:55:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:48.448  [2024-12-13 18:55:20.047210] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:48.448  [2024-12-13 18:55:20.047287] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:48.448  2024/12/13 18:55:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:48.448  [2024-12-13 18:55:20.064444] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:48.448  [2024-12-13 18:55:20.064494] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:48.448  2024/12/13 18:55:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:48.448  [2024-12-13 18:55:20.081267] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:48.448  [2024-12-13 18:55:20.081315] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:48.448  2024/12/13 18:55:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:48.448  [2024-12-13 18:55:20.097556] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:48.448  [2024-12-13 18:55:20.097610] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:48.448  2024/12/13 18:55:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:48.448  [2024-12-13 18:55:20.109534] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:48.448  [2024-12-13 18:55:20.109584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:48.448  2024/12/13 18:55:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:48.448  [2024-12-13 18:55:20.124892] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:48.448  [2024-12-13 18:55:20.124943] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:48.448  2024/12/13 18:55:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:48.448  [2024-12-13 18:55:20.136335] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:48.448  [2024-12-13 18:55:20.136386] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:48.448  2024/12/13 18:55:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:48.448  [2024-12-13 18:55:20.152419] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:48.448  [2024-12-13 18:55:20.152469] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:48.448  2024/12/13 18:55:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:48.448  [2024-12-13 18:55:20.168867] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:48.448  [2024-12-13 18:55:20.168915] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:48.448  2024/12/13 18:55:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:48.448  [2024-12-13 18:55:20.184494] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:48.448  [2024-12-13 18:55:20.184542] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:48.448  2024/12/13 18:55:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:48.448  [2024-12-13 18:55:20.199272] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:48.448  [2024-12-13 18:55:20.199323] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:48.448  2024/12/13 18:55:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:48.448  [2024-12-13 18:55:20.210301] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:48.448  [2024-12-13 18:55:20.210349] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:48.448  2024/12/13 18:55:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:48.448  [2024-12-13 18:55:20.226358] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:48.448  [2024-12-13 18:55:20.226407] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:48.448  2024/12/13 18:55:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:48.448  [2024-12-13 18:55:20.242728] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:48.448  [2024-12-13 18:55:20.242777] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:48.448  2024/12/13 18:55:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:48.449  [2024-12-13 18:55:20.260346] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:48.449  [2024-12-13 18:55:20.260395] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:48.449  2024/12/13 18:55:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:48.708  [2024-12-13 18:55:20.276032] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:48.708  [2024-12-13 18:55:20.276080] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:48.708  2024/12/13 18:55:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:48.708  [2024-12-13 18:55:20.293314] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:48.708  [2024-12-13 18:55:20.293362] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:48.708  2024/12/13 18:55:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:48.708  [2024-12-13 18:55:20.308919] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:48.708  [2024-12-13 18:55:20.308969] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:48.708  2024/12/13 18:55:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:48.708  [2024-12-13 18:55:20.320004] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:48.708  [2024-12-13 18:55:20.320053] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:48.708  2024/12/13 18:55:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:48.708  [2024-12-13 18:55:20.336313] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:48.708  [2024-12-13 18:55:20.336362] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:48.708  2024/12/13 18:55:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:48.708  [2024-12-13 18:55:20.352628] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:48.708  [2024-12-13 18:55:20.352678] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:48.708  2024/12/13 18:55:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:48.708  [2024-12-13 18:55:20.370064] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:48.709  [2024-12-13 18:55:20.370114] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:48.709  2024/12/13 18:55:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:48.709  [2024-12-13 18:55:20.387161] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:48.709  [2024-12-13 18:55:20.387208] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:48.709  2024/12/13 18:55:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:48.709  [2024-12-13 18:55:20.402327] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:48.709  [2024-12-13 18:55:20.402372] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:48.709  2024/12/13 18:55:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:48.709  [2024-12-13 18:55:20.419000] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:48.709  [2024-12-13 18:55:20.419046] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:48.709  2024/12/13 18:55:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:48.709  [2024-12-13 18:55:20.434164] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:48.709  [2024-12-13 18:55:20.434213] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:48.709  2024/12/13 18:55:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:48.709  [2024-12-13 18:55:20.449265] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:48.709  [2024-12-13 18:55:20.449314] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:48.709  2024/12/13 18:55:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:48.709  [2024-12-13 18:55:20.460842] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:48.709  [2024-12-13 18:55:20.460890] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:48.709  2024/12/13 18:55:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:48.709  [2024-12-13 18:55:20.477250] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:48.709  [2024-12-13 18:55:20.477306] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:48.709  2024/12/13 18:55:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:48.709  [2024-12-13 18:55:20.493223] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:48.709  [2024-12-13 18:55:20.493279] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:48.709  2024/12/13 18:55:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:48.709  [2024-12-13 18:55:20.504444] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:48.709  [2024-12-13 18:55:20.504494] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:48.709  2024/12/13 18:55:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:48.709      13272.00 IOPS,   103.69 MiB/s
00:10:48.709  [2024-12-13 18:55:20.520253] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:48.709  [2024-12-13 18:55:20.520303] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:48.709  2024/12/13 18:55:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:48.968  [2024-12-13 18:55:20.537000] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:48.968  [2024-12-13 18:55:20.537050] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:48.968  2024/12/13 18:55:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:48.968  [2024-12-13 18:55:20.553567] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:48.968  [2024-12-13 18:55:20.553620] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:48.968  2024/12/13 18:55:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:48.968  [2024-12-13 18:55:20.570224] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:48.968  [2024-12-13 18:55:20.570300] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:48.968  2024/12/13 18:55:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:48.968  [2024-12-13 18:55:20.587020] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:48.968  [2024-12-13 18:55:20.587071] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:48.968  2024/12/13 18:55:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:48.968  [2024-12-13 18:55:20.604174] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:48.968  [2024-12-13 18:55:20.604319] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:48.968  2024/12/13 18:55:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:48.968  [2024-12-13 18:55:20.621011] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:48.968  [2024-12-13 18:55:20.621063] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:48.968  2024/12/13 18:55:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:48.968  [2024-12-13 18:55:20.637356] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:48.969  [2024-12-13 18:55:20.637427] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:48.969  2024/12/13 18:55:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:48.969  [2024-12-13 18:55:20.648419] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:48.969  [2024-12-13 18:55:20.648469] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:48.969  2024/12/13 18:55:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:48.969  [2024-12-13 18:55:20.664531] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:48.969  [2024-12-13 18:55:20.664583] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:48.969  2024/12/13 18:55:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:48.969  [2024-12-13 18:55:20.680762] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:48.969  [2024-12-13 18:55:20.680810] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:48.969  2024/12/13 18:55:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:48.969  [2024-12-13 18:55:20.697645] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:48.969  [2024-12-13 18:55:20.697711] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:48.969  2024/12/13 18:55:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:48.969  [2024-12-13 18:55:20.714738] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:48.969  [2024-12-13 18:55:20.714789] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:48.969  2024/12/13 18:55:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:48.969  [2024-12-13 18:55:20.731308] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:48.969  [2024-12-13 18:55:20.731356] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:48.969  2024/12/13 18:55:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:48.969  [2024-12-13 18:55:20.747387] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:48.969  [2024-12-13 18:55:20.747437] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:48.969  2024/12/13 18:55:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:48.969  [2024-12-13 18:55:20.764724] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:48.969  [2024-12-13 18:55:20.764774] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:48.969  2024/12/13 18:55:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:48.969  [2024-12-13 18:55:20.780839] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:48.969  [2024-12-13 18:55:20.780888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:48.969  2024/12/13 18:55:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:49.228  [2024-12-13 18:55:20.798322] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.228  [2024-12-13 18:55:20.798355] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.228  2024/12/13 18:55:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:49.228  [2024-12-13 18:55:20.815238] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.228  [2024-12-13 18:55:20.815286] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.228  2024/12/13 18:55:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:49.228  [2024-12-13 18:55:20.831378] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.228  [2024-12-13 18:55:20.831430] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.228  2024/12/13 18:55:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:49.228  [2024-12-13 18:55:20.848264] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.228  [2024-12-13 18:55:20.848293] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.228  2024/12/13 18:55:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:49.228  [2024-12-13 18:55:20.865364] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.228  [2024-12-13 18:55:20.865431] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.229  2024/12/13 18:55:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:49.229  [2024-12-13 18:55:20.881423] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.229  [2024-12-13 18:55:20.881456] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.229  2024/12/13 18:55:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:49.229  [2024-12-13 18:55:20.898715] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.229  [2024-12-13 18:55:20.898764] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.229  2024/12/13 18:55:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:49.229  [2024-12-13 18:55:20.914799] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.229  [2024-12-13 18:55:20.914849] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.229  2024/12/13 18:55:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:49.229  [2024-12-13 18:55:20.926205] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.229  [2024-12-13 18:55:20.926268] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.229  2024/12/13 18:55:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:49.229  [2024-12-13 18:55:20.942120] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.229  [2024-12-13 18:55:20.942171] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.229  2024/12/13 18:55:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:49.229  [2024-12-13 18:55:20.958507] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.229  [2024-12-13 18:55:20.958559] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.229  2024/12/13 18:55:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:49.229  [2024-12-13 18:55:20.975010] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.229  [2024-12-13 18:55:20.975060] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.229  2024/12/13 18:55:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:49.229  [2024-12-13 18:55:20.992030] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.229  [2024-12-13 18:55:20.992081] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.229  2024/12/13 18:55:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:49.229  [2024-12-13 18:55:21.009098] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.229  [2024-12-13 18:55:21.009148] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.229  2024/12/13 18:55:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:49.229  [2024-12-13 18:55:21.025100] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.229  [2024-12-13 18:55:21.025147] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.229  2024/12/13 18:55:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:49.229  [2024-12-13 18:55:21.035407] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.229  [2024-12-13 18:55:21.035441] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.229  2024/12/13 18:55:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:49.489  [2024-12-13 18:55:21.050387] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.489  [2024-12-13 18:55:21.050420] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.489  2024/12/13 18:55:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:49.489  [2024-12-13 18:55:21.065643] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.489  [2024-12-13 18:55:21.065692] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.489  2024/12/13 18:55:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:49.489  [2024-12-13 18:55:21.075743] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.489  [2024-12-13 18:55:21.075776] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.489  2024/12/13 18:55:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:49.489  [2024-12-13 18:55:21.090498] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.489  [2024-12-13 18:55:21.090528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.489  2024/12/13 18:55:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:49.489  [2024-12-13 18:55:21.105497] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.489  [2024-12-13 18:55:21.105532] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.489  2024/12/13 18:55:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:49.489  [2024-12-13 18:55:21.121542] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.489  [2024-12-13 18:55:21.121580] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.489  2024/12/13 18:55:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:49.489  [2024-12-13 18:55:21.139167] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.489  [2024-12-13 18:55:21.139216] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.489  2024/12/13 18:55:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:49.489  [2024-12-13 18:55:21.153831] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.489  [2024-12-13 18:55:21.153879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.489  2024/12/13 18:55:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:49.489  [2024-12-13 18:55:21.170838] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.489  [2024-12-13 18:55:21.170886] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.489  2024/12/13 18:55:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:49.489  [2024-12-13 18:55:21.185882] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.489  [2024-12-13 18:55:21.185948] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.489  2024/12/13 18:55:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:49.489  [2024-12-13 18:55:21.201270] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.489  [2024-12-13 18:55:21.201324] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.489  2024/12/13 18:55:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:49.489  [2024-12-13 18:55:21.212369] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.489  [2024-12-13 18:55:21.212416] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.489  2024/12/13 18:55:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:49.489  [2024-12-13 18:55:21.227437] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.489  [2024-12-13 18:55:21.227484] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.489  2024/12/13 18:55:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:49.489  [2024-12-13 18:55:21.239603] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.489  [2024-12-13 18:55:21.239652] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.489  2024/12/13 18:55:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:49.489  [2024-12-13 18:55:21.254794] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.489  [2024-12-13 18:55:21.254844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.489  2024/12/13 18:55:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:49.489  [2024-12-13 18:55:21.270431] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.489  [2024-12-13 18:55:21.270480] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.489  2024/12/13 18:55:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:49.489  [2024-12-13 18:55:21.287757] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.489  [2024-12-13 18:55:21.287805] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.489  2024/12/13 18:55:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:49.489  [2024-12-13 18:55:21.304295] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.489  [2024-12-13 18:55:21.304343] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.489  2024/12/13 18:55:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:49.749  [2024-12-13 18:55:21.320029] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.749  [2024-12-13 18:55:21.320076] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.749  2024/12/13 18:55:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:49.749  [2024-12-13 18:55:21.331315] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.749  [2024-12-13 18:55:21.331344] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.749  2024/12/13 18:55:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:49.749  [2024-12-13 18:55:21.348171] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.749  [2024-12-13 18:55:21.348246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.749  2024/12/13 18:55:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:49.749  [2024-12-13 18:55:21.362912] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.749  [2024-12-13 18:55:21.362962] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.749  2024/12/13 18:55:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:49.749  [2024-12-13 18:55:21.378727] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.749  [2024-12-13 18:55:21.378774] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.750  2024/12/13 18:55:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:49.750  [2024-12-13 18:55:21.394473] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.750  [2024-12-13 18:55:21.394523] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.750  2024/12/13 18:55:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:49.750  [2024-12-13 18:55:21.412409] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.750  [2024-12-13 18:55:21.412440] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.750  2024/12/13 18:55:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:49.750  [2024-12-13 18:55:21.427677] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.750  [2024-12-13 18:55:21.427723] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.750  2024/12/13 18:55:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:49.750  [2024-12-13 18:55:21.437937] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.750  [2024-12-13 18:55:21.437985] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.750  2024/12/13 18:55:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:49.750  [2024-12-13 18:55:21.451878] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.750  [2024-12-13 18:55:21.451923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.750  2024/12/13 18:55:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:49.750  [2024-12-13 18:55:21.466829] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.750  [2024-12-13 18:55:21.466895] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.750  2024/12/13 18:55:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:49.750  [2024-12-13 18:55:21.482667] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.750  [2024-12-13 18:55:21.482715] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.750  2024/12/13 18:55:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:49.750  [2024-12-13 18:55:21.499673] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.750  [2024-12-13 18:55:21.499721] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.750  2024/12/13 18:55:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:49.750  [2024-12-13 18:55:21.515992] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.750  [2024-12-13 18:55:21.516041] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.750  2024/12/13 18:55:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:49.750      13240.00 IOPS,   103.44 MiB/s
00:10:49.750  [2024-12-13 18:55:21.527862] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.750  [2024-12-13 18:55:21.527910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.750  2024/12/13 18:55:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:49.750  [2024-12-13 18:55:21.543497] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.750  [2024-12-13 18:55:21.543532] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.750  2024/12/13 18:55:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:49.750  [2024-12-13 18:55:21.560110] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.750  [2024-12-13 18:55:21.560157] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.750  2024/12/13 18:55:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:50.010  [2024-12-13 18:55:21.576768] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.010  [2024-12-13 18:55:21.576815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.010  2024/12/13 18:55:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:50.010  [2024-12-13 18:55:21.593686] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.010  [2024-12-13 18:55:21.593722] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.010  2024/12/13 18:55:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:50.010  [2024-12-13 18:55:21.610633] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.010  [2024-12-13 18:55:21.610680] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.010  2024/12/13 18:55:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:50.010  [2024-12-13 18:55:21.627026] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.010  [2024-12-13 18:55:21.627073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.010  2024/12/13 18:55:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:50.010  [2024-12-13 18:55:21.644826] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.010  [2024-12-13 18:55:21.644875] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.010  2024/12/13 18:55:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:50.010  [2024-12-13 18:55:21.660853] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.010  [2024-12-13 18:55:21.660902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.010  2024/12/13 18:55:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:50.010  [2024-12-13 18:55:21.677525] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.010  [2024-12-13 18:55:21.677558] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.010  2024/12/13 18:55:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:50.010  [2024-12-13 18:55:21.694419] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.010  [2024-12-13 18:55:21.694467] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.010  2024/12/13 18:55:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:50.010  [2024-12-13 18:55:21.710042] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.010  [2024-12-13 18:55:21.710090] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.010  2024/12/13 18:55:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:50.010  [2024-12-13 18:55:21.722007] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.010  [2024-12-13 18:55:21.722056] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.010  2024/12/13 18:55:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:50.010  [2024-12-13 18:55:21.738027] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.010  [2024-12-13 18:55:21.738076] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.011  2024/12/13 18:55:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:50.011  [2024-12-13 18:55:21.754047] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.011  [2024-12-13 18:55:21.754095] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.011  2024/12/13 18:55:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:50.011  [2024-12-13 18:55:21.768603] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.011  [2024-12-13 18:55:21.768650] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.011  2024/12/13 18:55:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:50.011  [2024-12-13 18:55:21.784830] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.011  [2024-12-13 18:55:21.784878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.011  2024/12/13 18:55:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:50.011  [2024-12-13 18:55:21.801358] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.011  [2024-12-13 18:55:21.801390] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.011  2024/12/13 18:55:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:50.011  [2024-12-13 18:55:21.818084] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.011  [2024-12-13 18:55:21.818132] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.011  2024/12/13 18:55:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:50.338  [2024-12-13 18:55:21.834343] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.338  [2024-12-13 18:55:21.834376] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.338  2024/12/13 18:55:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:50.338  [2024-12-13 18:55:21.851321] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.338  [2024-12-13 18:55:21.851368] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.338  2024/12/13 18:55:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:50.338  [2024-12-13 18:55:21.868160] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.338  [2024-12-13 18:55:21.868208] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.338  2024/12/13 18:55:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:50.338  [2024-12-13 18:55:21.884864] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.338  [2024-12-13 18:55:21.884909] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.338  2024/12/13 18:55:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:50.338  [2024-12-13 18:55:21.901648] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.338  [2024-12-13 18:55:21.901682] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.338  2024/12/13 18:55:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:50.338  [2024-12-13 18:55:21.918798] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.338  [2024-12-13 18:55:21.918845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.338  2024/12/13 18:55:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:50.338  [2024-12-13 18:55:21.935227] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.338  [2024-12-13 18:55:21.935285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.338  2024/12/13 18:55:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:50.338  [2024-12-13 18:55:21.953114] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.338  [2024-12-13 18:55:21.953160] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.338  2024/12/13 18:55:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:50.338  [2024-12-13 18:55:21.967322] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.338  [2024-12-13 18:55:21.967371] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.338  2024/12/13 18:55:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:50.338  [2024-12-13 18:55:21.984439] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.338  [2024-12-13 18:55:21.984486] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.338  2024/12/13 18:55:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:50.338  [2024-12-13 18:55:21.999333] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.338  [2024-12-13 18:55:21.999364] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.338  2024/12/13 18:55:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:50.338  [2024-12-13 18:55:22.014648] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.338  [2024-12-13 18:55:22.014696] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.338  2024/12/13 18:55:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:50.338  [2024-12-13 18:55:22.025825] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.338  [2024-12-13 18:55:22.025873] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.338  2024/12/13 18:55:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:50.338  [2024-12-13 18:55:22.042371] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.338  [2024-12-13 18:55:22.042420] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.338  2024/12/13 18:55:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:50.338  [2024-12-13 18:55:22.058190] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.338  [2024-12-13 18:55:22.058269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.338  2024/12/13 18:55:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:50.338  [2024-12-13 18:55:22.074398] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.338  [2024-12-13 18:55:22.074446] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.338  2024/12/13 18:55:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:50.338  [2024-12-13 18:55:22.091727] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.338  [2024-12-13 18:55:22.091776] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.338  2024/12/13 18:55:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:50.338  [2024-12-13 18:55:22.106518] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.338  [2024-12-13 18:55:22.106552] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.338  2024/12/13 18:55:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:50.338  [2024-12-13 18:55:22.122538] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.338  [2024-12-13 18:55:22.122602] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.339  2024/12/13 18:55:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:50.339  [2024-12-13 18:55:22.140335] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.339  [2024-12-13 18:55:22.140368] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.339  2024/12/13 18:55:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:50.619  [2024-12-13 18:55:22.154647] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.619  [2024-12-13 18:55:22.154678] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.620  2024/12/13 18:55:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:50.620  [2024-12-13 18:55:22.172616] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.620  [2024-12-13 18:55:22.172664] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.620  2024/12/13 18:55:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:50.620  [2024-12-13 18:55:22.186774] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.620  [2024-12-13 18:55:22.186822] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.620  2024/12/13 18:55:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:50.620  [2024-12-13 18:55:22.203324] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.620  [2024-12-13 18:55:22.203359] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.620  2024/12/13 18:55:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:50.620  [2024-12-13 18:55:22.219901] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.620  [2024-12-13 18:55:22.219949] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.620  2024/12/13 18:55:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:50.620  [2024-12-13 18:55:22.235190] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.620  [2024-12-13 18:55:22.235262] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.620  2024/12/13 18:55:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:50.620  [2024-12-13 18:55:22.250612] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.620  [2024-12-13 18:55:22.250660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.620  2024/12/13 18:55:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:50.620  [2024-12-13 18:55:22.267627] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.620  [2024-12-13 18:55:22.267676] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.620  2024/12/13 18:55:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:50.620  [2024-12-13 18:55:22.284517] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.620  [2024-12-13 18:55:22.284566] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.620  2024/12/13 18:55:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:50.620  [2024-12-13 18:55:22.302137] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.620  [2024-12-13 18:55:22.302185] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.620  2024/12/13 18:55:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:50.620  [2024-12-13 18:55:22.317773] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.620  [2024-12-13 18:55:22.317820] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.620  2024/12/13 18:55:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:50.620  [2024-12-13 18:55:22.329054] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.620  [2024-12-13 18:55:22.329099] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.620  2024/12/13 18:55:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:50.620  [2024-12-13 18:55:22.345051] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.620  [2024-12-13 18:55:22.345098] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.620  2024/12/13 18:55:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:50.620  [2024-12-13 18:55:22.362005] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.620  [2024-12-13 18:55:22.362054] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.620  2024/12/13 18:55:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:50.620  [2024-12-13 18:55:22.378581] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.620  [2024-12-13 18:55:22.378632] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.620  2024/12/13 18:55:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:50.620  [2024-12-13 18:55:22.395697] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.620  [2024-12-13 18:55:22.395727] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.620  2024/12/13 18:55:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:50.620  [2024-12-13 18:55:22.411139] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.620  [2024-12-13 18:55:22.411172] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.620  2024/12/13 18:55:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:50.620  [2024-12-13 18:55:22.426497] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.620  [2024-12-13 18:55:22.426530] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.620  2024/12/13 18:55:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:50.620  [2024-12-13 18:55:22.436921] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.620  [2024-12-13 18:55:22.436970] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.620  2024/12/13 18:55:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:50.880  [2024-12-13 18:55:22.451410] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.880  [2024-12-13 18:55:22.451462] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.880  2024/12/13 18:55:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:50.880  [2024-12-13 18:55:22.469800] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.880  [2024-12-13 18:55:22.469850] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.880  2024/12/13 18:55:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:50.880  [2024-12-13 18:55:22.484145] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.880  [2024-12-13 18:55:22.484197] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.880  2024/12/13 18:55:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:50.880  [2024-12-13 18:55:22.499731] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.880  [2024-12-13 18:55:22.499782] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.881  2024/12/13 18:55:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:50.881  [2024-12-13 18:55:22.516486] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.881  [2024-12-13 18:55:22.516538] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.881      13260.00 IOPS,   103.59 MiB/s
00:10:50.881  2024/12/13 18:55:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:50.881  [2024-12-13 18:55:22.532886] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.881  [2024-12-13 18:55:22.532936] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.881  2024/12/13 18:55:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:50.881  [2024-12-13 18:55:22.550532] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.881  [2024-12-13 18:55:22.550582] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:51.923  [2024-12-13 18:55:23.507072] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:51.923  [2024-12-13 18:55:23.507106] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:51.923  2024/12/13 18:55:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:51.923      13285.20 IOPS,   103.79 MiB/s
[2024-12-13T18:55:23.747Z] [2024-12-13 18:55:23.522543] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:51.923  [2024-12-13 18:55:23.522578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:51.923  2024/12/13 18:55:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:51.923  
00:10:51.923                                                                                                  Latency(us)
00:10:51.923  
[2024-12-13T18:55:23.747Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:10:51.923  Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:10:51.923  	 Nvme1n1             :       5.01   13285.85     103.80       0.00     0.00    9622.09    4110.89   18945.86
00:10:51.923  
[2024-12-13T18:55:23.747Z]  ===================================================================================================================
00:10:51.923  
[2024-12-13T18:55:23.747Z]  Total                       :              13285.85     103.80       0.00     0.00    9622.09    4110.89   18945.86
00:10:51.923  [2024-12-13 18:55:23.531467] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:51.923  [2024-12-13 18:55:23.531499] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:51.923  2024/12/13 18:55:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:51.924  [2024-12-13 18:55:23.711511] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:51.924  [2024-12-13 18:55:23.711535] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:51.924  2024/12/13 18:55:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
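The -32602 responses above come from repeatedly re-adding a namespace ID that the subsystem already exposes. A minimal sketch of the equivalent call through SPDK's rpc.py client (the script path and wrapper are assumptions; the NQN, bdev name and NSID are the ones shown in the failing requests):

    # Hypothetical reproduction against a target that already exposes NSID 1
    # on cnode1, as in this run; expect Code=-32602 "Invalid parameters"
    # backed by "Requested NSID 1 already in use" on the target side.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns \
        nqn.2016-06.io.spdk:cnode1 malloc0 -n 1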
00:10:51.924  /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (83691) - No such process
00:10:51.924   18:55:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 83691
00:10:51.924   18:55:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:51.924   18:55:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:51.924   18:55:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:51.924   18:55:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:51.924   18:55:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:10:51.924   18:55:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:51.924   18:55:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:51.924  delay0
00:10:51.924   18:55:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:51.924   18:55:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:10:51.924   18:55:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:51.924   18:55:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:52.183   18:55:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
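The namespace swap just traced (drop malloc0 as NSID 1, wrap it in a delay bdev, re-add it) maps onto three RPCs. A condensed sketch using rpc.py directly, with the same parameters the rpc_cmd wrapper passed above (latency values are in microseconds; the rpc.py path is an assumption):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # 1. remove the existing namespace (NSID 1, currently backed by malloc0)
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    # 2. create a delay bdev on top of malloc0 with ~1 s of injected
    #    read/write latency (average and p99, in microseconds)
    $rpc bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    # 3. re-expose the same NSID through the delay bdev
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1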
00:10:52.183   18:55:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'
00:10:52.183  [2024-12-13 18:55:23.905080] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:10:58.752  Initializing NVMe Controllers
00:10:58.752  Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:10:58.752  Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:10:58.753  Initialization complete. Launching workers.
00:10:58.753  NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 130
00:10:58.753  CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 417, failed to submit 33
00:10:58.753  	 success 259, unsuccessful 158, failed 0
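For reference, the invocation that produced the run above, repeated as a commented sketch; the flag interpretations follow SPDK's perf-style option conventions and should be read as assumptions rather than documented behaviour:

    # -c 0x1   core mask (single core)      -t 5     run for 5 seconds
    # -q 64    queue depth                  -w/-M    random 50/50 read/write mix
    # -l       log level as passed          -r       transport ID of the target
    /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 \
        -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'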
00:10:58.753   18:55:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:10:58.753   18:55:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:10:58.753   18:55:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
00:10:58.753   18:55:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:10:58.753   18:55:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:10:58.753   18:55:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:10:58.753   18:55:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:10:58.753   18:55:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:10:58.753  rmmod nvme_tcp
00:10:58.753  rmmod nvme_fabrics
00:10:58.753  rmmod nvme_keyring
00:10:58.753   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:10:58.753   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:10:58.753   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
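The unload path traced above retries removing nvme-tcp before dropping nvme-fabrics. A condensed sketch of that sequence; only the modprobe commands and the 1..20 retry range are taken from the log, the break/sleep structure is an illustrative assumption:

    set +e
    for i in {1..20}; do
        # keep trying until the kernel lets the transport module go
        modprobe -v -r nvme-tcp && break
        sleep 1   # illustrative back-off, not taken from the log
    done
    modprobe -v -r nvme-fabrics
    set -e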
00:10:58.753   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 83531 ']'
00:10:58.753   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 83531
00:10:58.753   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 83531 ']'
00:10:58.753   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 83531
00:10:58.753    18:55:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname
00:10:58.753   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:58.753    18:55:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83531
00:10:58.753  killing process with pid 83531
00:10:58.753   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:10:58.753   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:10:58.753   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83531'
00:10:58.753   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 83531
00:10:58.753   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 83531
00:10:58.753   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:10:58.753   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:10:58.753   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:10:58.753   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
00:10:58.753   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:10:58.753   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save
00:10:58.753   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore
00:10:58.753   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:10:58.753   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:10:58.753   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:10:58.753   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:10:58.753   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:10:58.753   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:10:58.753   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:10:58.753   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:10:58.753   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:10:58.753   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:10:58.753   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:10:58.753   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:10:58.753   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:10:58.753   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:10:58.753   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:10:58.753   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns
00:10:58.753   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:58.753   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:58.753    18:55:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:58.753   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0
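The nvmf_veth_fini / remove_spdk_ns steps above tear the virtual test network back down. A condensed sketch of the same interface cleanup, with the device and namespace names exactly as used by this environment:

    # detach the four veth legs from the bridge and bring them down
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster
        ip link set "$dev" down
    done
    ip link delete nvmf_br type bridge       # remove the bridge itself
    ip link delete nvmf_init_if              # initiator-side interfaces
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if    # target side, inside the netns
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2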
00:10:58.753  
00:10:58.753  real	0m24.224s
00:10:58.753  user	0m39.333s
00:10:58.753  sys	0m6.566s
00:10:58.753   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:58.753  ************************************
00:10:58.753  END TEST nvmf_zcopy
00:10:58.753  ************************************
00:10:58.753   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:58.753   18:55:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:10:58.753   18:55:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:10:58.753   18:55:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:58.753   18:55:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:10:58.753  ************************************
00:10:58.753  START TEST nvmf_nmic
00:10:58.753  ************************************
00:10:58.753   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:10:59.012  * Looking for test storage...
00:10:59.012  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:10:59.012    18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:10:59.012     18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version
00:10:59.012     18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:10:59.012    18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:10:59.012    18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:10:59.012    18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l
00:10:59.012    18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l
00:10:59.012    18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-:
00:10:59.012    18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1
00:10:59.012    18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-:
00:10:59.012    18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2
00:10:59.012    18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<'
00:10:59.012    18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2
00:10:59.012    18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1
00:10:59.012    18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:10:59.012    18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in
00:10:59.012    18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1
00:10:59.012    18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 ))
00:10:59.012    18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:10:59.012     18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1
00:10:59.012     18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1
00:10:59.012     18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:59.012     18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1
00:10:59.012    18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1
00:10:59.012     18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2
00:10:59.012     18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2
00:10:59.012     18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:10:59.012     18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2
00:10:59.012    18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2
00:10:59.012    18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:10:59.012    18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:10:59.013    18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0
00:10:59.013    18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:10:59.013    18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:10:59.013  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:59.013  		--rc genhtml_branch_coverage=1
00:10:59.013  		--rc genhtml_function_coverage=1
00:10:59.013  		--rc genhtml_legend=1
00:10:59.013  		--rc geninfo_all_blocks=1
00:10:59.013  		--rc geninfo_unexecuted_blocks=1
00:10:59.013  		
00:10:59.013  		'
00:10:59.013    18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:10:59.013  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:59.013  		--rc genhtml_branch_coverage=1
00:10:59.013  		--rc genhtml_function_coverage=1
00:10:59.013  		--rc genhtml_legend=1
00:10:59.013  		--rc geninfo_all_blocks=1
00:10:59.013  		--rc geninfo_unexecuted_blocks=1
00:10:59.013  		
00:10:59.013  		'
00:10:59.013    18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:10:59.013  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:59.013  		--rc genhtml_branch_coverage=1
00:10:59.013  		--rc genhtml_function_coverage=1
00:10:59.013  		--rc genhtml_legend=1
00:10:59.013  		--rc geninfo_all_blocks=1
00:10:59.013  		--rc geninfo_unexecuted_blocks=1
00:10:59.013  		
00:10:59.013  		'
00:10:59.013    18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:10:59.013  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:59.013  		--rc genhtml_branch_coverage=1
00:10:59.013  		--rc genhtml_function_coverage=1
00:10:59.013  		--rc genhtml_legend=1
00:10:59.013  		--rc geninfo_all_blocks=1
00:10:59.013  		--rc geninfo_unexecuted_blocks=1
00:10:59.013  		
00:10:59.013  		'
00:10:59.013   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:10:59.013     18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s
00:10:59.013    18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:10:59.013    18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:10:59.013    18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:10:59.013    18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:10:59.013    18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:10:59.013    18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:10:59.013    18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:10:59.013    18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:10:59.013    18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:10:59.013     18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:10:59.013    18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:10:59.013    18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:10:59.013    18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:10:59.013    18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:10:59.013    18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:10:59.013    18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:10:59.013    18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:10:59.013     18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob
00:10:59.013     18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:10:59.013     18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:10:59.013     18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:10:59.013      18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:59.013      18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:59.013      18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:59.013      18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH
00:10:59.013      18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:59.013    18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0
00:10:59.013    18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:10:59.013    18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:10:59.013    18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:10:59.013    18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:10:59.013    18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:10:59.013    18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:10:59.013  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:10:59.013    18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:10:59.013    18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:10:59.013    18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0
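Note on the "integer expression expected" message a few lines above: nvmf/common.sh line 33 runs a numeric test ('[' '' -eq 1 ']') on a variable that is empty in this run, so bash prints the warning, the test fails, and the branch is simply skipped. A defensive form of such a guard might look like the following sketch (the names here are hypothetical, not the ones used in the repo):

    # sketch only: default an empty/unset flag to 0 before the numeric comparison
    [ "${SPDK_TEST_SOME_FLAG:-0}" -eq 1 ] && enable_optional_feature   # hypothetical names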
00:10:59.013   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64
00:10:59.013   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:10:59.013   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit
00:10:59.013   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:10:59.013   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:10:59.013   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs
00:10:59.013   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no
00:10:59.013   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns
00:10:59.013   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:59.013   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:59.013    18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:59.013   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:10:59.013   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:10:59.013   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:10:59.013   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:10:59.013   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:10:59.013   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init
00:10:59.013   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:10:59.013   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:10:59.013   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:10:59.013   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:10:59.013   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:10:59.013   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:10:59.013   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:10:59.013   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:10:59.013   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:10:59.013   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:10:59.013   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:10:59.013   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:10:59.013   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:10:59.013   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:10:59.013   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:10:59.013   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:10:59.013   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:10:59.013  Cannot find device "nvmf_init_br"
00:10:59.013   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true
00:10:59.013   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:10:59.013  Cannot find device "nvmf_init_br2"
00:10:59.013   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true
00:10:59.013   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:10:59.013  Cannot find device "nvmf_tgt_br"
00:10:59.013   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true
00:10:59.013   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:10:59.013  Cannot find device "nvmf_tgt_br2"
00:10:59.013   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true
00:10:59.013   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:10:59.013  Cannot find device "nvmf_init_br"
00:10:59.013   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true
00:10:59.013   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:10:59.272  Cannot find device "nvmf_init_br2"
00:10:59.272   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true
00:10:59.272   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:10:59.272  Cannot find device "nvmf_tgt_br"
00:10:59.272   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true
00:10:59.272   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:10:59.272  Cannot find device "nvmf_tgt_br2"
00:10:59.272   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true
00:10:59.272   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:10:59.272  Cannot find device "nvmf_br"
00:10:59.272   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true
00:10:59.272   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:10:59.272  Cannot find device "nvmf_init_if"
00:10:59.272   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true
00:10:59.272   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:10:59.272  Cannot find device "nvmf_init_if2"
00:10:59.272   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true
00:10:59.272   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:10:59.272  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:10:59.272   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true
00:10:59.272   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:10:59.272  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:10:59.272   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true
00:10:59.272   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:10:59.272   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:10:59.272   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:10:59.272   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:10:59.272   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:10:59.272   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:10:59.272   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:10:59.272   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:10:59.272   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:10:59.272   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:10:59.272   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:10:59.272   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:10:59.272   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:10:59.272   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:10:59.272   18:55:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:10:59.272   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:10:59.272   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:10:59.272   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:10:59.272   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:10:59.272   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:10:59.272   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:10:59.272   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:10:59.272   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:10:59.272   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:10:59.272   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:10:59.272   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:10:59.272   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:10:59.273   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:10:59.532   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:10:59.532   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:10:59.532   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:10:59.532   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:10:59.532   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:10:59.532  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:10:59.532  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms
00:10:59.532  
00:10:59.532  --- 10.0.0.3 ping statistics ---
00:10:59.532  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:59.532  rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms
00:10:59.532   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:10:59.532  PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:10:59.532  64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms
00:10:59.532  
00:10:59.532  --- 10.0.0.4 ping statistics ---
00:10:59.532  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:59.532  rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms
00:10:59.532   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:10:59.532  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:10:59.532  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms
00:10:59.532  
00:10:59.532  --- 10.0.0.1 ping statistics ---
00:10:59.532  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:59.532  rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms
00:10:59.532   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:10:59.532  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:10:59.532  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms
00:10:59.532  
00:10:59.532  --- 10.0.0.2 ping statistics ---
00:10:59.532  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:59.532  rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms
00:10:59.532   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:10:59.532   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@461 -- # return 0
00:10:59.532   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:10:59.532   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:10:59.532   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:10:59.532   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:10:59.532   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:10:59.532   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:10:59.532   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp
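The nvmf_veth_init block above builds the test network: two veth pairs bridged together, with the target-side ends moved into the nvmf_tgt_ns_spdk namespace, iptables ACCEPT rules for port 4420, and ping checks in both directions. A condensed sketch of that topology, using only commands that appear in the trace and showing just the first initiator/target pair (the if2/br2 pair is set up the same way):

    # sketch of the nvmf_veth_init topology (second interface pair omitted)
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3                                    # initiator -> target
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1     # target -> initiator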
00:10:59.532   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF
00:10:59.532   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:10:59.532   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable
00:10:59.532   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:10:59.532   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=84073
00:10:59.532   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:10:59.532   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 84073
00:10:59.532   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 84073 ']'
00:10:59.532   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:59.532   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:59.532  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:59.532   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:59.532   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:59.532   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:10:59.532  [2024-12-13 18:55:31.208967] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:10:59.532  [2024-12-13 18:55:31.209056] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:10:59.792  [2024-12-13 18:55:31.358157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:10:59.792  [2024-12-13 18:55:31.394623] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:10:59.792  [2024-12-13 18:55:31.394696] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:10:59.792  [2024-12-13 18:55:31.394722] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:10:59.792  [2024-12-13 18:55:31.394730] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:10:59.792  [2024-12-13 18:55:31.394737] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:10:59.792  [2024-12-13 18:55:31.395995] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:10:59.792  [2024-12-13 18:55:31.396152] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:10:59.792  [2024-12-13 18:55:31.396275] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:10:59.792  [2024-12-13 18:55:31.396275] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
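nvmfappstart launches the target inside the namespace and then waits for its RPC socket; the exact command is visible above at nvmf/common.sh@508. A rough equivalent of that start-and-wait sequence (the polling loop here is a simplification of waitforlisten, not its actual implementation):

    # sketch: start nvmf_tgt in the test namespace and wait for the RPC socket
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done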
00:10:59.792   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:59.792   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0
00:10:59.792   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:10:59.792   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable
00:10:59.792   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:10:59.792   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:10:59.792   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:10:59.792   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:59.792   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:10:59.792  [2024-12-13 18:55:31.573836] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:10:59.792   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:59.792   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:10:59.792   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:59.792   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:11:00.051  Malloc0
00:11:00.051   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:00.051   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:11:00.051   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:00.051   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:11:00.051   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:00.051   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:11:00.051   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:00.051   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:11:00.051   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:00.051   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:11:00.051   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:00.051   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:11:00.051  [2024-12-13 18:55:31.636271] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:11:00.051   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:00.051   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems'
00:11:00.051  test case1: single bdev can't be used in multiple subsystems
00:11:00.051   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
00:11:00.051   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:00.051   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:11:00.051   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:00.051   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420
00:11:00.051   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:00.051   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:11:00.051   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:00.051   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0
00:11:00.051   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0
00:11:00.051   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:00.051   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:11:00.051  [2024-12-13 18:55:31.660115] bdev.c:8538:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target
00:11:00.051  [2024-12-13 18:55:31.660165] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1
00:11:00.051  [2024-12-13 18:55:31.660193] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:00.051  2024/12/13 18:55:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:11:00.051  request:
00:11:00.051  {
00:11:00.051  "method": "nvmf_subsystem_add_ns",
00:11:00.051  "params": {
00:11:00.052  "nqn": "nqn.2016-06.io.spdk:cnode2",
00:11:00.052  "namespace": {
00:11:00.052  "bdev_name": "Malloc0",
00:11:00.052  "no_auto_visible": false,
00:11:00.052  "hide_metadata": false
00:11:00.052  }
00:11:00.052  }
00:11:00.052  }
00:11:00.052  Got JSON-RPC error response
00:11:00.052  GoRPCClient: error on JSON-RPC call
00:11:00.052   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:11:00.052   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1
00:11:00.052   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']'
00:11:00.052   Adding namespace failed - expected result.
00:11:00.052   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.'
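Test case1 above verifies that a single bdev cannot be added to two subsystems: the second nvmf_subsystem_add_ns fails because Malloc0 is already claimed, and the JSON-RPC error is the expected result. The rpc_cmd calls in the trace forward to scripts/rpc.py; the same sequence can be reproduced roughly as:

    # sketch of the case1 RPC sequence (rpc_cmd in the trace wraps this script)
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # expected to fail: Malloc0 already claimed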
00:11:00.052  test case2: host connect to nvmf target in multiple paths
00:11:00.052   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths'
00:11:00.052   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
00:11:00.052   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:00.052   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:11:00.052  [2024-12-13 18:55:31.676210] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 ***
00:11:00.052   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:00.052   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid=8f716199-a3ae-4f70-9a3a-0556e5b7497a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
00:11:00.052   18:55:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid=8f716199-a3ae-4f70-9a3a-0556e5b7497a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421
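Test case2 connects the host to the same subsystem over both listeners (10.0.0.3:4420 and 10.0.0.3:4421), so the kernel initiator ends up with two controllers backing one namespace. One way to confirm the multipath state after the two connects above (a sketch, not part of the script):

    # sketch: both paths should appear under nqn.2016-06.io.spdk:cnode1
    nvme list-subsys
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # the waitforserial readiness check used below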
00:11:00.311   18:55:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME
00:11:00.311   18:55:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0
00:11:00.311   18:55:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:11:00.311   18:55:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:11:00.311   18:55:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2
00:11:02.215   18:55:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:11:02.215    18:55:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:11:02.215    18:55:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:11:02.474   18:55:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:11:02.474   18:55:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:11:02.474   18:55:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0
00:11:02.474   18:55:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:11:02.474  [global]
00:11:02.474  thread=1
00:11:02.474  invalidate=1
00:11:02.474  rw=write
00:11:02.474  time_based=1
00:11:02.474  runtime=1
00:11:02.474  ioengine=libaio
00:11:02.474  direct=1
00:11:02.474  bs=4096
00:11:02.474  iodepth=1
00:11:02.474  norandommap=0
00:11:02.474  numjobs=1
00:11:02.474  
00:11:02.474  verify_dump=1
00:11:02.474  verify_backlog=512
00:11:02.474  verify_state_save=0
00:11:02.474  do_verify=1
00:11:02.474  verify=crc32c-intel
00:11:02.474  [job0]
00:11:02.474  filename=/dev/nvme0n1
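The job file above is generated by scripts/fio-wrapper from its arguments (-p nvmf -i 4096 -d 1 -t write -r 1 -v map to bs=4096, iodepth=1, rw=write, runtime=1 and the crc32c verify options). A roughly equivalent direct fio invocation would be (sketch; the wrapper remains the authoritative source of the job):

    fio --name=job0 --filename=/dev/nvme0n1 --rw=write --bs=4096 --iodepth=1 \
        --ioengine=libaio --direct=1 --time_based --runtime=1 --numjobs=1 \
        --do_verify=1 --verify=crc32c-intel --verify_dump=1 --verify_backlog=512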
00:11:02.474  Could not set queue depth (nvme0n1)
00:11:02.474  job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:11:02.474  fio-3.35
00:11:02.474  Starting 1 thread
00:11:03.855  
00:11:03.855  job0: (groupid=0, jobs=1): err= 0: pid=84169: Fri Dec 13 18:55:35 2024
00:11:03.855    read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec)
00:11:03.855      slat (nsec): min=11001, max=66520, avg=13952.38, stdev=4163.61
00:11:03.855      clat (usec): min=115, max=616, avg=138.53, stdev=27.36
00:11:03.855       lat (usec): min=127, max=629, avg=152.49, stdev=28.15
00:11:03.855      clat percentiles (usec):
00:11:03.855       |  1.00th=[  120],  5.00th=[  122], 10.00th=[  124], 20.00th=[  126],
00:11:03.855       | 30.00th=[  128], 40.00th=[  130], 50.00th=[  133], 60.00th=[  137],
00:11:03.855       | 70.00th=[  141], 80.00th=[  149], 90.00th=[  159], 95.00th=[  167],
00:11:03.855       | 99.00th=[  188], 99.50th=[  273], 99.90th=[  553], 99.95th=[  562],
00:11:03.855       | 99.99th=[  619]
00:11:03.855    write: IOPS=3715, BW=14.5MiB/s (15.2MB/s)(14.5MiB/1001msec); 0 zone resets
00:11:03.855      slat (nsec): min=16578, max=80108, avg=20653.34, stdev=6003.23
00:11:03.855      clat (usec): min=82, max=481, avg=98.49, stdev=14.27
00:11:03.855       lat (usec): min=99, max=506, avg=119.14, stdev=16.11
00:11:03.855      clat percentiles (usec):
00:11:03.855       |  1.00th=[   85],  5.00th=[   87], 10.00th=[   88], 20.00th=[   90],
00:11:03.855       | 30.00th=[   92], 40.00th=[   93], 50.00th=[   95], 60.00th=[   97],
00:11:03.855       | 70.00th=[  100], 80.00th=[  106], 90.00th=[  117], 95.00th=[  124],
00:11:03.855       | 99.00th=[  141], 99.50th=[  147], 99.90th=[  235], 99.95th=[  269],
00:11:03.855       | 99.99th=[  482]
00:11:03.855     bw (  KiB/s): min=16384, max=16384, per=100.00%, avg=16384.00, stdev= 0.00, samples=1
00:11:03.855     iops        : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1
00:11:03.855    lat (usec)   : 100=36.09%, 250=63.60%, 500=0.21%, 750=0.10%
00:11:03.855    cpu          : usr=2.30%, sys=9.50%, ctx=7303, majf=0, minf=5
00:11:03.855    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:11:03.855       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:03.855       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:03.855       issued rwts: total=3584,3719,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:03.855       latency   : target=0, window=0, percentile=100.00%, depth=1
00:11:03.855  
00:11:03.855  Run status group 0 (all jobs):
00:11:03.855     READ: bw=14.0MiB/s (14.7MB/s), 14.0MiB/s-14.0MiB/s (14.7MB/s-14.7MB/s), io=14.0MiB (14.7MB), run=1001-1001msec
00:11:03.855    WRITE: bw=14.5MiB/s (15.2MB/s), 14.5MiB/s-14.5MiB/s (15.2MB/s-15.2MB/s), io=14.5MiB (15.2MB), run=1001-1001msec
00:11:03.855  
00:11:03.855  Disk stats (read/write):
00:11:03.855    nvme0n1: ios=3122/3499, merge=0/0, ticks=457/367, in_queue=824, util=91.28%
00:11:03.855   18:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:11:03.855  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s)
00:11:03.855   18:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:11:03.855   18:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0
00:11:03.855   18:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:11:03.855   18:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:03.855   18:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:11:03.855   18:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:03.855   18:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0
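The disconnect above tears down both paths at once (the NQN-based disconnect removes 2 controllers), and waitforserial_disconnect confirms the namespace is gone by checking that the serial no longer appears in lsblk. Condensed, the check amounts to:

    # sketch of the disconnect verification
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME || echo "serial gone - disconnect complete"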
00:11:03.855   18:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT
00:11:03.855   18:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini
00:11:03.855   18:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup
00:11:03.855   18:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync
00:11:03.855   18:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:11:03.855   18:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e
00:11:03.855   18:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20}
00:11:03.855   18:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:11:03.855  rmmod nvme_tcp
00:11:03.855  rmmod nvme_fabrics
00:11:03.855  rmmod nvme_keyring
00:11:03.855   18:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:11:03.855   18:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e
00:11:03.855   18:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0
00:11:03.855   18:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 84073 ']'
00:11:03.855   18:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 84073
00:11:03.855   18:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 84073 ']'
00:11:03.855   18:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 84073
00:11:03.855    18:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname
00:11:03.855   18:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:03.855    18:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84073
00:11:04.155   18:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:11:04.155   18:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:11:04.155  killing process with pid 84073
00:11:04.155   18:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84073'
00:11:04.155   18:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 84073
00:11:04.155   18:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 84073
00:11:04.155   18:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:11:04.155   18:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:11:04.155   18:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:11:04.155   18:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr
00:11:04.155   18:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save
00:11:04.155   18:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:11:04.155   18:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore
00:11:04.155   18:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:11:04.155   18:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:11:04.155   18:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:11:04.156   18:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:11:04.438   18:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:11:04.438   18:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:11:04.438   18:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:11:04.438   18:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:11:04.438   18:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:11:04.438   18:55:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:11:04.438   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:11:04.438   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:11:04.438   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:11:04.438   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:11:04.438   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:11:04.438   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns
00:11:04.438   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:04.438   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:04.438    18:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:04.438   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0
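nvmftestfini above reverses the setup: iptr strips only the iptables rules tagged with the SPDK_NVMF comment, the bridge and the initiator-side veth ends are deleted, the target-side links are removed inside the namespace, and _remove_spdk_ns cleans up the namespace itself. A manual cleanup along the same lines (the final netns delete is an assumption about what _remove_spdk_ns does; its output is suppressed in the trace):

    # sketch of the teardown sequence
    iptables-save | grep -v SPDK_NVMF | iptables-restore       # drop only the tagged rules
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk                            # assumption: handled by _remove_spdk_ns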
00:11:04.438  
00:11:04.438  real	0m5.610s
00:11:04.438  user	0m17.577s
00:11:04.438  sys	0m1.491s
00:11:04.438   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:04.438  ************************************
00:11:04.438  END TEST nvmf_nmic
00:11:04.438  ************************************
00:11:04.438   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:11:04.438   18:55:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp
00:11:04.438   18:55:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:11:04.438   18:55:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:04.438   18:55:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:11:04.438  ************************************
00:11:04.438  START TEST nvmf_fio_target
00:11:04.438  ************************************
00:11:04.438   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp
00:11:04.698  * Looking for test storage...
00:11:04.698  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:11:04.698    18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:11:04.698     18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version
00:11:04.698     18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:11:04.698    18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:11:04.698    18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:11:04.698    18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l
00:11:04.698    18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l
00:11:04.698    18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-:
00:11:04.698    18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1
00:11:04.698    18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-:
00:11:04.698    18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2
00:11:04.698    18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<'
00:11:04.698    18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2
00:11:04.698    18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1
00:11:04.698    18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:11:04.698    18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in
00:11:04.698    18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1
00:11:04.698    18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 ))
00:11:04.698    18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:04.698     18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1
00:11:04.698     18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1
00:11:04.698     18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:04.698     18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1
00:11:04.698    18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1
00:11:04.698     18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2
00:11:04.698     18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2
00:11:04.698     18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:04.698     18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2
00:11:04.698    18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2
00:11:04.698    18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:11:04.698    18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:11:04.698    18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0
00:11:04.698    18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:04.698    18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:11:04.698  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:04.698  		--rc genhtml_branch_coverage=1
00:11:04.698  		--rc genhtml_function_coverage=1
00:11:04.698  		--rc genhtml_legend=1
00:11:04.698  		--rc geninfo_all_blocks=1
00:11:04.698  		--rc geninfo_unexecuted_blocks=1
00:11:04.698  		
00:11:04.698  		'
00:11:04.698    18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:11:04.698  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:04.698  		--rc genhtml_branch_coverage=1
00:11:04.698  		--rc genhtml_function_coverage=1
00:11:04.698  		--rc genhtml_legend=1
00:11:04.698  		--rc geninfo_all_blocks=1
00:11:04.698  		--rc geninfo_unexecuted_blocks=1
00:11:04.698  		
00:11:04.698  		'
00:11:04.698    18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:11:04.698  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:04.698  		--rc genhtml_branch_coverage=1
00:11:04.698  		--rc genhtml_function_coverage=1
00:11:04.698  		--rc genhtml_legend=1
00:11:04.698  		--rc geninfo_all_blocks=1
00:11:04.698  		--rc geninfo_unexecuted_blocks=1
00:11:04.698  		
00:11:04.698  		'
00:11:04.698    18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:11:04.698  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:04.698  		--rc genhtml_branch_coverage=1
00:11:04.698  		--rc genhtml_function_coverage=1
00:11:04.698  		--rc genhtml_legend=1
00:11:04.698  		--rc geninfo_all_blocks=1
00:11:04.698  		--rc geninfo_unexecuted_blocks=1
00:11:04.698  		
00:11:04.698  		'
00:11:04.698   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:11:04.698     18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s
00:11:04.698    18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:11:04.698    18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:11:04.698    18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:11:04.698    18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:11:04.698    18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:11:04.698    18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:11:04.699    18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:11:04.699    18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:11:04.699    18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:11:04.699     18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:11:04.699    18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:11:04.699    18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:11:04.699    18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:11:04.699    18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:11:04.699    18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:11:04.699    18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:11:04.699    18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:11:04.699     18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob
00:11:04.699     18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:11:04.699     18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:11:04.699     18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:11:04.699      18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:04.699      18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:04.699      18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:04.699      18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH
00:11:04.699      18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:04.699    18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0
00:11:04.699    18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:11:04.699    18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:11:04.699    18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:11:04.699    18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:11:04.699    18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:11:04.699    18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:11:04.699  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:11:04.699    18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:11:04.699    18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:11:04.699    18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0
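The "integer expression expected" warning above is bash complaining that line 33 of test/nvmf/common.sh ran [ '' -eq 1 ] with an empty variable; the test simply evaluates false and the run continues, as the following lines show. A minimal sketch of the behaviour and a null-safe variant (illustrative only, not the SPDK source):

  flag=""
  [ "$flag" -eq 1 ] && echo "set"        # reproduces the warning; the test then counts as false
  [ "${flag:-0}" -eq 1 ] && echo "set"   # null-safe variant: substitute 0 when the variable is unset/empty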
00:11:04.699   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64
00:11:04.699   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:11:04.699   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:11:04.699   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit
00:11:04.699   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:11:04.699   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:11:04.699   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs
00:11:04.699   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no
00:11:04.699   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns
00:11:04.699   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:04.699   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:04.699    18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:04.699   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:11:04.699   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:11:04.699   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:11:04.699   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:11:04.699   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:11:04.699   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init
00:11:04.699   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:11:04.699   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:11:04.699   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:11:04.699   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:11:04.699   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:11:04.699   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:11:04.699   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:11:04.699   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:11:04.699   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:11:04.699   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:11:04.699   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:11:04.699   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:11:04.699   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:11:04.699   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:11:04.699   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:11:04.699   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:11:04.699   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:11:04.699  Cannot find device "nvmf_init_br"
00:11:04.699   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true
00:11:04.699   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:11:04.699  Cannot find device "nvmf_init_br2"
00:11:04.700   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true
00:11:04.700   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:11:04.700  Cannot find device "nvmf_tgt_br"
00:11:04.700   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true
00:11:04.700   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:11:04.700  Cannot find device "nvmf_tgt_br2"
00:11:04.700   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true
00:11:04.700   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:11:04.700  Cannot find device "nvmf_init_br"
00:11:04.700   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true
00:11:04.700   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:11:04.700  Cannot find device "nvmf_init_br2"
00:11:04.700   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true
00:11:04.700   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:11:04.700  Cannot find device "nvmf_tgt_br"
00:11:04.700   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true
00:11:04.700   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:11:04.700  Cannot find device "nvmf_tgt_br2"
00:11:04.700   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true
00:11:04.700   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:11:04.959  Cannot find device "nvmf_br"
00:11:04.959   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true
00:11:04.959   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:11:04.959  Cannot find device "nvmf_init_if"
00:11:04.959   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true
00:11:04.959   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:11:04.959  Cannot find device "nvmf_init_if2"
00:11:04.959   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true
00:11:04.959   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:11:04.959  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:11:04.959   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true
00:11:04.959   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:11:04.959  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:11:04.959   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true
00:11:04.959   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:11:04.959   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:11:04.959   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:11:04.959   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:11:04.959   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:11:04.959   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:11:04.959   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:11:04.959   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:11:04.960   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:11:04.960   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:11:04.960   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:11:04.960   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:11:04.960   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:11:04.960   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:11:04.960   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:11:04.960   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:11:04.960   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:11:04.960   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:11:04.960   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:11:04.960   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:11:04.960   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:11:04.960   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:11:04.960   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:11:04.960   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:11:04.960   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:11:04.960   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:11:04.960   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:11:04.960   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:11:04.960   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:11:04.960   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:11:04.960   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:11:04.960   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
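The ipts helper above wraps iptables and appends an '-m comment' tag to each rule it installs; the assumption is that the SPDK_NVMF: prefix lets teardown later find and delete exactly the rules this run added. A rough sketch of working with such tagged rules (not the SPDK cleanup code itself):

  # list the rules this run added (they all carry the SPDK_NVMF: comment)
  iptables -S INPUT | grep 'SPDK_NVMF:'
  # removing one replays the same rule specification with -D, comment included
  iptables -D INPUT -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'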
00:11:04.960   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:11:04.960  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:11:04.960  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms
00:11:04.960  
00:11:04.960  --- 10.0.0.3 ping statistics ---
00:11:04.960  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:04.960  rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms
00:11:04.960   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:11:04.960  PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:11:04.960  64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms
00:11:04.960  
00:11:04.960  --- 10.0.0.4 ping statistics ---
00:11:04.960  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:04.960  rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms
00:11:04.960   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:11:05.218  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:05.218  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms
00:11:05.218  
00:11:05.218  --- 10.0.0.1 ping statistics ---
00:11:05.218  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:05.218  rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms
00:11:05.218   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:11:05.218  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:11:05.218  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms
00:11:05.218  
00:11:05.218  --- 10.0.0.2 ping statistics ---
00:11:05.218  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:05.218  rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms
00:11:05.218   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:05.218   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0
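nvmf_veth_init has now finished (return 0 above). Condensing the ip commands it just ran: two veth pairs serve the initiator side and two the target side, the target ends are moved into the nvmf_tgt_ns_spdk namespace, and all the *_br peers are enslaved to the nvmf_br bridge; the four pings confirm 10.0.0.1/.2 (initiator) and 10.0.0.3/.4 (target) reach each other in both directions. An annotated recap of the same commands:

  ip netns add nvmf_tgt_ns_spdk                               # target runs in its own net namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side veth pair (plus *_if2/_br2)
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target-side veth pair   (plus *_if2/_br2)
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move the target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if                    # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address
  ip link add nvmf_br type bridge && ip link set nvmf_br up   # bridge joins the two sides
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br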
00:11:05.218   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:11:05.218   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:05.218   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:11:05.218   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:11:05.218   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:05.218   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:11:05.218   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:11:05.219   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF
00:11:05.219   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:11:05.219   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable
00:11:05.219   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:11:05.219   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=84405
00:11:05.219   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 84405
00:11:05.219   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 84405 ']'
00:11:05.219   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:05.219   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:05.219  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:05.219   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:05.219   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:05.219   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:11:05.219   18:55:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:11:05.219  [2024-12-13 18:55:36.888785] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:11:05.219  [2024-12-13 18:55:36.888878] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:05.219  [2024-12-13 18:55:37.035372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:11:05.478  [2024-12-13 18:55:37.073819] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:11:05.478  [2024-12-13 18:55:37.074055] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:11:05.478  [2024-12-13 18:55:37.074206] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:11:05.478  [2024-12-13 18:55:37.074444] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:11:05.478  [2024-12-13 18:55:37.074631] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:11:05.478  [2024-12-13 18:55:37.075848] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:11:05.478  [2024-12-13 18:55:37.075984] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:11:05.478  [2024-12-13 18:55:37.076496] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:11:05.478  [2024-12-13 18:55:37.076501] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
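The target was launched inside the namespace with '-i 0 -e 0xFFFF -m 0xF': -m 0xF is the core mask (bits 0-3, matching the four reactors reported above), -e 0xFFFF is the tracepoint group mask echoed by the app_setup_trace notices, and -i carries NVMF_APP_SHM_ID (0 here), which the later process_shm/waitforlisten handling reuses. A quick, purely illustrative way to read the mask:

  echo 'obase=2; ibase=16; F' | bc    # prints 1111 -> cores 0, 1, 2 and 3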
00:11:05.478   18:55:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:05.478   18:55:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0
00:11:05.478   18:55:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:11:05.478   18:55:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable
00:11:05.478   18:55:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:11:05.478   18:55:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:11:05.478   18:55:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:11:05.737  [2024-12-13 18:55:37.539033] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:11:05.997    18:55:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:11:06.256   18:55:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 '
00:11:06.256    18:55:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:11:06.516   18:55:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1
00:11:06.516    18:55:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:11:06.774   18:55:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 '
00:11:06.774    18:55:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:11:07.033   18:55:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3
00:11:07.033   18:55:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
00:11:07.293    18:55:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:11:07.552   18:55:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 '
00:11:07.552    18:55:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:11:07.811   18:55:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 '
00:11:07.811    18:55:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:11:08.379   18:55:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6
00:11:08.379   18:55:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
00:11:08.379   18:55:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:11:08.638   18:55:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs
00:11:08.638   18:55:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:11:08.899   18:55:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs
00:11:08.899   18:55:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:11:09.467   18:55:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:11:09.467  [2024-12-13 18:55:41.217509] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:11:09.467   18:55:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
00:11:09.726   18:55:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
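Target-side configuration is now complete. Collapsed into one annotated sequence (same rpc.py calls as above, with the repo path shortened): seven 64 MiB malloc bdevs are created, two are assembled into a raid0 bdev and three into a concat RAID bdev, and all four resulting devices are exposed as namespaces of cnode1 listening on 10.0.0.3:4420.

  rpc.py nvmf_create_transport -t tcp -o -u 8192                        # TCP transport with the options from NVMF_TRANSPORT_OPTS
  rpc.py bdev_malloc_create 64 512                                       # repeated: Malloc0 .. Malloc6 (64 MiB, 512 B blocks)
  rpc.py bdev_raid_create -n raid0   -z 64 -r 0      -b 'Malloc2 Malloc3'
  rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0        # ns1
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1        # ns2
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0          # ns3
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0        # ns4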
00:11:09.985   18:55:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid=8f716199-a3ae-4f70-9a3a-0556e5b7497a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
00:11:10.244   18:55:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4
00:11:10.244   18:55:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0
00:11:10.244   18:55:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:11:10.244   18:55:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]]
00:11:10.244   18:55:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4
00:11:10.244   18:55:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2
00:11:12.147   18:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:11:12.147    18:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:11:12.147    18:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:11:12.147   18:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4
00:11:12.147   18:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:11:12.147   18:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0
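waitforserial confirms the host side: after the nvme connect above, the four namespaces of cnode1 appear as /dev/nvme0n1 through /dev/nvme0n4, and the helper polls lsblk until it counts four block devices carrying the subsystem serial. A stripped-down sketch of that loop (the real helper lives in common/autotest_common.sh):

  want=4; i=0
  while (( i++ <= 15 )); do
      have=$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)
      (( have == want )) && break       # all four namespaces are visible
      sleep 2
  done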
00:11:12.147   18:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:11:12.147  [global]
00:11:12.147  thread=1
00:11:12.147  invalidate=1
00:11:12.147  rw=write
00:11:12.147  time_based=1
00:11:12.147  runtime=1
00:11:12.147  ioengine=libaio
00:11:12.147  direct=1
00:11:12.147  bs=4096
00:11:12.148  iodepth=1
00:11:12.148  norandommap=0
00:11:12.148  numjobs=1
00:11:12.148  
00:11:12.148  verify_dump=1
00:11:12.148  verify_backlog=512
00:11:12.148  verify_state_save=0
00:11:12.148  do_verify=1
00:11:12.148  verify=crc32c-intel
00:11:12.148  [job0]
00:11:12.148  filename=/dev/nvme0n1
00:11:12.148  [job1]
00:11:12.148  filename=/dev/nvme0n2
00:11:12.148  [job2]
00:11:12.148  filename=/dev/nvme0n3
00:11:12.148  [job3]
00:11:12.148  filename=/dev/nvme0n4
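The job file the fio-wrapper generated above drives one libaio job per namespace: 4 KiB sequential writes (rw=write) at queue depth 1 for one second, with do_verify=1 and verify=crc32c-intel so each written block is read back and checksummed. Run by hand, the equivalent for a single device would look roughly like this (nvmf_write.fio is a hypothetical file name for the job file shown above):

  fio nvmf_write.fio                      # [global] applies to job0..job3, one per /dev/nvme0n{1..4}
  # or, all options on the command line for one device:
  fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 --rw=write \
      --bs=4096 --iodepth=1 --numjobs=1 --time_based=1 --runtime=1 \
      --do_verify=1 --verify=crc32c-intel --verify_backlog=512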
00:11:12.406  Could not set queue depth (nvme0n1)
00:11:12.406  Could not set queue depth (nvme0n2)
00:11:12.406  Could not set queue depth (nvme0n3)
00:11:12.406  Could not set queue depth (nvme0n4)
00:11:12.406  job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:11:12.406  job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:11:12.406  job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:11:12.406  job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:11:12.406  fio-3.35
00:11:12.406  Starting 4 threads
00:11:13.784  
00:11:13.784  job0: (groupid=0, jobs=1): err= 0: pid=84690: Fri Dec 13 18:55:45 2024
00:11:13.784    read: IOPS=2240, BW=8963KiB/s (9178kB/s)(8972KiB/1001msec)
00:11:13.784      slat (nsec): min=9441, max=41863, avg=13269.23, stdev=3759.24
00:11:13.784      clat (usec): min=138, max=734, avg=222.52, stdev=62.76
00:11:13.784       lat (usec): min=150, max=751, avg=235.79, stdev=61.74
00:11:13.784      clat percentiles (usec):
00:11:13.784       |  1.00th=[  143],  5.00th=[  149], 10.00th=[  153], 20.00th=[  159],
00:11:13.784       | 30.00th=[  167], 40.00th=[  178], 50.00th=[  231], 60.00th=[  251],
00:11:13.784       | 70.00th=[  265], 80.00th=[  281], 90.00th=[  297], 95.00th=[  330],
00:11:13.784       | 99.00th=[  359], 99.50th=[  371], 99.90th=[  392], 99.95th=[  529],
00:11:13.784       | 99.99th=[  734]
00:11:13.784    write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets
00:11:13.784      slat (nsec): min=10249, max=74653, avg=21035.79, stdev=5937.45
00:11:13.784      clat (usec): min=98, max=313, avg=160.23, stdev=45.73
00:11:13.784       lat (usec): min=116, max=332, avg=181.27, stdev=44.69
00:11:13.784      clat percentiles (usec):
00:11:13.784       |  1.00th=[  109],  5.00th=[  115], 10.00th=[  118], 20.00th=[  123],
00:11:13.784       | 30.00th=[  127], 40.00th=[  131], 50.00th=[  139], 60.00th=[  151],
00:11:13.784       | 70.00th=[  192], 80.00th=[  210], 90.00th=[  231], 95.00th=[  245],
00:11:13.784       | 99.00th=[  273], 99.50th=[  281], 99.90th=[  297], 99.95th=[  302],
00:11:13.784       | 99.99th=[  314]
00:11:13.784     bw (  KiB/s): min=12288, max=12288, per=27.30%, avg=12288.00, stdev= 0.00, samples=1
00:11:13.784     iops        : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1
00:11:13.784    lat (usec)   : 100=0.02%, 250=78.45%, 500=21.49%, 750=0.04%
00:11:13.784    cpu          : usr=1.40%, sys=6.80%, ctx=4803, majf=0, minf=11
00:11:13.784    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:11:13.784       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:13.784       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:13.784       issued rwts: total=2243,2560,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:13.784       latency   : target=0, window=0, percentile=100.00%, depth=1
00:11:13.784  job1: (groupid=0, jobs=1): err= 0: pid=84691: Fri Dec 13 18:55:45 2024
00:11:13.784    read: IOPS=2113, BW=8456KiB/s (8658kB/s)(8464KiB/1001msec)
00:11:13.784      slat (nsec): min=7683, max=57657, avg=14638.75, stdev=3946.02
00:11:13.784      clat (usec): min=131, max=385, avg=218.98, stdev=61.02
00:11:13.784       lat (usec): min=147, max=399, avg=233.62, stdev=59.56
00:11:13.784      clat percentiles (usec):
00:11:13.784       |  1.00th=[  139],  5.00th=[  147], 10.00th=[  151], 20.00th=[  157],
00:11:13.784       | 30.00th=[  163], 40.00th=[  174], 50.00th=[  229], 60.00th=[  251],
00:11:13.784       | 70.00th=[  265], 80.00th=[  277], 90.00th=[  293], 95.00th=[  314],
00:11:13.784       | 99.00th=[  359], 99.50th=[  367], 99.90th=[  379], 99.95th=[  388],
00:11:13.784       | 99.99th=[  388]
00:11:13.784    write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets
00:11:13.784      slat (nsec): min=10450, max=87766, avg=22010.24, stdev=6567.84
00:11:13.784      clat (usec): min=102, max=5762, avg=172.68, stdev=183.96
00:11:13.784       lat (usec): min=120, max=5781, avg=194.69, stdev=183.86
00:11:13.784      clat percentiles (usec):
00:11:13.784       |  1.00th=[  108],  5.00th=[  115], 10.00th=[  118], 20.00th=[  123],
00:11:13.784       | 30.00th=[  127], 40.00th=[  133], 50.00th=[  141], 60.00th=[  157],
00:11:13.784       | 70.00th=[  198], 80.00th=[  215], 90.00th=[  235], 95.00th=[  249],
00:11:13.784       | 99.00th=[  281], 99.50th=[  519], 99.90th=[ 3458], 99.95th=[ 4490],
00:11:13.784       | 99.99th=[ 5735]
00:11:13.784     bw (  KiB/s): min=12288, max=12288, per=27.30%, avg=12288.00, stdev= 0.00, samples=1
00:11:13.784     iops        : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1
00:11:13.784    lat (usec)   : 250=78.85%, 500=20.87%, 750=0.09%
00:11:13.784    lat (msec)   : 2=0.11%, 4=0.04%, 10=0.04%
00:11:13.784    cpu          : usr=2.00%, sys=6.50%, ctx=4678, majf=0, minf=7
00:11:13.784    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:11:13.784       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:13.784       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:13.784       issued rwts: total=2116,2560,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:13.784       latency   : target=0, window=0, percentile=100.00%, depth=1
00:11:13.784  job2: (groupid=0, jobs=1): err= 0: pid=84692: Fri Dec 13 18:55:45 2024
00:11:13.784    read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec)
00:11:13.784      slat (nsec): min=11279, max=45653, avg=13813.80, stdev=3800.81
00:11:13.784      clat (usec): min=149, max=497, avg=181.47, stdev=18.10
00:11:13.784       lat (usec): min=161, max=510, avg=195.28, stdev=18.42
00:11:13.784      clat percentiles (usec):
00:11:13.784       |  1.00th=[  155],  5.00th=[  159], 10.00th=[  163], 20.00th=[  167],
00:11:13.784       | 30.00th=[  172], 40.00th=[  174], 50.00th=[  180], 60.00th=[  184],
00:11:13.784       | 70.00th=[  190], 80.00th=[  196], 90.00th=[  204], 95.00th=[  210],
00:11:13.784       | 99.00th=[  225], 99.50th=[  229], 99.90th=[  258], 99.95th=[  453],
00:11:13.784       | 99.99th=[  498]
00:11:13.784    write: IOPS=3067, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets
00:11:13.784      slat (nsec): min=16735, max=79880, avg=21037.76, stdev=5531.40
00:11:13.784      clat (usec): min=112, max=208, avg=139.24, stdev=14.29
00:11:13.784       lat (usec): min=129, max=271, avg=160.28, stdev=15.83
00:11:13.784      clat percentiles (usec):
00:11:13.784       |  1.00th=[  117],  5.00th=[  120], 10.00th=[  123], 20.00th=[  127],
00:11:13.784       | 30.00th=[  130], 40.00th=[  135], 50.00th=[  137], 60.00th=[  141],
00:11:13.784       | 70.00th=[  147], 80.00th=[  153], 90.00th=[  161], 95.00th=[  167],
00:11:13.784       | 99.00th=[  176], 99.50th=[  178], 99.90th=[  192], 99.95th=[  192],
00:11:13.784       | 99.99th=[  210]
00:11:13.784     bw (  KiB/s): min=12288, max=12288, per=27.30%, avg=12288.00, stdev= 0.00, samples=1
00:11:13.784     iops        : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1
00:11:13.784    lat (usec)   : 250=99.95%, 500=0.05%
00:11:13.784    cpu          : usr=2.50%, sys=6.80%, ctx=5631, majf=0, minf=11
00:11:13.784    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:11:13.785       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:13.785       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:13.785       issued rwts: total=2560,3071,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:13.785       latency   : target=0, window=0, percentile=100.00%, depth=1
00:11:13.785  job3: (groupid=0, jobs=1): err= 0: pid=84693: Fri Dec 13 18:55:45 2024
00:11:13.785    read: IOPS=2593, BW=10.1MiB/s (10.6MB/s)(10.1MiB/1001msec)
00:11:13.785      slat (nsec): min=10933, max=65550, avg=15092.21, stdev=4658.11
00:11:13.785      clat (usec): min=146, max=2139, avg=177.65, stdev=56.38
00:11:13.785       lat (usec): min=158, max=2152, avg=192.74, stdev=56.63
00:11:13.785      clat percentiles (usec):
00:11:13.785       |  1.00th=[  151],  5.00th=[  157], 10.00th=[  159], 20.00th=[  163],
00:11:13.785       | 30.00th=[  167], 40.00th=[  169], 50.00th=[  174], 60.00th=[  178],
00:11:13.785       | 70.00th=[  184], 80.00th=[  190], 90.00th=[  198], 95.00th=[  202],
00:11:13.785       | 99.00th=[  212], 99.50th=[  219], 99.90th=[  701], 99.95th=[ 2008],
00:11:13.785       | 99.99th=[ 2147]
00:11:13.785    write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets
00:11:13.785      slat (nsec): min=16700, max=82012, avg=22020.58, stdev=5546.43
00:11:13.785      clat (usec): min=104, max=198, avg=137.79, stdev=13.26
00:11:13.785       lat (usec): min=122, max=253, avg=159.81, stdev=14.77
00:11:13.785      clat percentiles (usec):
00:11:13.785       |  1.00th=[  116],  5.00th=[  121], 10.00th=[  123], 20.00th=[  127],
00:11:13.785       | 30.00th=[  130], 40.00th=[  133], 50.00th=[  135], 60.00th=[  139],
00:11:13.785       | 70.00th=[  143], 80.00th=[  151], 90.00th=[  157], 95.00th=[  163],
00:11:13.785       | 99.00th=[  174], 99.50th=[  176], 99.90th=[  182], 99.95th=[  190],
00:11:13.785       | 99.99th=[  198]
00:11:13.785     bw (  KiB/s): min=12288, max=12288, per=27.30%, avg=12288.00, stdev= 0.00, samples=1
00:11:13.785     iops        : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1
00:11:13.785    lat (usec)   : 250=99.89%, 500=0.04%, 750=0.04%
00:11:13.785    lat (msec)   : 4=0.04%
00:11:13.785    cpu          : usr=2.30%, sys=7.90%, ctx=5668, majf=0, minf=7
00:11:13.785    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:11:13.785       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:13.785       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:13.785       issued rwts: total=2596,3072,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:13.785       latency   : target=0, window=0, percentile=100.00%, depth=1
00:11:13.785  
00:11:13.785  Run status group 0 (all jobs):
00:11:13.785     READ: bw=37.1MiB/s (38.9MB/s), 8456KiB/s-10.1MiB/s (8658kB/s-10.6MB/s), io=37.2MiB (39.0MB), run=1001-1001msec
00:11:13.785    WRITE: bw=44.0MiB/s (46.1MB/s), 9.99MiB/s-12.0MiB/s (10.5MB/s-12.6MB/s), io=44.0MiB (46.1MB), run=1001-1001msec
00:11:13.785  
00:11:13.785  Disk stats (read/write):
00:11:13.785    nvme0n1: ios=2098/2211, merge=0/0, ticks=483/347, in_queue=830, util=88.28%
00:11:13.785    nvme0n2: ios=2079/2048, merge=0/0, ticks=469/334, in_queue=803, util=87.84%
00:11:13.785    nvme0n3: ios=2259/2560, merge=0/0, ticks=429/371, in_queue=800, util=89.13%
00:11:13.785    nvme0n4: ios=2282/2560, merge=0/0, ticks=423/386, in_queue=809, util=89.69%
00:11:13.785   18:55:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v
00:11:13.785  [global]
00:11:13.785  thread=1
00:11:13.785  invalidate=1
00:11:13.785  rw=randwrite
00:11:13.785  time_based=1
00:11:13.785  runtime=1
00:11:13.785  ioengine=libaio
00:11:13.785  direct=1
00:11:13.785  bs=4096
00:11:13.785  iodepth=1
00:11:13.785  norandommap=0
00:11:13.785  numjobs=1
00:11:13.785  
00:11:13.785  verify_dump=1
00:11:13.785  verify_backlog=512
00:11:13.785  verify_state_save=0
00:11:13.785  do_verify=1
00:11:13.785  verify=crc32c-intel
00:11:13.785  [job0]
00:11:13.785  filename=/dev/nvme0n1
00:11:13.785  [job1]
00:11:13.785  filename=/dev/nvme0n2
00:11:13.785  [job2]
00:11:13.785  filename=/dev/nvme0n3
00:11:13.785  [job3]
00:11:13.785  filename=/dev/nvme0n4
00:11:13.785  Could not set queue depth (nvme0n1)
00:11:13.785  Could not set queue depth (nvme0n2)
00:11:13.785  Could not set queue depth (nvme0n3)
00:11:13.785  Could not set queue depth (nvme0n4)
00:11:13.785  job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:11:13.785  job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:11:13.785  job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:11:13.785  job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:11:13.785  fio-3.35
00:11:13.785  Starting 4 threads
00:11:15.164  
00:11:15.164  job0: (groupid=0, jobs=1): err= 0: pid=84746: Fri Dec 13 18:55:46 2024
00:11:15.164    read: IOPS=2690, BW=10.5MiB/s (11.0MB/s)(10.5MiB/1001msec)
00:11:15.164      slat (nsec): min=11247, max=38779, avg=12962.75, stdev=3014.36
00:11:15.164      clat (usec): min=135, max=3558, avg=177.28, stdev=95.53
00:11:15.164       lat (usec): min=147, max=3579, avg=190.25, stdev=95.95
00:11:15.164      clat percentiles (usec):
00:11:15.164       |  1.00th=[  141],  5.00th=[  147], 10.00th=[  149], 20.00th=[  153],
00:11:15.164       | 30.00th=[  157], 40.00th=[  159], 50.00th=[  163], 60.00th=[  167],
00:11:15.164       | 70.00th=[  172], 80.00th=[  180], 90.00th=[  215], 95.00th=[  262],
00:11:15.164       | 99.00th=[  343], 99.50th=[  363], 99.90th=[ 1778], 99.95th=[ 1844],
00:11:15.164       | 99.99th=[ 3556]
00:11:15.164    write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets
00:11:15.164      slat (nsec): min=10165, max=99954, avg=18862.64, stdev=5271.46
00:11:15.164      clat (usec): min=97, max=1984, avg=137.38, stdev=48.60
00:11:15.164       lat (usec): min=116, max=2002, avg=156.25, stdev=48.56
00:11:15.164      clat percentiles (usec):
00:11:15.164       |  1.00th=[  104],  5.00th=[  109], 10.00th=[  112], 20.00th=[  116],
00:11:15.164       | 30.00th=[  119], 40.00th=[  122], 50.00th=[  125], 60.00th=[  129],
00:11:15.164       | 70.00th=[  135], 80.00th=[  145], 90.00th=[  194], 95.00th=[  225],
00:11:15.164       | 99.00th=[  260], 99.50th=[  269], 99.90th=[  371], 99.95th=[  529],
00:11:15.164       | 99.99th=[ 1991]
00:11:15.164     bw (  KiB/s): min=12288, max=12288, per=30.52%, avg=12288.00, stdev= 0.00, samples=1
00:11:15.164     iops        : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1
00:11:15.164    lat (usec)   : 100=0.07%, 250=96.06%, 500=3.68%, 750=0.07%, 1000=0.03%
00:11:15.164    lat (msec)   : 2=0.07%, 4=0.02%
00:11:15.164    cpu          : usr=1.70%, sys=7.10%, ctx=5766, majf=0, minf=7
00:11:15.164    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:11:15.164       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:15.164       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:15.164       issued rwts: total=2693,3072,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:15.164       latency   : target=0, window=0, percentile=100.00%, depth=1
00:11:15.164  job1: (groupid=0, jobs=1): err= 0: pid=84747: Fri Dec 13 18:55:46 2024
00:11:15.164    read: IOPS=1590, BW=6362KiB/s (6514kB/s)(6368KiB/1001msec)
00:11:15.164      slat (usec): min=14, max=108, avg=20.14, stdev= 5.21
00:11:15.164      clat (usec): min=142, max=1700, avg=282.93, stdev=46.07
00:11:15.164       lat (usec): min=161, max=1722, avg=303.07, stdev=47.00
00:11:15.164      clat percentiles (usec):
00:11:15.164       |  1.00th=[  233],  5.00th=[  247], 10.00th=[  253], 20.00th=[  262],
00:11:15.164       | 30.00th=[  269], 40.00th=[  277], 50.00th=[  281], 60.00th=[  289],
00:11:15.164       | 70.00th=[  293], 80.00th=[  297], 90.00th=[  310], 95.00th=[  318],
00:11:15.164       | 99.00th=[  363], 99.50th=[  429], 99.90th=[  619], 99.95th=[ 1696],
00:11:15.164       | 99.99th=[ 1696]
00:11:15.164    write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets
00:11:15.164      slat (usec): min=20, max=111, avg=27.46, stdev= 6.54
00:11:15.164      clat (usec): min=99, max=7108, avg=221.67, stdev=161.71
00:11:15.164       lat (usec): min=127, max=7133, avg=249.13, stdev=161.87
00:11:15.164      clat percentiles (usec):
00:11:15.164       |  1.00th=[  117],  5.00th=[  182], 10.00th=[  190], 20.00th=[  200],
00:11:15.164       | 30.00th=[  204], 40.00th=[  210], 50.00th=[  217], 60.00th=[  223],
00:11:15.164       | 70.00th=[  227], 80.00th=[  235], 90.00th=[  245], 95.00th=[  253],
00:11:15.164       | 99.00th=[  375], 99.50th=[  523], 99.90th=[ 1106], 99.95th=[ 1565],
00:11:15.164       | 99.99th=[ 7111]
00:11:15.164     bw (  KiB/s): min= 8192, max= 8192, per=20.35%, avg=8192.00, stdev= 0.00, samples=1
00:11:15.164     iops        : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1
00:11:15.164    lat (usec)   : 100=0.03%, 250=55.25%, 500=44.26%, 750=0.30%, 1000=0.05%
00:11:15.164    lat (msec)   : 2=0.08%, 10=0.03%
00:11:15.164    cpu          : usr=2.20%, sys=6.10%, ctx=3655, majf=0, minf=15
00:11:15.164    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:11:15.164       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:15.164       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:15.164       issued rwts: total=1592,2048,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:15.164       latency   : target=0, window=0, percentile=100.00%, depth=1
00:11:15.164  job2: (groupid=0, jobs=1): err= 0: pid=84748: Fri Dec 13 18:55:46 2024
00:11:15.164    read: IOPS=1594, BW=6378KiB/s (6531kB/s)(6384KiB/1001msec)
00:11:15.165      slat (nsec): min=13480, max=60464, avg=16208.91, stdev=3849.55
00:11:15.165      clat (usec): min=149, max=824, avg=289.42, stdev=39.97
00:11:15.165       lat (usec): min=164, max=843, avg=305.63, stdev=40.83
00:11:15.165      clat percentiles (usec):
00:11:15.165       |  1.00th=[  188],  5.00th=[  249], 10.00th=[  260], 20.00th=[  269],
00:11:15.165       | 30.00th=[  277], 40.00th=[  281], 50.00th=[  285], 60.00th=[  293],
00:11:15.165       | 70.00th=[  297], 80.00th=[  306], 90.00th=[  314], 95.00th=[  330],
00:11:15.165       | 99.00th=[  416], 99.50th=[  523], 99.90th=[  742], 99.95th=[  824],
00:11:15.165       | 99.99th=[  824]
00:11:15.165    write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets
00:11:15.165      slat (nsec): min=20337, max=85239, avg=26256.78, stdev=5255.48
00:11:15.165      clat (usec): min=106, max=2105, avg=220.67, stdev=66.07
00:11:15.165       lat (usec): min=127, max=2130, avg=246.93, stdev=66.29
00:11:15.165      clat percentiles (usec):
00:11:15.165       |  1.00th=[  127],  5.00th=[  186], 10.00th=[  192], 20.00th=[  202],
00:11:15.165       | 30.00th=[  206], 40.00th=[  212], 50.00th=[  217], 60.00th=[  223],
00:11:15.165       | 70.00th=[  229], 80.00th=[  237], 90.00th=[  247], 95.00th=[  258],
00:11:15.165       | 99.00th=[  338], 99.50th=[  404], 99.90th=[  865], 99.95th=[ 1844],
00:11:15.165       | 99.99th=[ 2114]
00:11:15.165     bw (  KiB/s): min= 8192, max= 8192, per=20.35%, avg=8192.00, stdev= 0.00, samples=1
00:11:15.165     iops        : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1
00:11:15.165    lat (usec)   : 250=54.06%, 500=45.47%, 750=0.36%, 1000=0.05%
00:11:15.165    lat (msec)   : 2=0.03%, 4=0.03%
00:11:15.165    cpu          : usr=1.80%, sys=5.70%, ctx=3644, majf=0, minf=15
00:11:15.165    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:11:15.165       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:15.165       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:15.165       issued rwts: total=1596,2048,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:15.165       latency   : target=0, window=0, percentile=100.00%, depth=1
00:11:15.165  job3: (groupid=0, jobs=1): err= 0: pid=84749: Fri Dec 13 18:55:46 2024
00:11:15.165    read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec)
00:11:15.165      slat (nsec): min=9595, max=51406, avg=14410.71, stdev=3651.92
00:11:15.165      clat (usec): min=147, max=962, avg=188.41, stdev=42.09
00:11:15.165       lat (usec): min=160, max=978, avg=202.82, stdev=41.82
00:11:15.165      clat percentiles (usec):
00:11:15.165       |  1.00th=[  153],  5.00th=[  157], 10.00th=[  159], 20.00th=[  163],
00:11:15.165       | 30.00th=[  167], 40.00th=[  172], 50.00th=[  176], 60.00th=[  180],
00:11:15.165       | 70.00th=[  186], 80.00th=[  196], 90.00th=[  247], 95.00th=[  281],
00:11:15.165       | 99.00th=[  334], 99.50th=[  351], 99.90th=[  375], 99.95th=[  420],
00:11:15.165       | 99.99th=[  963]
00:11:15.165    write: IOPS=2903, BW=11.3MiB/s (11.9MB/s)(11.4MiB/1001msec); 0 zone resets
00:11:15.165      slat (nsec): min=10073, max=92962, avg=20569.68, stdev=4875.29
00:11:15.165      clat (usec): min=108, max=663, avg=142.33, stdev=28.59
00:11:15.165       lat (usec): min=126, max=680, avg=162.90, stdev=28.23
00:11:15.165      clat percentiles (usec):
00:11:15.165       |  1.00th=[  115],  5.00th=[  119], 10.00th=[  121], 20.00th=[  124],
00:11:15.165       | 30.00th=[  128], 40.00th=[  131], 50.00th=[  135], 60.00th=[  139],
00:11:15.165       | 70.00th=[  145], 80.00th=[  153], 90.00th=[  169], 95.00th=[  210],
00:11:15.165       | 99.00th=[  243], 99.50th=[  255], 99.90th=[  281], 99.95th=[  289],
00:11:15.165       | 99.99th=[  660]
00:11:15.165     bw (  KiB/s): min=12288, max=12288, per=30.52%, avg=12288.00, stdev= 0.00, samples=1
00:11:15.165     iops        : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1
00:11:15.165    lat (usec)   : 250=95.21%, 500=4.76%, 750=0.02%, 1000=0.02%
00:11:15.165    cpu          : usr=1.80%, sys=7.40%, ctx=5467, majf=0, minf=11
00:11:15.165    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:11:15.165       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:15.165       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:15.165       issued rwts: total=2560,2906,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:15.165       latency   : target=0, window=0, percentile=100.00%, depth=1
00:11:15.165  
00:11:15.165  Run status group 0 (all jobs):
00:11:15.165     READ: bw=32.9MiB/s (34.5MB/s), 6362KiB/s-10.5MiB/s (6514kB/s-11.0MB/s), io=33.0MiB (34.6MB), run=1001-1001msec
00:11:15.165    WRITE: bw=39.3MiB/s (41.2MB/s), 8184KiB/s-12.0MiB/s (8380kB/s-12.6MB/s), io=39.4MiB (41.3MB), run=1001-1001msec
00:11:15.165  
00:11:15.165  Disk stats (read/write):
00:11:15.165    nvme0n1: ios=2610/2638, merge=0/0, ticks=461/338, in_queue=799, util=88.38%
00:11:15.165    nvme0n2: ios=1585/1543, merge=0/0, ticks=459/357, in_queue=816, util=88.60%
00:11:15.165    nvme0n3: ios=1553/1552, merge=0/0, ticks=490/368, in_queue=858, util=89.81%
00:11:15.165    nvme0n4: ios=2365/2560, merge=0/0, ticks=443/367, in_queue=810, util=89.97%
00:11:15.165   18:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v
00:11:15.165  [global]
00:11:15.165  thread=1
00:11:15.165  invalidate=1
00:11:15.165  rw=write
00:11:15.165  time_based=1
00:11:15.165  runtime=1
00:11:15.165  ioengine=libaio
00:11:15.165  direct=1
00:11:15.165  bs=4096
00:11:15.165  iodepth=128
00:11:15.165  norandommap=0
00:11:15.165  numjobs=1
00:11:15.165  
00:11:15.165  verify_dump=1
00:11:15.165  verify_backlog=512
00:11:15.165  verify_state_save=0
00:11:15.165  do_verify=1
00:11:15.165  verify=crc32c-intel
00:11:15.165  [job0]
00:11:15.165  filename=/dev/nvme0n1
00:11:15.165  [job1]
00:11:15.165  filename=/dev/nvme0n2
00:11:15.165  [job2]
00:11:15.165  filename=/dev/nvme0n3
00:11:15.165  [job3]
00:11:15.165  filename=/dev/nvme0n4
00:11:15.165  Could not set queue depth (nvme0n1)
00:11:15.165  Could not set queue depth (nvme0n2)
00:11:15.165  Could not set queue depth (nvme0n3)
00:11:15.165  Could not set queue depth (nvme0n4)
00:11:15.165  job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:11:15.165  job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:11:15.165  job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:11:15.165  job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:11:15.165  fio-3.35
00:11:15.165  Starting 4 threads
00:11:16.544  
00:11:16.544  job0: (groupid=0, jobs=1): err= 0: pid=84809: Fri Dec 13 18:55:48 2024
00:11:16.544    read: IOPS=3520, BW=13.8MiB/s (14.4MB/s)(13.8MiB/1004msec)
00:11:16.544      slat (usec): min=5, max=7082, avg=138.17, stdev=607.37
00:11:16.544      clat (usec): min=1493, max=27954, avg=16922.04, stdev=3364.18
00:11:16.544       lat (usec): min=5317, max=27993, avg=17060.21, stdev=3410.68
00:11:16.544      clat percentiles (usec):
00:11:16.544       |  1.00th=[ 6063],  5.00th=[13042], 10.00th=[14091], 20.00th=[14615],
00:11:16.544       | 30.00th=[15008], 40.00th=[15533], 50.00th=[16319], 60.00th=[16581],
00:11:16.544       | 70.00th=[17433], 80.00th=[20317], 90.00th=[21890], 95.00th=[22938],
00:11:16.544       | 99.00th=[25035], 99.50th=[25297], 99.90th=[26346], 99.95th=[27132],
00:11:16.544       | 99.99th=[27919]
00:11:16.544    write: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets
00:11:16.544      slat (usec): min=10, max=4922, avg=134.45, stdev=487.15
00:11:16.544      clat (usec): min=9720, max=31927, avg=18701.26, stdev=5221.52
00:11:16.544       lat (usec): min=9742, max=31950, avg=18835.71, stdev=5260.40
00:11:16.544      clat percentiles (usec):
00:11:16.544       |  1.00th=[10814],  5.00th=[11731], 10.00th=[11863], 20.00th=[12780],
00:11:16.544       | 30.00th=[15401], 40.00th=[16909], 50.00th=[18744], 60.00th=[19268],
00:11:16.544       | 70.00th=[21365], 80.00th=[23987], 90.00th=[26608], 95.00th=[27395],
00:11:16.544       | 99.00th=[29230], 99.50th=[29492], 99.90th=[31851], 99.95th=[31851],
00:11:16.544       | 99.99th=[31851]
00:11:16.544     bw (  KiB/s): min=13296, max=15406, per=27.22%, avg=14351.00, stdev=1492.00, samples=2
00:11:16.544     iops        : min= 3324, max= 3851, avg=3587.50, stdev=372.65, samples=2
00:11:16.544    lat (msec)   : 2=0.01%, 10=0.76%, 20=70.61%, 50=28.61%
00:11:16.544    cpu          : usr=3.59%, sys=10.87%, ctx=456, majf=0, minf=11
00:11:16.544    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1%
00:11:16.544       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:16.544       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:11:16.544       issued rwts: total=3535,3584,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:16.544       latency   : target=0, window=0, percentile=100.00%, depth=128
00:11:16.544  job1: (groupid=0, jobs=1): err= 0: pid=84810: Fri Dec 13 18:55:48 2024
00:11:16.544    read: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec)
00:11:16.544      slat (usec): min=4, max=5022, avg=169.65, stdev=659.34
00:11:16.544      clat (usec): min=12201, max=33465, avg=21837.97, stdev=4248.54
00:11:16.544       lat (usec): min=13535, max=33488, avg=22007.62, stdev=4237.58
00:11:16.544      clat percentiles (usec):
00:11:16.544       |  1.00th=[14484],  5.00th=[16057], 10.00th=[17433], 20.00th=[18744],
00:11:16.544       | 30.00th=[19006], 40.00th=[19792], 50.00th=[21627], 60.00th=[21890],
00:11:16.544       | 70.00th=[22676], 80.00th=[24773], 90.00th=[27919], 95.00th=[31589],
00:11:16.544       | 99.00th=[33424], 99.50th=[33424], 99.90th=[33424], 99.95th=[33424],
00:11:16.544       | 99.99th=[33424]
00:11:16.544    write: IOPS=3490, BW=13.6MiB/s (14.3MB/s)(13.7MiB/1004msec); 0 zone resets
00:11:16.544      slat (usec): min=15, max=5005, avg=129.26, stdev=552.24
00:11:16.544      clat (usec): min=2307, max=30421, avg=16909.56, stdev=3891.61
00:11:16.544       lat (usec): min=5321, max=30444, avg=17038.82, stdev=3876.41
00:11:16.544      clat percentiles (usec):
00:11:16.544       |  1.00th=[10814],  5.00th=[13173], 10.00th=[13435], 20.00th=[13698],
00:11:16.544       | 30.00th=[14091], 40.00th=[14746], 50.00th=[15270], 60.00th=[16909],
00:11:16.544       | 70.00th=[18744], 80.00th=[21103], 90.00th=[21627], 95.00th=[24773],
00:11:16.544       | 99.00th=[27657], 99.50th=[27657], 99.90th=[30278], 99.95th=[30278],
00:11:16.544       | 99.99th=[30540]
00:11:16.544     bw (  KiB/s): min=13442, max=13592, per=25.64%, avg=13517.00, stdev=106.07, samples=2
00:11:16.544     iops        : min= 3360, max= 3398, avg=3379.00, stdev=26.87, samples=2
00:11:16.544    lat (msec)   : 4=0.02%, 10=0.36%, 20=57.57%, 50=42.05%
00:11:16.544    cpu          : usr=3.29%, sys=9.17%, ctx=332, majf=0, minf=12
00:11:16.544    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0%
00:11:16.544       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:16.544       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:11:16.544       issued rwts: total=3072,3504,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:16.544       latency   : target=0, window=0, percentile=100.00%, depth=128
00:11:16.544  job2: (groupid=0, jobs=1): err= 0: pid=84811: Fri Dec 13 18:55:48 2024
00:11:16.544    read: IOPS=2904, BW=11.3MiB/s (11.9MB/s)(11.4MiB/1003msec)
00:11:16.544      slat (usec): min=5, max=6818, avg=170.59, stdev=855.97
00:11:16.544      clat (usec): min=763, max=24322, avg=21291.84, stdev=2412.30
00:11:16.544       lat (usec): min=5574, max=24337, avg=21462.44, stdev=2272.18
00:11:16.544      clat percentiles (usec):
00:11:16.544       |  1.00th=[ 6194],  5.00th=[17433], 10.00th=[19530], 20.00th=[20579],
00:11:16.544       | 30.00th=[21627], 40.00th=[21890], 50.00th=[21890], 60.00th=[22152],
00:11:16.544       | 70.00th=[22152], 80.00th=[22414], 90.00th=[22676], 95.00th=[23725],
00:11:16.544       | 99.00th=[24249], 99.50th=[24249], 99.90th=[24249], 99.95th=[24249],
00:11:16.544       | 99.99th=[24249]
00:11:16.544    write: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec); 0 zone resets
00:11:16.544      slat (usec): min=14, max=5307, avg=155.65, stdev=732.50
00:11:16.544      clat (usec): min=14031, max=23827, avg=20900.73, stdev=1296.69
00:11:16.544       lat (usec): min=16895, max=23863, avg=21056.38, stdev=1048.95
00:11:16.544      clat percentiles (usec):
00:11:16.544       |  1.00th=[16319],  5.00th=[18482], 10.00th=[19792], 20.00th=[20317],
00:11:16.544       | 30.00th=[20317], 40.00th=[20579], 50.00th=[20841], 60.00th=[21103],
00:11:16.544       | 70.00th=[21365], 80.00th=[21365], 90.00th=[22938], 95.00th=[23200],
00:11:16.544       | 99.00th=[23462], 99.50th=[23725], 99.90th=[23725], 99.95th=[23725],
00:11:16.544       | 99.99th=[23725]
00:11:16.544     bw (  KiB/s): min=12288, max=12312, per=23.33%, avg=12300.00, stdev=16.97, samples=2
00:11:16.544     iops        : min= 3072, max= 3078, avg=3075.00, stdev= 4.24, samples=2
00:11:16.544    lat (usec)   : 1000=0.02%
00:11:16.544    lat (msec)   : 10=0.53%, 20=13.52%, 50=85.93%
00:11:16.544    cpu          : usr=2.50%, sys=9.98%, ctx=189, majf=0, minf=17
00:11:16.544    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9%
00:11:16.544       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:16.544       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:11:16.544       issued rwts: total=2913,3072,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:16.544       latency   : target=0, window=0, percentile=100.00%, depth=128
00:11:16.544  job3: (groupid=0, jobs=1): err= 0: pid=84812: Fri Dec 13 18:55:48 2024
00:11:16.544    read: IOPS=2901, BW=11.3MiB/s (11.9MB/s)(11.4MiB/1004msec)
00:11:16.544      slat (usec): min=6, max=4885, avg=168.50, stdev=710.59
00:11:16.544      clat (usec): min=1270, max=26219, avg=21498.10, stdev=2390.12
00:11:16.544       lat (usec): min=5149, max=26232, avg=21666.60, stdev=2302.98
00:11:16.544      clat percentiles (usec):
00:11:16.544       |  1.00th=[ 6652],  5.00th=[17957], 10.00th=[20841], 20.00th=[21627],
00:11:16.544       | 30.00th=[21890], 40.00th=[21890], 50.00th=[21890], 60.00th=[22152],
00:11:16.544       | 70.00th=[22152], 80.00th=[22414], 90.00th=[22676], 95.00th=[22676],
00:11:16.544       | 99.00th=[25297], 99.50th=[25560], 99.90th=[26084], 99.95th=[26346],
00:11:16.544       | 99.99th=[26346]
00:11:16.544    write: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec); 0 zone resets
00:11:16.544      slat (usec): min=8, max=5021, avg=157.85, stdev=738.21
00:11:16.544      clat (usec): min=14290, max=25434, avg=20838.95, stdev=1286.89
00:11:16.544       lat (usec): min=14334, max=25455, avg=20996.80, stdev=1115.43
00:11:16.544      clat percentiles (usec):
00:11:16.544       |  1.00th=[16319],  5.00th=[19792], 10.00th=[20055], 20.00th=[20317],
00:11:16.544       | 30.00th=[20317], 40.00th=[20579], 50.00th=[20841], 60.00th=[21103],
00:11:16.544       | 70.00th=[21103], 80.00th=[21365], 90.00th=[21627], 95.00th=[23725],
00:11:16.544       | 99.00th=[24773], 99.50th=[25297], 99.90th=[25297], 99.95th=[25560],
00:11:16.544       | 99.99th=[25560]
00:11:16.544     bw (  KiB/s): min=12288, max=12288, per=23.31%, avg=12288.00, stdev= 0.00, samples=2
00:11:16.544     iops        : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2
00:11:16.544    lat (msec)   : 2=0.02%, 10=0.53%, 20=8.09%, 50=91.36%
00:11:16.544    cpu          : usr=3.29%, sys=8.77%, ctx=282, majf=0, minf=9
00:11:16.544    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9%
00:11:16.544       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:16.544       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:11:16.544       issued rwts: total=2913,3072,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:16.544       latency   : target=0, window=0, percentile=100.00%, depth=128
00:11:16.544  
00:11:16.544  Run status group 0 (all jobs):
00:11:16.544     READ: bw=48.4MiB/s (50.7MB/s), 11.3MiB/s-13.8MiB/s (11.9MB/s-14.4MB/s), io=48.6MiB (50.9MB), run=1003-1004msec
00:11:16.544    WRITE: bw=51.5MiB/s (54.0MB/s), 12.0MiB/s-13.9MiB/s (12.5MB/s-14.6MB/s), io=51.7MiB (54.2MB), run=1003-1004msec
00:11:16.544  
00:11:16.544  Disk stats (read/write):
00:11:16.544    nvme0n1: ios=3121/3095, merge=0/0, ticks=17381/16705, in_queue=34086, util=87.66%
00:11:16.544    nvme0n2: ios=2656/3072, merge=0/0, ticks=14786/10543, in_queue=25329, util=88.07%
00:11:16.544    nvme0n3: ios=2560/2592, merge=0/0, ticks=13224/11622, in_queue=24846, util=89.26%
00:11:16.544    nvme0n4: ios=2560/2599, merge=0/0, ticks=13511/11845, in_queue=25356, util=89.82%
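(Reading the per-job fio summaries above: slat is submission latency, clat is completion latency, and lat is their sum; the bw "per=" figure is each job's share of the group's aggregate bandwidth, and "util" in the disk stats is device utilization over the run. The reported IOPS can be cross-checked from the issued counts, e.g. job0 issued 3584 writes over its 1.004 s run, 3584/1.004 ≈ 3570, in line with the reported write IOPS=3569.)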
00:11:16.544   18:55:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v
00:11:16.544  [global]
00:11:16.544  thread=1
00:11:16.544  invalidate=1
00:11:16.544  rw=randwrite
00:11:16.544  time_based=1
00:11:16.544  runtime=1
00:11:16.544  ioengine=libaio
00:11:16.544  direct=1
00:11:16.544  bs=4096
00:11:16.544  iodepth=128
00:11:16.544  norandommap=0
00:11:16.544  numjobs=1
00:11:16.544  
00:11:16.544  verify_dump=1
00:11:16.544  verify_backlog=512
00:11:16.544  verify_state_save=0
00:11:16.544  do_verify=1
00:11:16.544  verify=crc32c-intel
00:11:16.544  [job0]
00:11:16.544  filename=/dev/nvme0n1
00:11:16.544  [job1]
00:11:16.544  filename=/dev/nvme0n2
00:11:16.544  [job2]
00:11:16.544  filename=/dev/nvme0n3
00:11:16.544  [job3]
00:11:16.544  filename=/dev/nvme0n4
00:11:16.544  Could not set queue depth (nvme0n1)
00:11:16.544  Could not set queue depth (nvme0n2)
00:11:16.544  Could not set queue depth (nvme0n3)
00:11:16.544  Could not set queue depth (nvme0n4)
00:11:16.544  job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:11:16.544  job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:11:16.544  job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:11:16.544  job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:11:16.544  fio-3.35
00:11:16.544  Starting 4 threads
00:11:17.923  
00:11:17.923  job0: (groupid=0, jobs=1): err= 0: pid=84871: Fri Dec 13 18:55:49 2024
00:11:17.923    read: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec)
00:11:17.923      slat (usec): min=7, max=6013, avg=189.63, stdev=758.70
00:11:17.923      clat (usec): min=16848, max=28524, avg=24237.23, stdev=1945.11
00:11:17.923       lat (usec): min=16869, max=28539, avg=24426.86, stdev=1835.27
00:11:17.923      clat percentiles (usec):
00:11:17.923       |  1.00th=[18482],  5.00th=[20317], 10.00th=[21103], 20.00th=[22414],
00:11:17.923       | 30.00th=[24511], 40.00th=[24773], 50.00th=[25035], 60.00th=[25035],
00:11:17.923       | 70.00th=[25297], 80.00th=[25560], 90.00th=[25822], 95.00th=[26084],
00:11:17.923       | 99.00th=[27919], 99.50th=[28443], 99.90th=[28443], 99.95th=[28443],
00:11:17.923       | 99.99th=[28443]
00:11:17.923    write: IOPS=2732, BW=10.7MiB/s (11.2MB/s)(10.7MiB/1003msec); 0 zone resets
00:11:17.923      slat (usec): min=13, max=9781, avg=179.46, stdev=856.32
00:11:17.923      clat (usec): min=1261, max=28718, avg=23414.41, stdev=3014.43
00:11:17.923       lat (usec): min=5983, max=28742, avg=23593.87, stdev=2900.91
00:11:17.923      clat percentiles (usec):
00:11:17.923       |  1.00th=[ 6718],  5.00th=[18482], 10.00th=[20579], 20.00th=[22938],
00:11:17.923       | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23987],
00:11:17.923       | 70.00th=[24249], 80.00th=[24511], 90.00th=[26870], 95.00th=[27395],
00:11:17.923       | 99.00th=[28443], 99.50th=[28705], 99.90th=[28705], 99.95th=[28705],
00:11:17.923       | 99.99th=[28705]
00:11:17.923     bw (  KiB/s): min= 8960, max=11944, per=16.22%, avg=10452.00, stdev=2110.01, samples=2
00:11:17.923     iops        : min= 2240, max= 2986, avg=2613.00, stdev=527.50, samples=2
00:11:17.923    lat (msec)   : 2=0.02%, 10=0.60%, 20=4.66%, 50=94.72%
00:11:17.923    cpu          : usr=2.59%, sys=8.38%, ctx=238, majf=0, minf=15
00:11:17.923    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8%
00:11:17.923       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:17.923       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:11:17.923       issued rwts: total=2560,2741,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:17.923       latency   : target=0, window=0, percentile=100.00%, depth=128
00:11:17.923  job1: (groupid=0, jobs=1): err= 0: pid=84872: Fri Dec 13 18:55:49 2024
00:11:17.923    read: IOPS=5576, BW=21.8MiB/s (22.8MB/s)(21.8MiB/1001msec)
00:11:17.923      slat (usec): min=5, max=3508, avg=87.00, stdev=457.59
00:11:17.923      clat (usec): min=455, max=15200, avg=11469.20, stdev=1131.79
00:11:17.923       lat (usec): min=3395, max=15480, avg=11556.19, stdev=1185.00
00:11:17.923      clat percentiles (usec):
00:11:17.923       |  1.00th=[ 7439],  5.00th=[ 9634], 10.00th=[10814], 20.00th=[11207],
00:11:17.923       | 30.00th=[11338], 40.00th=[11338], 50.00th=[11469], 60.00th=[11600],
00:11:17.923       | 70.00th=[11731], 80.00th=[11863], 90.00th=[12387], 95.00th=[12911],
00:11:17.923       | 99.00th=[14484], 99.50th=[14746], 99.90th=[14877], 99.95th=[15139],
00:11:17.923       | 99.99th=[15139]
00:11:17.923    write: IOPS=5626, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1001msec); 0 zone resets
00:11:17.923      slat (usec): min=10, max=3416, avg=84.03, stdev=397.42
00:11:17.923      clat (usec): min=7738, max=15386, avg=11101.34, stdev=1209.98
00:11:17.923       lat (usec): min=7761, max=15410, avg=11185.37, stdev=1190.02
00:11:17.923      clat percentiles (usec):
00:11:17.923       |  1.00th=[ 8455],  5.00th=[ 8717], 10.00th=[ 8979], 20.00th=[ 9896],
00:11:17.923       | 30.00th=[11076], 40.00th=[11338], 50.00th=[11469], 60.00th=[11600],
00:11:17.923       | 70.00th=[11731], 80.00th=[11863], 90.00th=[12125], 95.00th=[12387],
00:11:17.923       | 99.00th=[13829], 99.50th=[14484], 99.90th=[15270], 99.95th=[15401],
00:11:17.923       | 99.99th=[15401]
00:11:17.923     bw (  KiB/s): min=21504, max=23552, per=34.95%, avg=22528.00, stdev=1448.15, samples=2
00:11:17.923     iops        : min= 5376, max= 5888, avg=5632.00, stdev=362.04, samples=2
00:11:17.923    lat (usec)   : 500=0.01%
00:11:17.923    lat (msec)   : 4=0.29%, 10=13.22%, 20=86.48%
00:11:17.923    cpu          : usr=4.60%, sys=14.60%, ctx=417, majf=0, minf=10
00:11:17.923    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4%
00:11:17.923       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:17.923       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:11:17.923       issued rwts: total=5582,5632,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:17.923       latency   : target=0, window=0, percentile=100.00%, depth=128
00:11:17.923  job2: (groupid=0, jobs=1): err= 0: pid=84873: Fri Dec 13 18:55:49 2024
00:11:17.923    read: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec)
00:11:17.923      slat (usec): min=8, max=3161, avg=102.13, stdev=482.54
00:11:17.923      clat (usec): min=10205, max=16329, avg=13409.25, stdev=792.13
00:11:17.923       lat (usec): min=10717, max=17649, avg=13511.38, stdev=654.71
00:11:17.923      clat percentiles (usec):
00:11:17.923       |  1.00th=[10683],  5.00th=[11207], 10.00th=[12911], 20.00th=[13173],
00:11:17.923       | 30.00th=[13304], 40.00th=[13435], 50.00th=[13566], 60.00th=[13566],
00:11:17.923       | 70.00th=[13698], 80.00th=[13829], 90.00th=[14091], 95.00th=[14222],
00:11:17.923       | 99.00th=[15401], 99.50th=[15401], 99.90th=[15926], 99.95th=[15926],
00:11:17.923       | 99.99th=[16319]
00:11:17.923    write: IOPS=5084, BW=19.9MiB/s (20.8MB/s)(19.9MiB/1002msec); 0 zone resets
00:11:17.923      slat (usec): min=11, max=3924, avg=96.69, stdev=420.89
00:11:17.923      clat (usec): min=300, max=15989, avg=12703.52, stdev=1648.98
00:11:17.923       lat (usec): min=3042, max=16013, avg=12800.21, stdev=1647.17
00:11:17.923      clat percentiles (usec):
00:11:17.924       |  1.00th=[ 6783],  5.00th=[10945], 10.00th=[11076], 20.00th=[11338],
00:11:17.924       | 30.00th=[11469], 40.00th=[11863], 50.00th=[13435], 60.00th=[13698],
00:11:17.924       | 70.00th=[13829], 80.00th=[14091], 90.00th=[14353], 95.00th=[14615],
00:11:17.924       | 99.00th=[15139], 99.50th=[15401], 99.90th=[15926], 99.95th=[15926],
00:11:17.924       | 99.99th=[15926]
00:11:17.924     bw (  KiB/s): min=19256, max=20480, per=30.82%, avg=19868.00, stdev=865.50, samples=2
00:11:17.924     iops        : min= 4814, max= 5120, avg=4967.00, stdev=216.37, samples=2
00:11:17.924    lat (usec)   : 500=0.01%
00:11:17.924    lat (msec)   : 4=0.37%, 10=0.42%, 20=99.20%
00:11:17.924    cpu          : usr=4.60%, sys=12.89%, ctx=484, majf=0, minf=15
00:11:17.924    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4%
00:11:17.924       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:17.924       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:11:17.924       issued rwts: total=4608,5095,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:17.924       latency   : target=0, window=0, percentile=100.00%, depth=128
00:11:17.924  job3: (groupid=0, jobs=1): err= 0: pid=84874: Fri Dec 13 18:55:49 2024
00:11:17.924    read: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec)
00:11:17.924      slat (usec): min=7, max=6175, avg=192.83, stdev=774.58
00:11:17.924      clat (usec): min=17299, max=30561, avg=24906.81, stdev=1573.24
00:11:17.924       lat (usec): min=18181, max=30619, avg=25099.64, stdev=1391.71
00:11:17.924      clat percentiles (usec):
00:11:17.924       |  1.00th=[19530],  5.00th=[22152], 10.00th=[23200], 20.00th=[24511],
00:11:17.924       | 30.00th=[24773], 40.00th=[25035], 50.00th=[25035], 60.00th=[25035],
00:11:17.924       | 70.00th=[25297], 80.00th=[25560], 90.00th=[25822], 95.00th=[27657],
00:11:17.924       | 99.00th=[29754], 99.50th=[30278], 99.90th=[30540], 99.95th=[30540],
00:11:17.924       | 99.99th=[30540]
00:11:17.924    write: IOPS=2700, BW=10.5MiB/s (11.1MB/s)(10.6MiB/1004msec); 0 zone resets
00:11:17.924      slat (usec): min=13, max=5920, avg=178.21, stdev=844.76
00:11:17.924      clat (usec): min=3305, max=28651, avg=23167.07, stdev=2934.81
00:11:17.924       lat (usec): min=3328, max=28675, avg=23345.27, stdev=2838.62
00:11:17.924      clat percentiles (usec):
00:11:17.924       |  1.00th=[ 8979],  5.00th=[18482], 10.00th=[21103], 20.00th=[22938],
00:11:17.924       | 30.00th=[23200], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725],
00:11:17.924       | 70.00th=[23987], 80.00th=[24249], 90.00th=[25035], 95.00th=[26608],
00:11:17.924       | 99.00th=[27919], 99.50th=[28181], 99.90th=[28705], 99.95th=[28705],
00:11:17.924       | 99.99th=[28705]
00:11:17.924     bw (  KiB/s): min= 8632, max=12040, per=16.04%, avg=10336.00, stdev=2409.82, samples=2
00:11:17.924     iops        : min= 2158, max= 3010, avg=2584.00, stdev=602.45, samples=2
00:11:17.924    lat (msec)   : 4=0.30%, 10=0.61%, 20=3.72%, 50=95.37%
00:11:17.924    cpu          : usr=2.19%, sys=8.57%, ctx=232, majf=0, minf=13
00:11:17.924    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8%
00:11:17.924       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:17.924       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:11:17.924       issued rwts: total=2560,2711,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:17.924       latency   : target=0, window=0, percentile=100.00%, depth=128
00:11:17.924  
00:11:17.924  Run status group 0 (all jobs):
00:11:17.924     READ: bw=59.6MiB/s (62.5MB/s), 9.96MiB/s-21.8MiB/s (10.4MB/s-22.8MB/s), io=59.8MiB (62.7MB), run=1001-1004msec
00:11:17.924    WRITE: bw=62.9MiB/s (66.0MB/s), 10.5MiB/s-22.0MiB/s (11.1MB/s-23.0MB/s), io=63.2MiB (66.3MB), run=1001-1004msec
00:11:17.924  
00:11:17.924  Disk stats (read/write):
00:11:17.924    nvme0n1: ios=2097/2452, merge=0/0, ticks=12215/12940, in_queue=25155, util=86.45%
00:11:17.924    nvme0n2: ios=4634/4924, merge=0/0, ticks=15839/14953, in_queue=30792, util=88.40%
00:11:17.924    nvme0n3: ios=4096/4106, merge=0/0, ticks=12551/11636, in_queue=24187, util=89.02%
00:11:17.924    nvme0n4: ios=2048/2407, merge=0/0, ticks=12436/12412, in_queue=24848, util=89.68%
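(The 1-second randwrite pass above is driven by the fio-wrapper call and the [global]/[jobN] job file echoed before it. As a rough standalone equivalent for a single device, an illustration rather than the wrapper's actual command line, job0 corresponds to something like:

    fio --name=job0 --filename=/dev/nvme0n1 --thread --direct=1 \
        --ioengine=libaio --rw=randwrite --bs=4096 --iodepth=128 \
        --time_based --runtime=1 --numjobs=1 \
        --do_verify=1 --verify=crc32c-intel --verify_backlog=512

with crc32c verification of the data written during the run.)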
00:11:17.924   18:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync
00:11:17.924   18:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=84887
00:11:17.924   18:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10
00:11:17.924   18:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3
00:11:17.924  [global]
00:11:17.924  thread=1
00:11:17.924  invalidate=1
00:11:17.924  rw=read
00:11:17.924  time_based=1
00:11:17.924  runtime=10
00:11:17.924  ioengine=libaio
00:11:17.924  direct=1
00:11:17.924  bs=4096
00:11:17.924  iodepth=1
00:11:17.924  norandommap=1
00:11:17.924  numjobs=1
00:11:17.924  
00:11:17.924  [job0]
00:11:17.924  filename=/dev/nvme0n1
00:11:17.924  [job1]
00:11:17.924  filename=/dev/nvme0n2
00:11:17.924  [job2]
00:11:17.924  filename=/dev/nvme0n3
00:11:17.924  [job3]
00:11:17.924  filename=/dev/nvme0n4
00:11:17.924  Could not set queue depth (nvme0n1)
00:11:17.924  Could not set queue depth (nvme0n2)
00:11:17.924  Could not set queue depth (nvme0n3)
00:11:17.924  Could not set queue depth (nvme0n4)
00:11:17.924  job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:11:17.924  job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:11:17.924  job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:11:17.924  job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:11:17.924  fio-3.35
00:11:17.924  Starting 4 threads
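(What follows is the hotplug part of the test: the sequential-read fio job above was started in the background as pid 84887, and after a short sleep the script removes the bdevs backing the namespaces while reads are still in flight. Paraphrasing the traced commands from target/fio.sh, with the pid capture shown symbolically:

    /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
    fio_pid=$!
    sleep 3
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0
    for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs; do
            /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete "$malloc_bdev"
    done

The per-file "Operation not supported" errors below (err=95, i.e. EOPNOTSUPP) and the non-zero fio exit status further down are therefore the expected outcome.)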
00:11:21.292   18:55:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0
00:11:21.292  fio: pid=84930, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:11:21.292  fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=66187264, buflen=4096
00:11:21.292   18:55:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0
00:11:21.292  fio: pid=84929, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:11:21.292  fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=68591616, buflen=4096
00:11:21.292   18:55:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:11:21.292   18:55:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0
00:11:21.551  fio: pid=84927, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:11:21.551  fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=47845376, buflen=4096
00:11:21.551   18:55:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:11:21.551   18:55:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1
00:11:21.810  fio: pid=84928, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:11:21.810  fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=53809152, buflen=4096
00:11:21.810  
00:11:21.810  job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=84927: Fri Dec 13 18:55:53 2024
00:11:21.810    read: IOPS=3337, BW=13.0MiB/s (13.7MB/s)(45.6MiB/3500msec)
00:11:21.810      slat (usec): min=6, max=10719, avg=16.37, stdev=161.89
00:11:21.810      clat (usec): min=123, max=5839, avg=281.98, stdev=113.76
00:11:21.810       lat (usec): min=134, max=10919, avg=298.35, stdev=196.87
00:11:21.810      clat percentiles (usec):
00:11:21.810       |  1.00th=[  149],  5.00th=[  174], 10.00th=[  210], 20.00th=[  269],
00:11:21.810       | 30.00th=[  277], 40.00th=[  285], 50.00th=[  289], 60.00th=[  293],
00:11:21.810       | 70.00th=[  297], 80.00th=[  306], 90.00th=[  314], 95.00th=[  318],
00:11:21.810       | 99.00th=[  343], 99.50th=[  400], 99.90th=[ 1844], 99.95th=[ 2802],
00:11:21.810       | 99.99th=[ 5800]
00:11:21.810     bw (  KiB/s): min=12560, max=14672, per=21.49%, avg=13153.33, stdev=763.28, samples=6
00:11:21.810     iops        : min= 3140, max= 3668, avg=3288.33, stdev=190.82, samples=6
00:11:21.810    lat (usec)   : 250=14.80%, 500=84.89%, 750=0.12%, 1000=0.06%
00:11:21.810    lat (msec)   : 2=0.04%, 4=0.05%, 10=0.03%
00:11:21.810    cpu          : usr=0.80%, sys=3.74%, ctx=11727, majf=0, minf=1
00:11:21.810    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:11:21.810       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:21.810       complete  : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:21.810       issued rwts: total=11682,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:21.810       latency   : target=0, window=0, percentile=100.00%, depth=1
00:11:21.810  job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=84928: Fri Dec 13 18:55:53 2024
00:11:21.810    read: IOPS=3483, BW=13.6MiB/s (14.3MB/s)(51.3MiB/3772msec)
00:11:21.810      slat (usec): min=6, max=15915, avg=21.46, stdev=217.81
00:11:21.810      clat (usec): min=121, max=4276, avg=264.13, stdev=83.64
00:11:21.810       lat (usec): min=132, max=16250, avg=285.60, stdev=233.58
00:11:21.810      clat percentiles (usec):
00:11:21.810       |  1.00th=[  129],  5.00th=[  137], 10.00th=[  155], 20.00th=[  231],
00:11:21.810       | 30.00th=[  265], 40.00th=[  273], 50.00th=[  281], 60.00th=[  289],
00:11:21.810       | 70.00th=[  293], 80.00th=[  297], 90.00th=[  306], 95.00th=[  314],
00:11:21.810       | 99.00th=[  330], 99.50th=[  363], 99.90th=[  914], 99.95th=[ 1729],
00:11:21.810       | 99.99th=[ 3163]
00:11:21.810     bw (  KiB/s): min=12912, max=15130, per=21.90%, avg=13408.29, stdev=777.91, samples=7
00:11:21.810     iops        : min= 3228, max= 3782, avg=3352.00, stdev=194.29, samples=7
00:11:21.810    lat (usec)   : 250=22.73%, 500=77.02%, 750=0.08%, 1000=0.08%
00:11:21.810    lat (msec)   : 2=0.05%, 4=0.02%, 10=0.01%
00:11:21.810    cpu          : usr=1.11%, sys=4.91%, ctx=13353, majf=0, minf=2
00:11:21.810    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:11:21.810       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:21.810       complete  : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:21.810       issued rwts: total=13138,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:21.810       latency   : target=0, window=0, percentile=100.00%, depth=1
00:11:21.810  job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=84929: Fri Dec 13 18:55:53 2024
00:11:21.810    read: IOPS=5162, BW=20.2MiB/s (21.1MB/s)(65.4MiB/3244msec)
00:11:21.810      slat (usec): min=6, max=8716, avg=14.53, stdev=91.11
00:11:21.810      clat (usec): min=3, max=1976, avg=178.01, stdev=44.14
00:11:21.810       lat (usec): min=151, max=8962, avg=192.54, stdev=101.76
00:11:21.810      clat percentiles (usec):
00:11:21.810       |  1.00th=[  149],  5.00th=[  153], 10.00th=[  155], 20.00th=[  159],
00:11:21.810       | 30.00th=[  163], 40.00th=[  165], 50.00th=[  169], 60.00th=[  174],
00:11:21.810       | 70.00th=[  178], 80.00th=[  186], 90.00th=[  198], 95.00th=[  260],
00:11:21.811       | 99.00th=[  310], 99.50th=[  326], 99.90th=[  478], 99.95th=[  832],
00:11:21.811       | 99.99th=[ 1876]
00:11:21.811     bw (  KiB/s): min=20072, max=21984, per=34.91%, avg=21368.00, stdev=741.77, samples=6
00:11:21.811     iops        : min= 5018, max= 5496, avg=5342.00, stdev=185.44, samples=6
00:11:21.811    lat (usec)   : 4=0.01%, 250=94.30%, 500=5.61%, 750=0.02%, 1000=0.02%
00:11:21.811    lat (msec)   : 2=0.04%
00:11:21.811    cpu          : usr=1.39%, sys=5.70%, ctx=17001, majf=0, minf=2
00:11:21.811    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:11:21.811       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:21.811       complete  : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:21.811       issued rwts: total=16747,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:21.811       latency   : target=0, window=0, percentile=100.00%, depth=1
00:11:21.811  job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=84930: Fri Dec 13 18:55:53 2024
00:11:21.811    read: IOPS=5424, BW=21.2MiB/s (22.2MB/s)(63.1MiB/2979msec)
00:11:21.811      slat (nsec): min=10702, max=67724, avg=13365.55, stdev=3280.31
00:11:21.811      clat (usec): min=141, max=2078, avg=169.90, stdev=28.08
00:11:21.811       lat (usec): min=155, max=2101, avg=183.26, stdev=28.32
00:11:21.811      clat percentiles (usec):
00:11:21.811       |  1.00th=[  149],  5.00th=[  153], 10.00th=[  155], 20.00th=[  159],
00:11:21.811       | 30.00th=[  161], 40.00th=[  165], 50.00th=[  167], 60.00th=[  172],
00:11:21.811       | 70.00th=[  176], 80.00th=[  180], 90.00th=[  188], 95.00th=[  192],
00:11:21.811       | 99.00th=[  200], 99.50th=[  206], 99.90th=[  371], 99.95th=[  519],
00:11:21.811       | 99.99th=[ 1991]
00:11:21.811     bw (  KiB/s): min=21208, max=21984, per=35.49%, avg=21724.80, stdev=316.31, samples=5
00:11:21.811     iops        : min= 5302, max= 5496, avg=5431.20, stdev=79.08, samples=5
00:11:21.811    lat (usec)   : 250=99.80%, 500=0.14%, 750=0.04%
00:11:21.811    lat (msec)   : 2=0.01%, 4=0.01%
00:11:21.811    cpu          : usr=1.24%, sys=5.84%, ctx=16162, majf=0, minf=2
00:11:21.811    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:11:21.811       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:21.811       complete  : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:21.811       issued rwts: total=16160,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:21.811       latency   : target=0, window=0, percentile=100.00%, depth=1
00:11:21.811  
00:11:21.811  Run status group 0 (all jobs):
00:11:21.811     READ: bw=59.8MiB/s (62.7MB/s), 13.0MiB/s-21.2MiB/s (13.7MB/s-22.2MB/s), io=225MiB (236MB), run=2979-3772msec
00:11:21.811  
00:11:21.811  Disk stats (read/write):
00:11:21.811    nvme0n1: ios=11135/0, merge=0/0, ticks=3192/0, in_queue=3192, util=95.34%
00:11:21.811    nvme0n2: ios=12156/0, merge=0/0, ticks=3366/0, in_queue=3366, util=95.34%
00:11:21.811    nvme0n3: ios=16336/0, merge=0/0, ticks=2905/0, in_queue=2905, util=96.34%
00:11:21.811    nvme0n4: ios=15571/0, merge=0/0, ticks=2699/0, in_queue=2699, util=96.76%
00:11:21.811   18:55:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:11:21.811   18:55:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2
00:11:22.070   18:55:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:11:22.070   18:55:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3
00:11:22.328   18:55:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:11:22.328   18:55:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4
00:11:22.587   18:55:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:11:22.587   18:55:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5
00:11:23.156   18:55:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:11:23.156   18:55:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6
00:11:23.414   18:55:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0
00:11:23.414   18:55:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 84887
00:11:23.414   18:55:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4
00:11:23.414   18:55:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:11:23.414  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:23.414   18:55:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:11:23.414   18:55:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0
00:11:23.414   18:55:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:23.415   18:55:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:11:23.415   18:55:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:23.415   18:55:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:11:23.415   18:55:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0
00:11:23.415  nvmf hotplug test: fio failed as expected
00:11:23.415   18:55:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']'
00:11:23.415   18:55:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected'
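(The trace above shows how the outcome is judged: the script waits for the background fio, records its exit status, 4 in this run, and only a non-zero status counts as success for the hotplug case. Roughly, with the exact wording living in target/fio.sh:

    fio_status=0
    wait $fio_pid || fio_status=$?
    if [ "$fio_status" -eq 0 ]; then
            : # a clean fio exit would mean the bdev hot-removal went unnoticed
    else
            echo 'nvmf hotplug test: fio failed as expected'
    fi

In this run the [ 4 -eq 0 ] test fails, so the expected-failure branch is taken.)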
00:11:23.415   18:55:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:23.673   18:55:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state
00:11:23.673   18:55:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state
00:11:23.673   18:55:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state
00:11:23.673   18:55:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT
00:11:23.673   18:55:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini
00:11:23.673   18:55:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup
00:11:23.673   18:55:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync
00:11:23.673   18:55:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:11:23.673   18:55:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e
00:11:23.673   18:55:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20}
00:11:23.673   18:55:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:11:23.673  rmmod nvme_tcp
00:11:23.673  rmmod nvme_fabrics
00:11:23.673  rmmod nvme_keyring
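(The rmmod lines are modprobe's own verbose output: nvmfcleanup retries modprobe -v -r nvme-tcp in a loop, and -v echoes each module it removes, here nvme_tcp plus the then-unused nvme_fabrics and nvme_keyring dependencies.)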
00:11:23.673   18:55:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:11:23.673   18:55:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e
00:11:23.673   18:55:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0
00:11:23.673   18:55:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 84405 ']'
00:11:23.673   18:55:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 84405
00:11:23.673   18:55:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 84405 ']'
00:11:23.673   18:55:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 84405
00:11:23.673    18:55:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname
00:11:23.673   18:55:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:23.673    18:55:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84405
00:11:23.673  killing process with pid 84405
00:11:23.673   18:55:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:11:23.673   18:55:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:11:23.673   18:55:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84405'
00:11:23.673   18:55:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 84405
00:11:23.673   18:55:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 84405
00:11:23.932   18:55:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:11:23.932   18:55:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:11:23.932   18:55:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:11:23.932   18:55:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr
00:11:23.932   18:55:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save
00:11:23.932   18:55:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:11:23.932   18:55:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore
00:11:23.932   18:55:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:11:23.932   18:55:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:11:23.932   18:55:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:11:23.932   18:55:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:11:23.932   18:55:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:11:23.932   18:55:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:11:23.932   18:55:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:11:23.932   18:55:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:11:23.932   18:55:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:11:23.932   18:55:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:11:23.932   18:55:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:11:23.932   18:55:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:11:24.192   18:55:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:11:24.192   18:55:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:11:24.192   18:55:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:11:24.192   18:55:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns
00:11:24.192   18:55:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:24.192   18:55:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:24.192    18:55:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:24.192   18:55:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0
00:11:24.192  
00:11:24.192  real	0m19.615s
00:11:24.192  user	1m14.106s
00:11:24.192  sys	0m9.155s
00:11:24.192   18:55:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:24.192  ************************************
00:11:24.192   18:55:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:11:24.192  END TEST nvmf_fio_target
00:11:24.192  ************************************
00:11:24.192   18:55:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp
00:11:24.192   18:55:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:11:24.192   18:55:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:24.192   18:55:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:11:24.192  ************************************
00:11:24.192  START TEST nvmf_bdevio
00:11:24.192  ************************************
00:11:24.192   18:55:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp
00:11:24.192  * Looking for test storage...
00:11:24.192  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:11:24.192    18:55:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:11:24.192     18:55:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version
00:11:24.192     18:55:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:11:24.452    18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:11:24.452    18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:11:24.452    18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l
00:11:24.452    18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l
00:11:24.452    18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-:
00:11:24.452    18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1
00:11:24.452    18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-:
00:11:24.452    18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2
00:11:24.452    18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<'
00:11:24.452    18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2
00:11:24.452    18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1
00:11:24.452    18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:11:24.452    18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in
00:11:24.452    18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1
00:11:24.452    18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 ))
00:11:24.452    18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:24.452     18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1
00:11:24.452     18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1
00:11:24.452     18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:24.452     18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1
00:11:24.452    18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1
00:11:24.452     18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2
00:11:24.452     18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2
00:11:24.452     18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:24.452     18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2
00:11:24.452    18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2
00:11:24.452    18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:11:24.452    18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:11:24.452    18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0
00:11:24.452    18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:24.452    18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:11:24.452  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:24.452  		--rc genhtml_branch_coverage=1
00:11:24.452  		--rc genhtml_function_coverage=1
00:11:24.452  		--rc genhtml_legend=1
00:11:24.452  		--rc geninfo_all_blocks=1
00:11:24.452  		--rc geninfo_unexecuted_blocks=1
00:11:24.452  		
00:11:24.452  		'
00:11:24.453    18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:11:24.453  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:24.453  		--rc genhtml_branch_coverage=1
00:11:24.453  		--rc genhtml_function_coverage=1
00:11:24.453  		--rc genhtml_legend=1
00:11:24.453  		--rc geninfo_all_blocks=1
00:11:24.453  		--rc geninfo_unexecuted_blocks=1
00:11:24.453  		
00:11:24.453  		'
00:11:24.453    18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:11:24.453  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:24.453  		--rc genhtml_branch_coverage=1
00:11:24.453  		--rc genhtml_function_coverage=1
00:11:24.453  		--rc genhtml_legend=1
00:11:24.453  		--rc geninfo_all_blocks=1
00:11:24.453  		--rc geninfo_unexecuted_blocks=1
00:11:24.453  		
00:11:24.453  		'
00:11:24.453    18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:11:24.453  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:24.453  		--rc genhtml_branch_coverage=1
00:11:24.453  		--rc genhtml_function_coverage=1
00:11:24.453  		--rc genhtml_legend=1
00:11:24.453  		--rc geninfo_all_blocks=1
00:11:24.453  		--rc geninfo_unexecuted_blocks=1
00:11:24.453  		
00:11:24.453  		'
00:11:24.453   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:11:24.453     18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s
00:11:24.453    18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:11:24.453    18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:11:24.453    18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:11:24.453    18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:11:24.453    18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:11:24.453    18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:11:24.453    18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:11:24.453    18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:11:24.453    18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:11:24.453     18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:11:24.453    18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:11:24.453    18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:11:24.453    18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:11:24.453    18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:11:24.453    18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:11:24.453    18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:11:24.453    18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:11:24.453     18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob
00:11:24.453     18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:11:24.453     18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:11:24.453     18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:11:24.453      18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:24.453      18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:24.453      18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:24.453      18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH
00:11:24.453      18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:24.453    18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0
00:11:24.453    18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:11:24.453    18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:11:24.453    18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:11:24.453    18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:11:24.453    18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:11:24.453    18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:11:24.453  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:11:24.453    18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:11:24.453    18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:11:24.453    18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0
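(The "[: : integer expression expected" message a few lines up is a benign shell warning rather than a failure: nvmf/common.sh line 33 performs a numeric test on a variable that is empty in this environment, as the trace shows with '[' '' -eq 1 ']'. Illustrating the behaviour with a hypothetical variable name:

    [ "" -eq 1 ]                  # prints "[: : integer expression expected" and the test simply fails
    [ "${SOME_FLAG:-0}" -eq 1 ]   # a defaulted expansion avoids the warning

Sourcing of common.sh continues normally afterwards, as the following lines show.)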
00:11:24.453   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64
00:11:24.453   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:11:24.453   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit
00:11:24.453   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:11:24.453   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:11:24.453   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs
00:11:24.453   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no
00:11:24.453   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns
00:11:24.453   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:24.453   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:24.453    18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:24.453   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:11:24.453   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:11:24.453   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:11:24.453   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:11:24.453   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:11:24.453   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init
00:11:24.453   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:11:24.453   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:11:24.453   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:11:24.453   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:11:24.453   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:11:24.453   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:11:24.453   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:11:24.453   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:11:24.453   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:11:24.453   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:11:24.453   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:11:24.453   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:11:24.453   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:11:24.453   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:11:24.453   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:11:24.453   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:11:24.454   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:11:24.454  Cannot find device "nvmf_init_br"
00:11:24.454   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true
00:11:24.454   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:11:24.454  Cannot find device "nvmf_init_br2"
00:11:24.454   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true
00:11:24.454   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:11:24.454  Cannot find device "nvmf_tgt_br"
00:11:24.454   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true
00:11:24.454   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:11:24.454  Cannot find device "nvmf_tgt_br2"
00:11:24.454   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true
00:11:24.454   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:11:24.454  Cannot find device "nvmf_init_br"
00:11:24.454   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true
00:11:24.454   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:11:24.454  Cannot find device "nvmf_init_br2"
00:11:24.454   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true
00:11:24.454   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:11:24.454  Cannot find device "nvmf_tgt_br"
00:11:24.454   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true
00:11:24.454   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:11:24.454  Cannot find device "nvmf_tgt_br2"
00:11:24.454   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true
00:11:24.454   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:11:24.454  Cannot find device "nvmf_br"
00:11:24.454   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true
00:11:24.454   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:11:24.454  Cannot find device "nvmf_init_if"
00:11:24.454   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true
00:11:24.454   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:11:24.454  Cannot find device "nvmf_init_if2"
00:11:24.454   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true
00:11:24.454   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:11:24.454  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:11:24.454   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true
00:11:24.454   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:11:24.454  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:11:24.454   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true
00:11:24.454   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:11:24.454   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:11:24.454   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:11:24.454   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:11:24.454   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:11:24.454   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:11:24.454   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:11:24.713   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:11:24.713   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:11:24.713   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:11:24.713   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:11:24.713   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:11:24.713   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:11:24.713   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:11:24.713   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:11:24.713   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:11:24.713   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:11:24.713   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:11:24.713   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:11:24.713   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:11:24.713   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:11:24.713   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:11:24.713   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:11:24.713   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:11:24.713   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:11:24.713   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:11:24.713   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:11:24.713   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:11:24.713   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:11:24.713   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:11:24.713   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:11:24.713   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:11:24.713   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:11:24.713  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:11:24.713  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms
00:11:24.713  
00:11:24.713  --- 10.0.0.3 ping statistics ---
00:11:24.713  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:24.713  rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms
00:11:24.713   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:11:24.713  PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:11:24.714  64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.058 ms
00:11:24.714  
00:11:24.714  --- 10.0.0.4 ping statistics ---
00:11:24.714  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:24.714  rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms
00:11:24.714   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:11:24.714  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:24.714  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms
00:11:24.714  
00:11:24.714  --- 10.0.0.1 ping statistics ---
00:11:24.714  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:24.714  rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms
00:11:24.714   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:11:24.714  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:11:24.714  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms
00:11:24.714  
00:11:24.714  --- 10.0.0.2 ping statistics ---
00:11:24.714  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:24.714  rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms
00:11:24.714   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:24.714   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0
00:11:24.714   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:11:24.714   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:24.714   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:11:24.714   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:11:24.714   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:24.714   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:11:24.714   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:11:24.714   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78
00:11:24.714   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:11:24.714   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable
00:11:24.714   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:11:24.714   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=85319
00:11:24.714   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 85319
00:11:24.714   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78
00:11:24.714   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 85319 ']'
00:11:24.714   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:24.714   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:24.714  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:24.714   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:24.714   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:24.714   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:11:24.714  [2024-12-13 18:55:56.523976] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:11:24.714  [2024-12-13 18:55:56.524602] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:24.973  [2024-12-13 18:55:56.672277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:11:24.973  [2024-12-13 18:55:56.706274] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:11:24.973  [2024-12-13 18:55:56.706331] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:11:24.973  [2024-12-13 18:55:56.706342] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:11:24.973  [2024-12-13 18:55:56.706349] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:11:24.973  [2024-12-13 18:55:56.706356] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:11:24.973  [2024-12-13 18:55:56.707365] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4
00:11:24.973  [2024-12-13 18:55:56.708321] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5
00:11:24.973  [2024-12-13 18:55:56.708458] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6
00:11:24.973  [2024-12-13 18:55:56.708878] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:11:25.232   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:25.232   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0
00:11:25.232   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:11:25.232   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable
00:11:25.232   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:11:25.232   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:11:25.232   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:11:25.232   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:25.232   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:11:25.232  [2024-12-13 18:55:56.877064] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:11:25.232   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:25.232   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:11:25.232   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:25.232   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:11:25.232  Malloc0
00:11:25.232   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:25.232   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:11:25.232   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:25.233   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:11:25.233   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:25.233   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:11:25.233   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:25.233   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:11:25.233   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:25.233   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:11:25.233   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:25.233   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:11:25.233  [2024-12-13 18:55:56.942976] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:11:25.233   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:25.233   18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62
00:11:25.233    18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json
00:11:25.233    18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=()
00:11:25.233    18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config
00:11:25.233    18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:11:25.233    18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:11:25.233  {
00:11:25.233    "params": {
00:11:25.233      "name": "Nvme$subsystem",
00:11:25.233      "trtype": "$TEST_TRANSPORT",
00:11:25.233      "traddr": "$NVMF_FIRST_TARGET_IP",
00:11:25.233      "adrfam": "ipv4",
00:11:25.233      "trsvcid": "$NVMF_PORT",
00:11:25.233      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:11:25.233      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:11:25.233      "hdgst": ${hdgst:-false},
00:11:25.233      "ddgst": ${ddgst:-false}
00:11:25.233    },
00:11:25.233    "method": "bdev_nvme_attach_controller"
00:11:25.233  }
00:11:25.233  EOF
00:11:25.233  )")
00:11:25.233     18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat
00:11:25.233    18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq .
00:11:25.233     18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=,
00:11:25.233     18:55:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:11:25.233    "params": {
00:11:25.233      "name": "Nvme1",
00:11:25.233      "trtype": "tcp",
00:11:25.233      "traddr": "10.0.0.3",
00:11:25.233      "adrfam": "ipv4",
00:11:25.233      "trsvcid": "4420",
00:11:25.233      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:11:25.233      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:11:25.233      "hdgst": false,
00:11:25.233      "ddgst": false
00:11:25.233    },
00:11:25.233    "method": "bdev_nvme_attach_controller"
00:11:25.233  }'
00:11:25.233  [2024-12-13 18:55:57.011427] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:11:25.233  [2024-12-13 18:55:57.011533] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85355 ]
00:11:25.492  [2024-12-13 18:55:57.165433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:11:25.492  [2024-12-13 18:55:57.212749] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:11:25.492  [2024-12-13 18:55:57.212888] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:11:25.492  [2024-12-13 18:55:57.213071] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:11:25.751  I/O targets:
00:11:25.751    Nvme1n1: 131072 blocks of 512 bytes (64 MiB)
00:11:25.751  
00:11:25.751  
00:11:25.751       CUnit - A unit testing framework for C - Version 2.1-3
00:11:25.751       http://cunit.sourceforge.net/
00:11:25.751  
00:11:25.751  
00:11:25.751  Suite: bdevio tests on: Nvme1n1
00:11:25.751    Test: blockdev write read block ...passed
00:11:25.751    Test: blockdev write zeroes read block ...passed
00:11:25.751    Test: blockdev write zeroes read no split ...passed
00:11:25.751    Test: blockdev write zeroes read split ...passed
00:11:25.751    Test: blockdev write zeroes read split partial ...passed
00:11:25.751    Test: blockdev reset ...[2024-12-13 18:55:57.509948] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:11:25.751  [2024-12-13 18:55:57.510154] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xffb1b0 (9): Bad file descriptor
00:11:25.751  [2024-12-13 18:55:57.523437] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful.
00:11:25.751  passed
00:11:25.751    Test: blockdev write read 8 blocks ...passed
00:11:25.751    Test: blockdev write read size > 128k ...passed
00:11:25.751    Test: blockdev write read invalid size ...passed
00:11:25.751    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:25.751    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:25.751    Test: blockdev write read max offset ...passed
00:11:26.011    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:26.011    Test: blockdev writev readv 8 blocks ...passed
00:11:26.011    Test: blockdev writev readv 30 x 1block ...passed
00:11:26.011    Test: blockdev writev readv block ...passed
00:11:26.011    Test: blockdev writev readv size > 128k ...passed
00:11:26.011    Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:26.011    Test: blockdev comparev and writev ...[2024-12-13 18:55:57.694532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:11:26.011  [2024-12-13 18:55:57.694589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:11:26.011  [2024-12-13 18:55:57.694626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:11:26.011  [2024-12-13 18:55:57.694637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:11:26.011  [2024-12-13 18:55:57.694986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:11:26.011  [2024-12-13 18:55:57.695013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:11:26.011  [2024-12-13 18:55:57.695030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:11:26.011  [2024-12-13 18:55:57.695040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:11:26.011  [2024-12-13 18:55:57.695349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:11:26.011  [2024-12-13 18:55:57.695370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:11:26.011  [2024-12-13 18:55:57.695387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:11:26.011  [2024-12-13 18:55:57.695396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:11:26.011  [2024-12-13 18:55:57.695699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:11:26.011  [2024-12-13 18:55:57.695722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:11:26.011  [2024-12-13 18:55:57.695739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:11:26.011  [2024-12-13 18:55:57.695749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:11:26.011  passed
00:11:26.011    Test: blockdev nvme passthru rw ...passed
00:11:26.011    Test: blockdev nvme passthru vendor specific ...passed
00:11:26.011    Test: blockdev nvme admin passthru ...[2024-12-13 18:55:57.777507] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:11:26.011  [2024-12-13 18:55:57.777540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:11:26.011  [2024-12-13 18:55:57.777673] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:11:26.011  [2024-12-13 18:55:57.777689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:11:26.011  [2024-12-13 18:55:57.777814] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:11:26.011  [2024-12-13 18:55:57.777829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:11:26.011  [2024-12-13 18:55:57.777954] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:11:26.011  [2024-12-13 18:55:57.777970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:11:26.011  passed
00:11:26.271    Test: blockdev copy ...passed
00:11:26.271  
00:11:26.271  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:11:26.271                suites      1      1    n/a      0        0
00:11:26.271                 tests     23     23     23      0        0
00:11:26.271               asserts    152    152    152      0      n/a
00:11:26.271  
00:11:26.271  Elapsed time =    0.870 seconds
00:11:26.271   18:55:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:26.271   18:55:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:26.271   18:55:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:11:26.271   18:55:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:26.271   18:55:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT
00:11:26.271   18:55:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini
00:11:26.271   18:55:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup
00:11:26.271   18:55:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync
00:11:26.271   18:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:11:26.271   18:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e
00:11:26.271   18:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20}
00:11:26.271   18:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:11:26.271  rmmod nvme_tcp
00:11:26.271  rmmod nvme_fabrics
00:11:26.271  rmmod nvme_keyring
00:11:26.530   18:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:11:26.530   18:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e
00:11:26.530   18:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0
00:11:26.530   18:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 85319 ']'
00:11:26.530   18:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 85319
00:11:26.530   18:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 85319 ']'
00:11:26.530   18:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 85319
00:11:26.530    18:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname
00:11:26.530   18:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:26.530    18:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85319
00:11:26.530  killing process with pid 85319
00:11:26.530   18:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3
00:11:26.530   18:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']'
00:11:26.530   18:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85319'
00:11:26.530   18:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 85319
00:11:26.530   18:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 85319
00:11:26.789   18:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:11:26.789   18:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:11:26.789   18:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:11:26.789   18:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr
00:11:26.789   18:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save
00:11:26.789   18:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:11:26.789   18:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore
00:11:26.789   18:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:11:26.789   18:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:11:26.789   18:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:11:26.789   18:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:11:26.789   18:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:11:26.789   18:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:11:26.789   18:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:11:26.789   18:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:11:26.789   18:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:11:26.789   18:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:11:26.789   18:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:11:26.789   18:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:11:26.789   18:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:11:26.790   18:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:11:26.790   18:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:11:26.790   18:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns
00:11:26.790   18:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:26.790   18:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:26.790    18:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:26.790   18:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0
00:11:26.790  
00:11:26.790  real	0m2.679s
00:11:26.790  user	0m8.510s
00:11:26.790  sys	0m0.873s
00:11:26.790   18:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:26.790   18:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:11:26.790  ************************************
00:11:26.790  END TEST nvmf_bdevio
00:11:26.790  ************************************
00:11:27.050   18:55:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:11:27.050  
00:11:27.050  real	3m28.610s
00:11:27.050  user	10m51.707s
00:11:27.050  sys	1m4.237s
00:11:27.050   18:55:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:27.050   18:55:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:11:27.050  ************************************
00:11:27.050  END TEST nvmf_target_core
00:11:27.050  ************************************
00:11:27.050   18:55:58 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp
00:11:27.050   18:55:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:11:27.050   18:55:58 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:27.050   18:55:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:11:27.050  ************************************
00:11:27.050  START TEST nvmf_target_extra
00:11:27.050  ************************************
00:11:27.050   18:55:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp
00:11:27.050  * Looking for test storage...
00:11:27.050  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf
00:11:27.050    18:55:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:11:27.050     18:55:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version
00:11:27.050     18:55:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:11:27.050    18:55:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:11:27.050    18:55:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:11:27.050    18:55:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l
00:11:27.050    18:55:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l
00:11:27.050    18:55:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-:
00:11:27.050    18:55:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1
00:11:27.050    18:55:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-:
00:11:27.050    18:55:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2
00:11:27.050    18:55:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<'
00:11:27.050    18:55:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2
00:11:27.050    18:55:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1
00:11:27.050    18:55:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:11:27.050    18:55:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in
00:11:27.050    18:55:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1
00:11:27.050    18:55:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 ))
00:11:27.050    18:55:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:27.050     18:55:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1
00:11:27.050     18:55:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1
00:11:27.050     18:55:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:27.050     18:55:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1
00:11:27.050    18:55:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1
00:11:27.050     18:55:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2
00:11:27.050     18:55:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2
00:11:27.050     18:55:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:27.050     18:55:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2
00:11:27.050    18:55:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2
00:11:27.050    18:55:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:11:27.050    18:55:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:11:27.050    18:55:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0
00:11:27.050    18:55:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:27.050    18:55:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:11:27.050  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:27.050  		--rc genhtml_branch_coverage=1
00:11:27.050  		--rc genhtml_function_coverage=1
00:11:27.050  		--rc genhtml_legend=1
00:11:27.050  		--rc geninfo_all_blocks=1
00:11:27.050  		--rc geninfo_unexecuted_blocks=1
00:11:27.050  		
00:11:27.050  		'
00:11:27.050    18:55:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:11:27.050  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:27.050  		--rc genhtml_branch_coverage=1
00:11:27.050  		--rc genhtml_function_coverage=1
00:11:27.050  		--rc genhtml_legend=1
00:11:27.050  		--rc geninfo_all_blocks=1
00:11:27.050  		--rc geninfo_unexecuted_blocks=1
00:11:27.050  		
00:11:27.050  		'
00:11:27.050    18:55:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:11:27.050  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:27.050  		--rc genhtml_branch_coverage=1
00:11:27.050  		--rc genhtml_function_coverage=1
00:11:27.050  		--rc genhtml_legend=1
00:11:27.050  		--rc geninfo_all_blocks=1
00:11:27.050  		--rc geninfo_unexecuted_blocks=1
00:11:27.050  		
00:11:27.050  		'
00:11:27.050    18:55:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:11:27.050  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:27.050  		--rc genhtml_branch_coverage=1
00:11:27.050  		--rc genhtml_function_coverage=1
00:11:27.050  		--rc genhtml_legend=1
00:11:27.050  		--rc geninfo_all_blocks=1
00:11:27.050  		--rc geninfo_unexecuted_blocks=1
00:11:27.050  		
00:11:27.050  		'
00:11:27.050   18:55:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:11:27.050     18:55:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s
00:11:27.050    18:55:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:11:27.050    18:55:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:11:27.050    18:55:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:11:27.050    18:55:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:11:27.050    18:55:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:11:27.050    18:55:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:11:27.050    18:55:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:11:27.050    18:55:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:11:27.050    18:55:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:11:27.050     18:55:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:11:27.050    18:55:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:11:27.050    18:55:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:11:27.050    18:55:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:11:27.050    18:55:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:11:27.050    18:55:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:11:27.050    18:55:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:11:27.050    18:55:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:11:27.050     18:55:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob
00:11:27.050     18:55:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:11:27.050     18:55:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:11:27.050     18:55:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:11:27.051      18:55:58 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:27.051      18:55:58 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:27.051      18:55:58 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:27.051      18:55:58 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH
00:11:27.051      18:55:58 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:27.051    18:55:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0
00:11:27.051    18:55:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:11:27.051    18:55:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:11:27.051    18:55:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:11:27.051    18:55:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:11:27.051    18:55:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:11:27.051    18:55:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:11:27.051  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:11:27.051    18:55:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:11:27.051    18:55:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:11:27.051    18:55:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0
00:11:27.051   18:55:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT
00:11:27.051   18:55:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@")
00:11:27.051   18:55:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]]
00:11:27.051   18:55:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp
00:11:27.051   18:55:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:11:27.051   18:55:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:27.051   18:55:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:11:27.051  ************************************
00:11:27.051  START TEST nvmf_example
00:11:27.051  ************************************
00:11:27.051   18:55:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp
00:11:27.311  * Looking for test storage...
00:11:27.311  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:11:27.311    18:55:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:11:27.311     18:55:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version
00:11:27.311     18:55:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:11:27.311    18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:11:27.311    18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:11:27.311    18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l
00:11:27.311    18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l
00:11:27.311    18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-:
00:11:27.311    18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1
00:11:27.311    18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-:
00:11:27.311    18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2
00:11:27.311    18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<'
00:11:27.311    18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2
00:11:27.311    18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1
00:11:27.311    18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:11:27.311    18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in
00:11:27.311    18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1
00:11:27.311    18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 ))
00:11:27.311    18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:27.311     18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1
00:11:27.311     18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1
00:11:27.311     18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:27.311     18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1
00:11:27.311    18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1
00:11:27.311     18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2
00:11:27.311     18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2
00:11:27.311     18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:27.311     18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2
00:11:27.311    18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2
00:11:27.311    18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:11:27.311    18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:11:27.311    18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0
00:11:27.311    18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:27.311    18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:11:27.311  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:27.311  		--rc genhtml_branch_coverage=1
00:11:27.311  		--rc genhtml_function_coverage=1
00:11:27.311  		--rc genhtml_legend=1
00:11:27.311  		--rc geninfo_all_blocks=1
00:11:27.311  		--rc geninfo_unexecuted_blocks=1
00:11:27.311  		
00:11:27.311  		'
00:11:27.311    18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:11:27.311  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:27.311  		--rc genhtml_branch_coverage=1
00:11:27.311  		--rc genhtml_function_coverage=1
00:11:27.311  		--rc genhtml_legend=1
00:11:27.311  		--rc geninfo_all_blocks=1
00:11:27.311  		--rc geninfo_unexecuted_blocks=1
00:11:27.311  		
00:11:27.311  		'
00:11:27.311    18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:11:27.311  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:27.311  		--rc genhtml_branch_coverage=1
00:11:27.311  		--rc genhtml_function_coverage=1
00:11:27.311  		--rc genhtml_legend=1
00:11:27.311  		--rc geninfo_all_blocks=1
00:11:27.311  		--rc geninfo_unexecuted_blocks=1
00:11:27.311  		
00:11:27.311  		'
00:11:27.311    18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:11:27.311  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:27.311  		--rc genhtml_branch_coverage=1
00:11:27.311  		--rc genhtml_function_coverage=1
00:11:27.311  		--rc genhtml_legend=1
00:11:27.311  		--rc geninfo_all_blocks=1
00:11:27.311  		--rc geninfo_unexecuted_blocks=1
00:11:27.311  		
00:11:27.311  		'
00:11:27.311   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:11:27.311     18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s
00:11:27.311    18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:11:27.311    18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:11:27.311    18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:11:27.311    18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:11:27.311    18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:11:27.311    18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:11:27.311    18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:11:27.311    18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:11:27.311    18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:11:27.311     18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:11:27.311    18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:11:27.311    18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:11:27.311    18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:11:27.311    18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:11:27.311    18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:11:27.311    18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:11:27.311    18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:11:27.311     18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob
00:11:27.312     18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:11:27.312     18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:11:27.312     18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:11:27.312      18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:27.312      18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:27.312      18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:27.312      18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH
00:11:27.312      18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
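
The paths/export.sh trace above prepends the Go, protoc and golangci directories on every source, so PATH ends up carrying the same entries many times over. A minimal, hypothetical dedupe helper (not part of the SPDK scripts) that would collapse such duplicates while keeping the first occurrence of each entry:

  # hypothetical helper, assuming bash: keep the first occurrence of each PATH entry
  dedupe_path() {
      local IFS=: entry out=
      for entry in $PATH; do
          [[ ":$out:" == *":$entry:"* ]] || out=${out:+$out:}$entry
      done
      printf '%s\n' "$out"
  }
  PATH=$(dedupe_path)
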
00:11:27.312    18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0
00:11:27.312    18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:11:27.312    18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:11:27.312    18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:11:27.312    18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:11:27.312    18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:11:27.312    18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:11:27.312  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:11:27.312    18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:11:27.312    18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:11:27.312    18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0
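
The "[: : integer expression expected" message from common.sh line 33 above comes from the test builtin being run as '[' '' -eq 1 ']': the variable under test is empty, and '[' refuses to compare an empty string numerically. A minimal sketch of the usual guard (SOME_FLAG is a placeholder name; the trace does not show which variable is involved):

  # give the flag a numeric default so '[' always sees an integer
  flag=${SOME_FLAG:-0}          # SOME_FLAG is hypothetical, not from the trace
  if [ "$flag" -eq 1 ]; then
      echo "flag enabled"
  fi
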
00:11:27.312   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf")
00:11:27.312   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64
00:11:27.312   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512
00:11:27.312   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args
00:11:27.312   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']'
00:11:27.312   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000)
00:11:27.312   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}")
00:11:27.312   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test
00:11:27.312   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable
00:11:27.312   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:27.312   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit
00:11:27.312   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:11:27.312   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:11:27.312   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs
00:11:27.312   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no
00:11:27.312   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns
00:11:27.312   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:27.312   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:27.312    18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:27.312   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:11:27.312   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:11:27.312   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:11:27.312   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:11:27.312   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:11:27.312   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@460 -- # nvmf_veth_init
00:11:27.312   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:11:27.312   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:11:27.312   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:11:27.312   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:11:27.312   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:11:27.312   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:11:27.312   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:11:27.312   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:11:27.312   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:11:27.312   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:11:27.312   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:11:27.312   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:11:27.312   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:11:27.312   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:11:27.312   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:11:27.312   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:11:27.312   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:11:27.312  Cannot find device "nvmf_init_br"
00:11:27.312   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@162 -- # true
00:11:27.312   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:11:27.312  Cannot find device "nvmf_init_br2"
00:11:27.312   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@163 -- # true
00:11:27.312   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:11:27.312  Cannot find device "nvmf_tgt_br"
00:11:27.312   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@164 -- # true
00:11:27.312   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:11:27.571  Cannot find device "nvmf_tgt_br2"
00:11:27.571   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@165 -- # true
00:11:27.571   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:11:27.571  Cannot find device "nvmf_init_br"
00:11:27.571   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@166 -- # true
00:11:27.571   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:11:27.571  Cannot find device "nvmf_init_br2"
00:11:27.571   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@167 -- # true
00:11:27.571   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:11:27.571  Cannot find device "nvmf_tgt_br"
00:11:27.571   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@168 -- # true
00:11:27.571   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:11:27.571  Cannot find device "nvmf_tgt_br2"
00:11:27.571   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@169 -- # true
00:11:27.571   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:11:27.571  Cannot find device "nvmf_br"
00:11:27.571   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@170 -- # true
00:11:27.571   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:11:27.571  Cannot find device "nvmf_init_if"
00:11:27.571   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@171 -- # true
00:11:27.571   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:11:27.571  Cannot find device "nvmf_init_if2"
00:11:27.571   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@172 -- # true
00:11:27.571   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:11:27.571  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:11:27.571   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@173 -- # true
00:11:27.571   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:11:27.571  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:11:27.571   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@174 -- # true
00:11:27.571   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:11:27.571   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:11:27.571   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:11:27.571   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:11:27.571   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:11:27.571   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:11:27.571   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:11:27.571   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:11:27.571   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:11:27.571   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:11:27.572   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:11:27.572   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:11:27.572   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:11:27.572   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:11:27.572   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:11:27.572   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:11:27.572   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:11:27.572   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:11:27.572   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:11:27.572   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:11:27.572   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:11:27.572   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:11:27.572   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:11:27.831   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:11:27.831   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:11:27.831   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:11:27.831   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:11:27.831   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:11:27.831   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:11:27.831   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:11:27.831   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:11:27.831   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:11:27.831   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:11:27.831  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:11:27.831  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms
00:11:27.831  
00:11:27.831  --- 10.0.0.3 ping statistics ---
00:11:27.831  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:27.831  rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms
00:11:27.831   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:11:27.831  PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:11:27.831  64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.088 ms
00:11:27.831  
00:11:27.831  --- 10.0.0.4 ping statistics ---
00:11:27.831  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:27.831  rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms
00:11:27.831   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:11:27.831  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:27.831  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms
00:11:27.831  
00:11:27.831  --- 10.0.0.1 ping statistics ---
00:11:27.831  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:27.831  rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms
00:11:27.831   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:11:27.831  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:11:27.831  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms
00:11:27.831  
00:11:27.831  --- 10.0.0.2 ping statistics ---
00:11:27.831  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:27.831  rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms
00:11:27.831   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:27.831   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@461 -- # return 0
00:11:27.831   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:11:27.831   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:27.831   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:11:27.831   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:11:27.831   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:27.831   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:11:27.831   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp
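
Everything nvmf_veth_init does above boils down to one topology: a network namespace nvmf_tgt_ns_spdk for the target, veth pairs for two initiator and two target interfaces, the host-side peers enslaved to the bridge nvmf_br, 10.0.0.1-10.0.0.4/24 addressing, and iptables ACCEPT rules for TCP port 4420. A condensed sketch of that setup, showing only one interface per side (the run above creates two of each):

  # condensed sketch of the topology built above (one veth pair per side shown)
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator side
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_tgt_br up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
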
00:11:27.831   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF'
00:11:27.831   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example
00:11:27.831   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable
00:11:27.831   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:27.831   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']'
00:11:27.831   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}")
00:11:27.831   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF
00:11:27.831   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=85645
00:11:27.831   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:11:27.831   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 85645
00:11:27.831   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 85645 ']'
00:11:27.831   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:27.831   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:27.831  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:27.831   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:27.831   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:27.831   18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:29.209   18:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:29.209   18:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0
00:11:29.209   18:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example
00:11:29.209   18:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable
00:11:29.209   18:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:29.209   18:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:11:29.209   18:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:29.209   18:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:29.209   18:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:29.209    18:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512
00:11:29.209    18:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:29.209    18:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:29.209    18:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:29.209   18:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 '
00:11:29.209   18:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:11:29.209   18:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:29.209   18:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:29.209   18:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:29.209   18:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs
00:11:29.209   18:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:11:29.209   18:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:29.209   18:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:29.209   18:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:29.209   18:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:11:29.209   18:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:29.209   18:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:29.209   18:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
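
The rpc_cmd calls above configure the running example target end to end: a TCP transport, a 64 MiB / 512 B-block Malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as a namespace, and a TCP listener on 10.0.0.3:4420. Roughly the same configuration expressed as direct rpc.py calls (a sketch: it assumes the default RPC socket, whereas the test goes through its rpc_cmd wrapper):

  # sketch: the same RPC sequence via scripts/rpc.py (default /var/tmp/spdk.sock assumed)
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512                       # returns "Malloc0"
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
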
00:11:29.209   18:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
00:11:29.209   18:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:11:39.193  Initializing NVMe Controllers
00:11:39.193  Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:11:39.193  Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:11:39.193  Initialization complete. Launching workers.
00:11:39.193  ========================================================
00:11:39.193                                                                                                               Latency(us)
00:11:39.193  Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:11:39.193  TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  0:   16802.80      65.64    3811.53     631.56   23100.05
00:11:39.193  ========================================================
00:11:39.193  Total                                                                    :   16802.80      65.64    3811.53     631.56   23100.05
00:11:39.193  
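
The throughput and latency figures above come from the spdk_nvme_perf invocation at target/nvmf_example.sh@61. As a reading aid, the same command with the knobs spelled out (flag meanings per spdk_nvme_perf's usage text; nothing new is being run here):

  # annotated restatement of the invocation above:
  #   -q 64      queue depth
  #   -o 4096    I/O size in bytes
  #   -w randrw  random mixed read/write workload
  #   -M 30      read share of the mix (30% reads / 70% writes)
  #   -t 10      run time in seconds
  #   -r ...     NVMe-oF transport ID of the listener created above
  spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
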
00:11:39.452   18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:11:39.452   18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini
00:11:39.452   18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup
00:11:39.452   18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync
00:11:39.452   18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:11:39.452   18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e
00:11:39.452   18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20}
00:11:39.452   18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:11:39.452  rmmod nvme_tcp
00:11:39.452  rmmod nvme_fabrics
00:11:39.452  rmmod nvme_keyring
00:11:39.452   18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:11:39.452   18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e
00:11:39.452   18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0
00:11:39.452   18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 85645 ']'
00:11:39.452   18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 85645
00:11:39.452   18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 85645 ']'
00:11:39.452   18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 85645
00:11:39.452    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname
00:11:39.452   18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:39.452    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85645
00:11:39.452  killing process with pid 85645
00:11:39.452   18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf
00:11:39.452   18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']'
00:11:39.452   18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85645'
00:11:39.452   18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 85645
00:11:39.452   18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 85645
00:11:39.711  nvmf threads initialize successfully
00:11:39.711  bdev subsystem init successfully
00:11:39.711  created a nvmf target service
00:11:39.711  create targets's poll groups done
00:11:39.711  all subsystems of target started
00:11:39.711  nvmf target is running
00:11:39.711  all subsystems of target stopped
00:11:39.711  destroy targets's poll groups done
00:11:39.711  destroyed the nvmf target service
00:11:39.711  bdev subsystem finish successfully
00:11:39.711  nvmf threads destroy successfully
00:11:39.711   18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:11:39.711   18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:11:39.711   18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:11:39.711   18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr
00:11:39.711   18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save
00:11:39.711   18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:11:39.711   18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore
00:11:39.711   18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:11:39.711   18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:11:39.711   18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:11:39.711   18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:11:39.711   18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:11:39.711   18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:11:39.711   18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:11:39.711   18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:11:39.711   18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:11:39.711   18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:11:39.711   18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:11:39.711   18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:11:39.711   18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:11:39.711   18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:11:39.970   18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:11:39.970   18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@246 -- # remove_spdk_ns
00:11:39.970   18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:39.970   18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:39.970    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:39.970   18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@300 -- # return 0
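
The teardown above relies on the tagging done during setup: every rule installed through the ipts wrapper carries an "SPDK_NVMF:..." comment, so the iptr step can drop exactly those rules without touching anything else on the host. The mechanism, reduced to its core:

  # sketch of the iptr cleanup traced above: filter out the tagged rules and reload
  iptables-save | grep -v SPDK_NVMF | iptables-restore
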
00:11:39.970   18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test
00:11:39.970   18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable
00:11:39.970   18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:39.970  
00:11:39.970  real	0m12.771s
00:11:39.970  user	0m44.940s
00:11:39.970  sys	0m2.187s
00:11:39.970   18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:39.970   18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:39.970  ************************************
00:11:39.970  END TEST nvmf_example
00:11:39.970  ************************************
00:11:39.970   18:56:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:11:39.970   18:56:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:11:39.970   18:56:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:39.970   18:56:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:11:39.970  ************************************
00:11:39.970  START TEST nvmf_filesystem
00:11:39.970  ************************************
00:11:39.970   18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:11:39.970  * Looking for test storage...
00:11:39.970  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:11:39.970     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:11:39.970      18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version
00:11:39.970      18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:11:40.233     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:11:40.233     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:11:40.233     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:11:40.233     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:11:40.233     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-:
00:11:40.233     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1
00:11:40.233     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-:
00:11:40.233     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2
00:11:40.233     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<'
00:11:40.233     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2
00:11:40.233     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1
00:11:40.233     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:11:40.233     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in
00:11:40.233     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1
00:11:40.233     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:11:40.233     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:40.233      18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1
00:11:40.233      18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1
00:11:40.233      18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:40.233      18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1
00:11:40.233     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1
00:11:40.233      18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2
00:11:40.233      18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2
00:11:40.233      18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:40.233      18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2
00:11:40.233     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2
00:11:40.233     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:11:40.233     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:11:40.233     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0
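
The cmp_versions trace above is the stock "is lcov older than 2" check: both version strings are split on ".", "-" and ":" and compared field by field as integers, so 1.15 vs 2 is decided at the first field. A minimal re-statement of that logic, assuming purely numeric fields (the real script sanitizes each field through its decimal helper):

  # sketch: return 0 (true) when $1 is strictly older than $2
  version_lt() {
      local IFS=.-:
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$2"
      local i n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( i = 0; i < n; i++ )); do
          local a=${ver1[i]:-0} b=${ver2[i]:-0}
          if (( a < b )); then return 0; fi
          if (( a > b )); then return 1; fi
      done
      return 1                        # equal versions are not "less than"
  }
  version_lt 1.15 2 && echo "lcov is older than 2"
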
00:11:40.233     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:40.234     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:11:40.234  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:40.234  		--rc genhtml_branch_coverage=1
00:11:40.234  		--rc genhtml_function_coverage=1
00:11:40.234  		--rc genhtml_legend=1
00:11:40.234  		--rc geninfo_all_blocks=1
00:11:40.234  		--rc geninfo_unexecuted_blocks=1
00:11:40.234  		
00:11:40.234  		'
00:11:40.234     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:11:40.234  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:40.234  		--rc genhtml_branch_coverage=1
00:11:40.234  		--rc genhtml_function_coverage=1
00:11:40.234  		--rc genhtml_legend=1
00:11:40.234  		--rc geninfo_all_blocks=1
00:11:40.234  		--rc geninfo_unexecuted_blocks=1
00:11:40.234  		
00:11:40.234  		'
00:11:40.234     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:11:40.234  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:40.234  		--rc genhtml_branch_coverage=1
00:11:40.234  		--rc genhtml_function_coverage=1
00:11:40.234  		--rc genhtml_legend=1
00:11:40.234  		--rc geninfo_all_blocks=1
00:11:40.234  		--rc geninfo_unexecuted_blocks=1
00:11:40.234  		
00:11:40.234  		'
00:11:40.234     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:11:40.234  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:40.234  		--rc genhtml_branch_coverage=1
00:11:40.234  		--rc genhtml_function_coverage=1
00:11:40.234  		--rc genhtml_legend=1
00:11:40.234  		--rc geninfo_all_blocks=1
00:11:40.234  		--rc geninfo_unexecuted_blocks=1
00:11:40.234  		
00:11:40.234  		'
00:11:40.234   18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh
00:11:40.234    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd
00:11:40.234    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e
00:11:40.234    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob
00:11:40.234    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob
00:11:40.234    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit
00:11:40.234    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']'
00:11:40.234    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]]
00:11:40.234    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh
00:11:40.234     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR=
00:11:40.234     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n
00:11:40.234     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n
00:11:40.234     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y
00:11:40.234     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=y
00:11:40.234     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n
00:11:40.234     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local
00:11:40.234     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n
00:11:40.234     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR=
00:11:40.234     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y
00:11:40.234     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y
00:11:40.234     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n
00:11:40.234     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n
00:11:40.234     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n
00:11:40.234     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y
00:11:40.234     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR=
00:11:40.234     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1
00:11:40.234     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n
00:11:40.234     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y
00:11:40.234     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:11:40.234     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n
00:11:40.234     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y
00:11:40.234     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n
00:11:40.234     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n
00:11:40.234     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH=
00:11:40.234     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y
00:11:40.234     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y
00:11:40.234     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y
00:11:40.234     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n
00:11:40.234     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y
00:11:40.234     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y
00:11:40.234     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH=
00:11:40.234     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n
00:11:40.234     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n
00:11:40.234     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR=
00:11:40.234     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB=
00:11:40.234     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n
00:11:40.234     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y
00:11:40.234     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build
00:11:40.234     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n
00:11:40.234     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n
00:11:40.234     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y
00:11:40.234     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n
00:11:40.234     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include
00:11:40.234     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR=
00:11:40.234     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n
00:11:40.234     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y
00:11:40.234     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y
00:11:40.234     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n
00:11:40.234     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y
00:11:40.234     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y
00:11:40.234     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y
00:11:40.234     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n
00:11:40.234     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio
00:11:40.234     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH=
00:11:40.234     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n
00:11:40.234     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y
00:11:40.234     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native
00:11:40.234     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y
00:11:40.234     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n
00:11:40.234     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y
00:11:40.234     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n
00:11:40.234     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y
00:11:40.234     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n
00:11:40.234     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR=
00:11:40.235     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=y
00:11:40.235     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y
00:11:40.235     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y
00:11:40.235     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib
00:11:40.235     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs
00:11:40.235     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y
00:11:40.235     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y
00:11:40.235     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y
00:11:40.235     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH=
00:11:40.235     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n
00:11:40.235     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n
00:11:40.235     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=y
00:11:40.235     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y
00:11:40.235     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n
00:11:40.235     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y
00:11:40.235     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y
00:11:40.235     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n
00:11:40.235     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128
00:11:40.235     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n
00:11:40.235     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR=
00:11:40.235     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y
00:11:40.235     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n
00:11:40.235     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX=
00:11:40.235     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y
00:11:40.235     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n
00:11:40.235    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh
00:11:40.235       18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh
00:11:40.235      18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common
00:11:40.235     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common
00:11:40.235     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk
00:11:40.235     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin
00:11:40.235     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app
00:11:40.235     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples
00:11:40.235     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz")
00:11:40.235     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt")
00:11:40.235     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt")
00:11:40.235     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost")
00:11:40.235     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd")
00:11:40.235     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt")
00:11:40.235     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]]
00:11:40.235     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H
00:11:40.235  #define SPDK_CONFIG_H
00:11:40.235  #define SPDK_CONFIG_AIO_FSDEV 1
00:11:40.235  #define SPDK_CONFIG_APPS 1
00:11:40.235  #define SPDK_CONFIG_ARCH native
00:11:40.235  #undef SPDK_CONFIG_ASAN
00:11:40.235  #define SPDK_CONFIG_AVAHI 1
00:11:40.235  #undef SPDK_CONFIG_CET
00:11:40.235  #define SPDK_CONFIG_COPY_FILE_RANGE 1
00:11:40.235  #define SPDK_CONFIG_COVERAGE 1
00:11:40.235  #define SPDK_CONFIG_CROSS_PREFIX 
00:11:40.235  #undef SPDK_CONFIG_CRYPTO
00:11:40.235  #undef SPDK_CONFIG_CRYPTO_MLX5
00:11:40.235  #undef SPDK_CONFIG_CUSTOMOCF
00:11:40.235  #undef SPDK_CONFIG_DAOS
00:11:40.235  #define SPDK_CONFIG_DAOS_DIR 
00:11:40.235  #define SPDK_CONFIG_DEBUG 1
00:11:40.235  #undef SPDK_CONFIG_DPDK_COMPRESSDEV
00:11:40.235  #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build
00:11:40.235  #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include
00:11:40.235  #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib
00:11:40.235  #undef SPDK_CONFIG_DPDK_PKG_CONFIG
00:11:40.235  #undef SPDK_CONFIG_DPDK_UADK
00:11:40.235  #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:11:40.235  #define SPDK_CONFIG_EXAMPLES 1
00:11:40.235  #undef SPDK_CONFIG_FC
00:11:40.235  #define SPDK_CONFIG_FC_PATH 
00:11:40.235  #define SPDK_CONFIG_FIO_PLUGIN 1
00:11:40.235  #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio
00:11:40.235  #define SPDK_CONFIG_FSDEV 1
00:11:40.235  #undef SPDK_CONFIG_FUSE
00:11:40.235  #undef SPDK_CONFIG_FUZZER
00:11:40.235  #define SPDK_CONFIG_FUZZER_LIB 
00:11:40.235  #define SPDK_CONFIG_GOLANG 1
00:11:40.235  #define SPDK_CONFIG_HAVE_ARC4RANDOM 1
00:11:40.235  #define SPDK_CONFIG_HAVE_EVP_MAC 1
00:11:40.235  #define SPDK_CONFIG_HAVE_EXECINFO_H 1
00:11:40.235  #define SPDK_CONFIG_HAVE_KEYUTILS 1
00:11:40.235  #undef SPDK_CONFIG_HAVE_LIBARCHIVE
00:11:40.235  #undef SPDK_CONFIG_HAVE_LIBBSD
00:11:40.235  #undef SPDK_CONFIG_HAVE_LZ4
00:11:40.235  #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1
00:11:40.235  #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC
00:11:40.235  #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1
00:11:40.235  #define SPDK_CONFIG_IDXD 1
00:11:40.235  #define SPDK_CONFIG_IDXD_KERNEL 1
00:11:40.235  #undef SPDK_CONFIG_IPSEC_MB
00:11:40.235  #define SPDK_CONFIG_IPSEC_MB_DIR 
00:11:40.235  #define SPDK_CONFIG_ISAL 1
00:11:40.235  #define SPDK_CONFIG_ISAL_CRYPTO 1
00:11:40.235  #define SPDK_CONFIG_ISCSI_INITIATOR 1
00:11:40.235  #define SPDK_CONFIG_LIBDIR 
00:11:40.235  #undef SPDK_CONFIG_LTO
00:11:40.235  #define SPDK_CONFIG_MAX_LCORES 128
00:11:40.235  #define SPDK_CONFIG_MAX_NUMA_NODES 1
00:11:40.235  #define SPDK_CONFIG_NVME_CUSE 1
00:11:40.235  #undef SPDK_CONFIG_OCF
00:11:40.235  #define SPDK_CONFIG_OCF_PATH 
00:11:40.235  #define SPDK_CONFIG_OPENSSL_PATH 
00:11:40.235  #undef SPDK_CONFIG_PGO_CAPTURE
00:11:40.235  #define SPDK_CONFIG_PGO_DIR 
00:11:40.235  #undef SPDK_CONFIG_PGO_USE
00:11:40.235  #define SPDK_CONFIG_PREFIX /usr/local
00:11:40.235  #undef SPDK_CONFIG_RAID5F
00:11:40.235  #undef SPDK_CONFIG_RBD
00:11:40.235  #define SPDK_CONFIG_RDMA 1
00:11:40.235  #define SPDK_CONFIG_RDMA_PROV verbs
00:11:40.235  #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1
00:11:40.235  #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1
00:11:40.235  #define SPDK_CONFIG_RDMA_SET_TOS 1
00:11:40.235  #define SPDK_CONFIG_SHARED 1
00:11:40.235  #undef SPDK_CONFIG_SMA
00:11:40.235  #define SPDK_CONFIG_TESTS 1
00:11:40.235  #undef SPDK_CONFIG_TSAN
00:11:40.235  #define SPDK_CONFIG_UBLK 1
00:11:40.235  #define SPDK_CONFIG_UBSAN 1
00:11:40.235  #undef SPDK_CONFIG_UNIT_TESTS
00:11:40.235  #undef SPDK_CONFIG_URING
00:11:40.235  #define SPDK_CONFIG_URING_PATH 
00:11:40.235  #undef SPDK_CONFIG_URING_ZNS
00:11:40.235  #define SPDK_CONFIG_USDT 1
00:11:40.235  #undef SPDK_CONFIG_VBDEV_COMPRESS
00:11:40.235  #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5
00:11:40.235  #define SPDK_CONFIG_VFIO_USER 1
00:11:40.235  #define SPDK_CONFIG_VFIO_USER_DIR 
00:11:40.235  #define SPDK_CONFIG_VHOST 1
00:11:40.235  #define SPDK_CONFIG_VIRTIO 1
00:11:40.235  #undef SPDK_CONFIG_VTUNE
00:11:40.235  #define SPDK_CONFIG_VTUNE_DIR 
00:11:40.235  #define SPDK_CONFIG_WERROR 1
00:11:40.235  #define SPDK_CONFIG_WPDK_DIR 
00:11:40.235  #undef SPDK_CONFIG_XNVME
00:11:40.235  #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]]
00:11:40.235     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS ))
00:11:40.235    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:11:40.235     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob
00:11:40.235     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:11:40.236     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:11:40.236     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:11:40.236      18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:40.236      18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:40.236      18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:40.236      18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH
00:11:40.236      18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:40.236    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common
00:11:40.236       18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common
00:11:40.236      18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm
00:11:40.236     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm
00:11:40.236      18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../
00:11:40.236     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk
00:11:40.236     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A
00:11:40.236     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name
00:11:40.236     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power
00:11:40.236      18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s
00:11:40.236     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux
00:11:40.236     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=()
00:11:40.236     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO
00:11:40.236     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1
00:11:40.236     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0
00:11:40.236     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0
00:11:40.236     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0
00:11:40.236     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]=
00:11:40.236     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E'
00:11:40.236     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)
00:11:40.236     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]]
00:11:40.236     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]]
00:11:40.236     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ QEMU != QEMU ]]
00:11:40.236     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]]
00:11:40.236    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1
00:11:40.236    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY
00:11:40.236    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0
00:11:40.236    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS
00:11:40.236    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0
00:11:40.236    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND
00:11:40.236    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1
00:11:40.236    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST
00:11:40.236    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0
00:11:40.236    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST
00:11:40.236    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # :
00:11:40.236    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD
00:11:40.236    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0
00:11:40.236    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD
00:11:40.236    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0
00:11:40.236    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL
00:11:40.236    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0
00:11:40.236    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI
00:11:40.236    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0
00:11:40.236    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR
00:11:40.236    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0
00:11:40.236    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME
00:11:40.236    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0
00:11:40.236    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR
00:11:40.236    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0
00:11:40.236    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP
00:11:40.236    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 0
00:11:40.236    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI
00:11:40.236    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0
00:11:40.236    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE
00:11:40.236    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0
00:11:40.236    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP
00:11:40.236    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1
00:11:40.236    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF
00:11:40.236    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1
00:11:40.236    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER
00:11:40.236    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0
00:11:40.236    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU
00:11:40.236    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0
00:11:40.236    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER
00:11:40.236    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0
00:11:40.236    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT
00:11:40.236    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp
00:11:40.236    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT
00:11:40.236    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0
00:11:40.236    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD
00:11:40.236    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0
00:11:40.236    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST
00:11:40.236    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0
00:11:40.236    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV
00:11:40.237    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0
00:11:40.237    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID
00:11:40.237    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0
00:11:40.237    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT
00:11:40.237    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0
00:11:40.237    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS
00:11:40.237    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0
00:11:40.237    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT
00:11:40.237    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0
00:11:40.237    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL
00:11:40.237    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0
00:11:40.237    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS
00:11:40.237    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0
00:11:40.237    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN
00:11:40.237    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1
00:11:40.237    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN
00:11:40.237    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : /home/vagrant/spdk_repo/dpdk/build
00:11:40.237    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK
00:11:40.237    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0
00:11:40.237    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT
00:11:40.237    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0
00:11:40.237    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO
00:11:40.237    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0
00:11:40.237    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL
00:11:40.237    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0
00:11:40.237    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF
00:11:40.237    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0
00:11:40.237    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD
00:11:40.237    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0
00:11:40.237    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL
00:11:40.237    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : v23.11
00:11:40.237    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK
00:11:40.237    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true
00:11:40.237    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X
00:11:40.237    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0
00:11:40.237    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING
00:11:40.237    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 1
00:11:40.237    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT
00:11:40.237    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0
00:11:40.237    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO
00:11:40.237    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0
00:11:40.237    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER
00:11:40.237    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0
00:11:40.237    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD
00:11:40.237    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # :
00:11:40.237    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS
00:11:40.237    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0
00:11:40.237    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA
00:11:40.237    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0
00:11:40.237    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS
00:11:40.237    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0
00:11:40.237    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME
00:11:40.237    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0
00:11:40.237    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL
00:11:40.237    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0
00:11:40.237    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA
00:11:40.237    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0
00:11:40.237    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA
00:11:40.237    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # :
00:11:40.237    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET
00:11:40.237    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 1
00:11:40.237    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS
00:11:40.237    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 1
00:11:40.237    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT
00:11:40.237    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0
00:11:40.237    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP
00:11:40.237    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0
00:11:40.237    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT
00:11:40.237    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib
00:11:40.237    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib
00:11:40.237    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib
00:11:40.237    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib
00:11:40.237    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:11:40.237    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:11:40.237    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:11:40.237    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:11:40.237    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes
00:11:40.237    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes
00:11:40.237    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python
00:11:40.238    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python
00:11:40.238    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1
00:11:40.238    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1
00:11:40.238    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
00:11:40.238    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
00:11:40.238    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
00:11:40.238    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
00:11:40.238    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file
00:11:40.238    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file
00:11:40.238    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat
00:11:40.238    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so
00:11:40.238    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
00:11:40.238    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
00:11:40.238    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
00:11:40.238    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
00:11:40.238    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']'
00:11:40.238    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR
00:11:40.238    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin
00:11:40.238    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin
00:11:40.238    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples
00:11:40.238    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples
00:11:40.238    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:11:40.238    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:11:40.238    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:11:40.238    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:11:40.238    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer
00:11:40.238    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer
00:11:40.238    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:11:40.238    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes
00:11:40.238    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0
00:11:40.238    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1
00:11:40.238    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV=
00:11:40.238    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]]
00:11:40.238    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]]
00:11:40.238    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh'
00:11:40.238    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]=
00:11:40.238    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt=
00:11:40.238    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']'
00:11:40.238    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind=
00:11:40.238    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind=
00:11:40.238     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s
00:11:40.238    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']'
00:11:40.238    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096
00:11:40.238    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes
00:11:40.238    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes
00:11:40.238    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make
00:11:40.238    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10
00:11:40.238    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096
00:11:40.238    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096
00:11:40.238    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=()
00:11:40.238    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE=
00:11:40.238    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@"
00:11:40.238    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in
00:11:40.238    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp
00:11:40.238    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 85924 ]]
00:11:40.238    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 85924
00:11:40.238    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648
00:11:40.238    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]]
00:11:40.238    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648
00:11:40.238    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir
00:11:40.238    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses
00:11:40.238    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use
00:11:40.238    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates
00:11:40.238     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX
00:11:40.238    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.Vlrrfe
00:11:40.238    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
00:11:40.238    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]]
00:11:40.238    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]]
00:11:40.238    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.Vlrrfe/tests/target /tmp/spdk.Vlrrfe
00:11:40.238    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512
00:11:40.238    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:11:40.238     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T
00:11:40.239     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=13239492608
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6345990144
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6256394240
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266425344
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=2486431744
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506571776
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=20140032
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=13239492608
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6345990144
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6266281984
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266425344
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=143360
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=1253269504
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253281792
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt/output
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=97182261248
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=2520518656
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n'
00:11:40.239  * Looking for test storage...
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}"
00:11:40.239     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}'
00:11:40.239     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/home
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=13239492608
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size ))
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size ))
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]]
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]]
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ /home == / ]]
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:11:40.239  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ '
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]]
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]]
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec
00:11:40.239    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore
00:11:40.240    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]'
00:11:40.240    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 ))
00:11:40.240    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x
00:11:40.240    18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:11:40.240     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version
00:11:40.240     18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:11:40.240    18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:11:40.240    18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:11:40.240    18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:11:40.240    18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:11:40.240    18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-:
00:11:40.240    18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1
00:11:40.240    18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-:
00:11:40.240    18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2
00:11:40.240    18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<'
00:11:40.240    18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2
00:11:40.240    18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1
00:11:40.240    18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:11:40.500    18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in
00:11:40.500    18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1
00:11:40.500    18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:11:40.500    18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:40.500     18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1
00:11:40.500     18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1
00:11:40.500     18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:40.500     18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1
00:11:40.500    18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1
00:11:40.500     18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2
00:11:40.500     18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2
00:11:40.500     18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:40.500     18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2
00:11:40.500    18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2
00:11:40.500    18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:11:40.500    18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:11:40.500    18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0
00:11:40.500    18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:40.500    18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:11:40.500  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:40.500  		--rc genhtml_branch_coverage=1
00:11:40.500  		--rc genhtml_function_coverage=1
00:11:40.500  		--rc genhtml_legend=1
00:11:40.500  		--rc geninfo_all_blocks=1
00:11:40.500  		--rc geninfo_unexecuted_blocks=1
00:11:40.500  		
00:11:40.500  		'
00:11:40.500    18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:11:40.500  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:40.500  		--rc genhtml_branch_coverage=1
00:11:40.500  		--rc genhtml_function_coverage=1
00:11:40.500  		--rc genhtml_legend=1
00:11:40.500  		--rc geninfo_all_blocks=1
00:11:40.500  		--rc geninfo_unexecuted_blocks=1
00:11:40.500  		
00:11:40.500  		'
00:11:40.500    18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:11:40.500  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:40.500  		--rc genhtml_branch_coverage=1
00:11:40.500  		--rc genhtml_function_coverage=1
00:11:40.500  		--rc genhtml_legend=1
00:11:40.500  		--rc geninfo_all_blocks=1
00:11:40.500  		--rc geninfo_unexecuted_blocks=1
00:11:40.500  		
00:11:40.500  		'
00:11:40.500    18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:11:40.500  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:40.500  		--rc genhtml_branch_coverage=1
00:11:40.500  		--rc genhtml_function_coverage=1
00:11:40.500  		--rc genhtml_legend=1
00:11:40.500  		--rc geninfo_all_blocks=1
00:11:40.500  		--rc geninfo_unexecuted_blocks=1
00:11:40.500  		
00:11:40.500  		'
00:11:40.500   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:11:40.500     18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s
00:11:40.500    18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:11:40.500    18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:11:40.500    18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:11:40.500    18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:11:40.500    18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:11:40.500    18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:11:40.500    18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:11:40.500    18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:11:40.500    18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:11:40.500     18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:11:40.500    18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:11:40.500    18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:11:40.500    18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:11:40.500    18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:11:40.500    18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:11:40.500    18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:11:40.500    18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:11:40.500     18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob
00:11:40.500     18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:11:40.500     18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:11:40.500     18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:11:40.500      18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:40.500      18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:40.500      18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:40.500      18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH
00:11:40.500      18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:40.500    18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0
00:11:40.500    18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:11:40.500    18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:11:40.500    18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:11:40.500    18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:11:40.500    18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:11:40.500    18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:11:40.500  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:11:40.500    18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:11:40.500    18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:11:40.500    18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0
00:11:40.500   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512
00:11:40.500   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512
00:11:40.500   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit
00:11:40.501   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:11:40.501   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:11:40.501   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs
00:11:40.501   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no
00:11:40.501   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns
00:11:40.501   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:40.501   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:40.501    18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:40.501   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:11:40.501   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:11:40.501   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:11:40.501   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:11:40.501   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:11:40.501   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@460 -- # nvmf_veth_init
00:11:40.501   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:11:40.501   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:11:40.501   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:11:40.501   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:11:40.501   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:11:40.501   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:11:40.501   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:11:40.501   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:11:40.501   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:11:40.501   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:11:40.501   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:11:40.501   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:11:40.501   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:11:40.501   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:11:40.501   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:11:40.501   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:11:40.501   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:11:40.501  Cannot find device "nvmf_init_br"
00:11:40.501   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@162 -- # true
00:11:40.501   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:11:40.501  Cannot find device "nvmf_init_br2"
00:11:40.501   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@163 -- # true
00:11:40.501   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:11:40.501  Cannot find device "nvmf_tgt_br"
00:11:40.501   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@164 -- # true
00:11:40.501   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:11:40.501  Cannot find device "nvmf_tgt_br2"
00:11:40.501   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@165 -- # true
00:11:40.501   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:11:40.501  Cannot find device "nvmf_init_br"
00:11:40.501   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@166 -- # true
00:11:40.501   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:11:40.501  Cannot find device "nvmf_init_br2"
00:11:40.501   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@167 -- # true
00:11:40.501   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:11:40.501  Cannot find device "nvmf_tgt_br"
00:11:40.501   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@168 -- # true
00:11:40.501   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:11:40.501  Cannot find device "nvmf_tgt_br2"
00:11:40.501   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@169 -- # true
00:11:40.501   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:11:40.501  Cannot find device "nvmf_br"
00:11:40.501   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@170 -- # true
00:11:40.501   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:11:40.501  Cannot find device "nvmf_init_if"
00:11:40.501   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@171 -- # true
00:11:40.501   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:11:40.501  Cannot find device "nvmf_init_if2"
00:11:40.501   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@172 -- # true
00:11:40.501   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:11:40.501  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:11:40.501   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@173 -- # true
00:11:40.501   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:11:40.501  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:11:40.501   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@174 -- # true
00:11:40.501   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:11:40.501   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:11:40.501   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:11:40.501   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:11:40.501   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:11:40.501   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:11:40.501   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:11:40.501   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:11:40.501   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:11:40.501   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:11:40.501   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:11:40.501   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:11:40.501   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:11:40.501   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:11:40.761   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:11:40.761   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:11:40.761   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:11:40.761   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:11:40.761   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:11:40.761   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:11:40.761   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:11:40.761   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:11:40.761   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:11:40.761   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:11:40.761   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:11:40.761   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:11:40.761   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:11:40.761   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:11:40.761   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:11:40.761   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:11:40.761   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:11:40.761   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:11:40.761   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:11:40.761  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:11:40.761  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms
00:11:40.761  
00:11:40.761  --- 10.0.0.3 ping statistics ---
00:11:40.761  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:40.761  rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms
00:11:40.761   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:11:40.761  PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:11:40.761  64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms
00:11:40.761  
00:11:40.761  --- 10.0.0.4 ping statistics ---
00:11:40.761  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:40.761  rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms
00:11:40.761   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:11:40.761  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:40.761  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms
00:11:40.761  
00:11:40.761  --- 10.0.0.1 ping statistics ---
00:11:40.761  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:40.761  rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms
00:11:40.761   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:11:40.761  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:11:40.761  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms
00:11:40.761  
00:11:40.761  --- 10.0.0.2 ping statistics ---
00:11:40.761  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:40.761  rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms
00:11:40.761   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:40.761   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@461 -- # return 0
00:11:40.761   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:11:40.761   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:40.761   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:11:40.762   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:11:40.762   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:40.762   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:11:40.762   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:11:40.762   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0
00:11:40.762   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:11:40.762   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:40.762   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x
00:11:40.762  ************************************
00:11:40.762  START TEST nvmf_filesystem_no_in_capsule
00:11:40.762  ************************************
00:11:40.762   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0
00:11:40.762   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0
00:11:40.762   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF
00:11:40.762   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:11:40.762   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable
00:11:40.762   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:40.762   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=86113
00:11:40.762   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:11:40.762   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 86113
00:11:40.762   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 86113 ']'
00:11:40.762   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:40.762   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:40.762  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:40.762   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:40.762   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:40.762   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:40.762  [2024-12-13 18:56:12.564735] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:11:40.762  [2024-12-13 18:56:12.564822] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:41.021  [2024-12-13 18:56:12.715444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:11:41.021  [2024-12-13 18:56:12.751162] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:11:41.021  [2024-12-13 18:56:12.751257] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:11:41.021  [2024-12-13 18:56:12.751270] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:11:41.021  [2024-12-13 18:56:12.751279] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:11:41.021  [2024-12-13 18:56:12.751286] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:11:41.021  [2024-12-13 18:56:12.752500] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:11:41.021  [2024-12-13 18:56:12.752577] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:11:41.021  [2024-12-13 18:56:12.752710] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:11:41.021  [2024-12-13 18:56:12.752715] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:11:41.279   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:41.279   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0
00:11:41.279   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:11:41.280   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable
00:11:41.280   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:41.280   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:11:41.280   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1
00:11:41.280   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
00:11:41.280   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:41.280   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:41.280  [2024-12-13 18:56:12.927473] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:11:41.280   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:41.280   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1
00:11:41.280   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:41.280   18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:41.280  Malloc1
00:11:41.280   18:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:41.280   18:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:11:41.280   18:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:41.280   18:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:41.280   18:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:41.280   18:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:11:41.280   18:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:41.280   18:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:41.280   18:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:41.280   18:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:11:41.280   18:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:41.280   18:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:41.539  [2024-12-13 18:56:13.103186] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:11:41.539   18:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:41.539    18:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1
00:11:41.539    18:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1
00:11:41.539    18:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info
00:11:41.539    18:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs
00:11:41.539    18:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb
00:11:41.539     18:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1
00:11:41.539     18:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:41.539     18:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:41.539     18:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:41.539    18:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[
00:11:41.539  {
00:11:41.539  "aliases": [
00:11:41.539  "0d5bd46d-0f2a-4cd7-a656-b2085799b86c"
00:11:41.539  ],
00:11:41.539  "assigned_rate_limits": {
00:11:41.539  "r_mbytes_per_sec": 0,
00:11:41.539  "rw_ios_per_sec": 0,
00:11:41.539  "rw_mbytes_per_sec": 0,
00:11:41.539  "w_mbytes_per_sec": 0
00:11:41.539  },
00:11:41.539  "block_size": 512,
00:11:41.539  "claim_type": "exclusive_write",
00:11:41.539  "claimed": true,
00:11:41.539  "driver_specific": {},
00:11:41.539  "memory_domains": [
00:11:41.539  {
00:11:41.539  "dma_device_id": "system",
00:11:41.539  "dma_device_type": 1
00:11:41.539  },
00:11:41.539  {
00:11:41.539  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:41.539  "dma_device_type": 2
00:11:41.539  }
00:11:41.539  ],
00:11:41.539  "name": "Malloc1",
00:11:41.539  "num_blocks": 1048576,
00:11:41.539  "product_name": "Malloc disk",
00:11:41.539  "supported_io_types": {
00:11:41.539  "abort": true,
00:11:41.539  "compare": false,
00:11:41.539  "compare_and_write": false,
00:11:41.539  "copy": true,
00:11:41.539  "flush": true,
00:11:41.539  "get_zone_info": false,
00:11:41.539  "nvme_admin": false,
00:11:41.539  "nvme_io": false,
00:11:41.539  "nvme_io_md": false,
00:11:41.539  "nvme_iov_md": false,
00:11:41.539  "read": true,
00:11:41.539  "reset": true,
00:11:41.539  "seek_data": false,
00:11:41.539  "seek_hole": false,
00:11:41.539  "unmap": true,
00:11:41.539  "write": true,
00:11:41.539  "write_zeroes": true,
00:11:41.539  "zcopy": true,
00:11:41.539  "zone_append": false,
00:11:41.539  "zone_management": false
00:11:41.539  },
00:11:41.539  "uuid": "0d5bd46d-0f2a-4cd7-a656-b2085799b86c",
00:11:41.539  "zoned": false
00:11:41.539  }
00:11:41.539  ]'
00:11:41.539     18:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:11:41.539    18:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512
00:11:41.539     18:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:11:41.539    18:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576
00:11:41.539    18:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512
00:11:41.539    18:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512
00:11:41.539   18:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912
00:11:41.539   18:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid=8f716199-a3ae-4f70-9a3a-0556e5b7497a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
00:11:41.797   18:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME
00:11:41.797   18:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0
00:11:41.797   18:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:11:41.797   18:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:11:41.798   18:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2
00:11:43.698   18:56:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:11:43.698    18:56:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:11:43.698    18:56:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:11:43.698   18:56:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:11:43.698   18:56:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:11:43.698   18:56:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0
00:11:43.698    18:56:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL
00:11:43.698    18:56:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)'
00:11:43.698   18:56:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1
00:11:43.698    18:56:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1
00:11:43.698    18:56:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1
00:11:43.698    18:56:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:11:43.698    18:56:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912
00:11:43.698   18:56:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912
00:11:43.698   18:56:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device
00:11:43.698   18:56:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size ))
00:11:43.698   18:56:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
00:11:43.956   18:56:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe
00:11:43.956   18:56:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1
00:11:44.893   18:56:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']'
00:11:44.893   18:56:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1
00:11:44.893   18:56:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:11:44.893   18:56:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:44.893   18:56:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:44.893  ************************************
00:11:44.893  START TEST filesystem_ext4
00:11:44.893  ************************************
00:11:44.893   18:56:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1
00:11:44.893   18:56:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4
00:11:44.893   18:56:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:11:44.893   18:56:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1
00:11:44.893   18:56:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4
00:11:44.893   18:56:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1
00:11:44.893   18:56:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0
00:11:44.893   18:56:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force
00:11:44.893   18:56:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']'
00:11:44.893   18:56:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F
00:11:44.893   18:56:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1
00:11:44.893  mke2fs 1.47.0 (5-Feb-2023)
00:11:45.152  Discarding device blocks:      0/522240             done                            
00:11:45.152  Creating filesystem with 522240 1k blocks and 130560 inodes
00:11:45.152  Filesystem UUID: d48e368c-4251-4e2c-9707-12092641f33d
00:11:45.152  Superblock backups stored on blocks: 
00:11:45.152  	8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409
00:11:45.152  
00:11:45.152  Allocating group tables:  0/64     done                            
00:11:45.152  Writing inode tables:  0/64     done                            
00:11:45.152  Creating journal (8192 blocks): done
00:11:45.152  Writing superblocks and filesystem accounting information:  0/64     done
00:11:45.152  
00:11:45.152   18:56:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0
00:11:45.152   18:56:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:11:50.425   18:56:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:11:50.425   18:56:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync
00:11:50.425   18:56:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:11:50.425   18:56:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync
00:11:50.425   18:56:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0
00:11:50.425   18:56:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device
00:11:50.425   18:56:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 86113
00:11:50.425   18:56:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:11:50.425   18:56:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:11:50.425   18:56:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:11:50.425   18:56:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:11:50.425  
00:11:50.425  real	0m5.591s
00:11:50.425  user	0m0.033s
00:11:50.425  sys	0m0.052s
00:11:50.426   18:56:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:50.426   18:56:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x
00:11:50.426  ************************************
00:11:50.426  END TEST filesystem_ext4
00:11:50.426  ************************************
00:11:50.685   18:56:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1
00:11:50.685   18:56:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:11:50.685   18:56:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:50.685   18:56:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:50.685  ************************************
00:11:50.685  START TEST filesystem_btrfs
00:11:50.685  ************************************
00:11:50.685   18:56:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1
00:11:50.685   18:56:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs
00:11:50.685   18:56:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:11:50.685   18:56:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1
00:11:50.685   18:56:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs
00:11:50.685   18:56:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1
00:11:50.685   18:56:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0
00:11:50.685   18:56:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force
00:11:50.685   18:56:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']'
00:11:50.685   18:56:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f
00:11:50.685   18:56:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1
00:11:50.685  btrfs-progs v6.8.1
00:11:50.685  See https://btrfs.readthedocs.io for more information.
00:11:50.685  
00:11:50.685  Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ...
00:11:50.685  NOTE: several default settings have changed in version 5.15, please make sure
00:11:50.685        this does not affect your deployments:
00:11:50.685        - DUP for metadata (-m dup)
00:11:50.685        - enabled no-holes (-O no-holes)
00:11:50.685        - enabled free-space-tree (-R free-space-tree)
00:11:50.685  
00:11:50.685  Label:              (null)
00:11:50.685  UUID:               f27d4c38-de05-4aed-908a-c193f3e3c966
00:11:50.685  Node size:          16384
00:11:50.685  Sector size:        4096	(CPU page size: 4096)
00:11:50.685  Filesystem size:    510.00MiB
00:11:50.685  Block group profiles:
00:11:50.685    Data:             single            8.00MiB
00:11:50.685    Metadata:         DUP              32.00MiB
00:11:50.685    System:           DUP               8.00MiB
00:11:50.685  SSD detected:       yes
00:11:50.685  Zoned device:       no
00:11:50.685  Features:           extref, skinny-metadata, no-holes, free-space-tree
00:11:50.685  Checksum:           crc32c
00:11:50.685  Number of devices:  1
00:11:50.685  Devices:
00:11:50.685     ID        SIZE  PATH          
00:11:50.685      1   510.00MiB  /dev/nvme0n1p1
00:11:50.685  
00:11:50.685   18:56:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0
00:11:50.685   18:56:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:11:50.685   18:56:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:11:50.685   18:56:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync
00:11:50.685   18:56:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:11:50.685   18:56:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync
00:11:50.685   18:56:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0
00:11:50.685   18:56:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device
00:11:50.685   18:56:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 86113
00:11:50.685   18:56:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:11:50.685   18:56:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:11:50.685   18:56:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:11:50.686   18:56:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:11:50.686  
00:11:50.686  real	0m0.221s
00:11:50.686  user	0m0.020s
00:11:50.686  sys	0m0.058s
00:11:50.686   18:56:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:50.686   18:56:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x
00:11:50.686  ************************************
00:11:50.686  END TEST filesystem_btrfs
00:11:50.686  ************************************
00:11:50.945   18:56:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1
00:11:50.945   18:56:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:11:50.945   18:56:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:50.945   18:56:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:50.945  ************************************
00:11:50.945  START TEST filesystem_xfs
00:11:50.945  ************************************
00:11:50.945   18:56:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1
00:11:50.945   18:56:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs
00:11:50.945   18:56:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:11:50.945   18:56:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1
00:11:50.945   18:56:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs
00:11:50.945   18:56:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1
00:11:50.945   18:56:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0
00:11:50.945   18:56:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force
00:11:50.945   18:56:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']'
00:11:50.945   18:56:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f
00:11:50.945   18:56:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1
00:11:50.945  meta-data=/dev/nvme0n1p1         isize=512    agcount=4, agsize=32640 blks
00:11:50.945           =                       sectsz=512   attr=2, projid32bit=1
00:11:50.945           =                       crc=1        finobt=1, sparse=1, rmapbt=0
00:11:50.945           =                       reflink=1    bigtime=1 inobtcount=1 nrext64=0
00:11:50.945  data     =                       bsize=4096   blocks=130560, imaxpct=25
00:11:50.945           =                       sunit=0      swidth=0 blks
00:11:50.945  naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
00:11:50.945  log      =internal log           bsize=4096   blocks=16384, version=2
00:11:50.945           =                       sectsz=512   sunit=0 blks, lazy-count=1
00:11:50.945  realtime =none                   extsz=4096   blocks=0, rtextents=0
00:11:51.513  Discarding blocks...Done.
00:11:51.513   18:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0
00:11:51.513   18:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:11:54.074   18:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:11:54.074   18:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync
00:11:54.074   18:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:11:54.074   18:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync
00:11:54.074   18:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0
00:11:54.074   18:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device
00:11:54.074   18:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 86113
00:11:54.074   18:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:11:54.074   18:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:11:54.074   18:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:11:54.074   18:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:11:54.074  
00:11:54.074  real	0m3.114s
00:11:54.074  user	0m0.024s
00:11:54.074  sys	0m0.048s
00:11:54.074   18:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:54.074   18:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x
00:11:54.074  ************************************
00:11:54.074  END TEST filesystem_xfs
00:11:54.074  ************************************
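The trace above follows target/filesystem.sh: build a filesystem on the first partition of the NVMe/TCP-attached namespace, mount it, run a create/sync/delete cycle, unmount, and confirm the block devices are still listed while the target process still answers kill -0. A minimal standalone sketch of that flow, using the device and mountpoint names from this run (not the SPDK script itself):

#!/usr/bin/env bash
# Sketch of the filesystem smoke test traced above; run as root with the
# NVMe/TCP namespace already attached and partitioned.
set -euo pipefail

dev=/dev/nvme0n1p1      # first partition of the attached namespace
mnt=/mnt/device         # scratch mountpoint used by the test

mkfs.xfs -f "$dev"      # xfs/btrfs force with -f; ext4 spells it -F
mount "$dev" "$mnt"

touch "$mnt/aaa"        # exercise the write path once
sync
rm "$mnt/aaa"
sync

umount "$mnt"

# After unmounting, the namespace and its partition must still be visible,
# i.e. the nvmf target survived the I/O.
lsblk -l -o NAME | grep -q -w nvme0n1
lsblk -l -o NAME | grep -q -w nvme0n1p1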
00:11:54.074   18:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
00:11:54.074   18:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync
00:11:54.074   18:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:11:54.074  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:54.074   18:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:11:54.074   18:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0
00:11:54.074   18:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:11:54.074   18:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:54.074   18:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:11:54.075   18:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:54.075   18:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0
00:11:54.075   18:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:54.075   18:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:54.075   18:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:54.075   18:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:54.075   18:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT
00:11:54.075   18:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 86113
00:11:54.075   18:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 86113 ']'
00:11:54.075   18:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 86113
00:11:54.075    18:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname
00:11:54.075   18:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:54.075    18:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86113
00:11:54.075   18:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:11:54.075   18:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:11:54.075  killing process with pid 86113
00:11:54.075   18:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86113'
00:11:54.075   18:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 86113
00:11:54.075   18:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 86113
00:11:54.672   18:56:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid=
00:11:54.672  
00:11:54.672  real	0m13.732s
00:11:54.672  user	0m52.476s
00:11:54.672  sys	0m1.973s
00:11:54.672   18:56:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:54.672   18:56:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:54.672  ************************************
00:11:54.672  END TEST nvmf_filesystem_no_in_capsule
00:11:54.672  ************************************
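Between the xfs test and this summary the script tears the variant down: delete the test partition under flock, disconnect the host controller, wait for the serial to disappear, remove the subsystem over RPC, and stop the target. A sketch of that teardown, assuming the rpc.py path from this repo checkout and the target PID passed in as an argument:

#!/usr/bin/env bash
# Sketch of the teardown traced above (target/filesystem.sh lines 91-101).
set -euo pipefail

nvmfpid=$1                                         # PID of the running nvmf_tgt
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1     # drop the test partition
sync
nvme disconnect -n nqn.2016-06.io.spdk:cnode1      # detach the host-side controller

# Wait until no block device with the test serial remains
# (waitforserial_disconnect in common/autotest_common.sh).
for _ in $(seq 1 15); do
    lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME || break
    sleep 1
done

"$rpc" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
kill "$nvmfpid"
while kill -0 "$nvmfpid" 2>/dev/null; do sleep 0.2; done   # let the target exit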
00:11:54.672   18:56:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096
00:11:54.672   18:56:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:11:54.672   18:56:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:54.672   18:56:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x
00:11:54.672  ************************************
00:11:54.672  START TEST nvmf_filesystem_in_capsule
00:11:54.672  ************************************
00:11:54.672   18:56:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096
00:11:54.672   18:56:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096
00:11:54.672   18:56:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF
00:11:54.672   18:56:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:11:54.672   18:56:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable
00:11:54.672   18:56:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:54.672   18:56:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=86471
00:11:54.672   18:56:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 86471
00:11:54.672   18:56:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:11:54.672   18:56:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 86471 ']'
00:11:54.672   18:56:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:54.672   18:56:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:54.672  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:54.672   18:56:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:54.672   18:56:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:54.672   18:56:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:54.672  [2024-12-13 18:56:26.351932] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:11:54.672  [2024-12-13 18:56:26.352040] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:54.931  [2024-12-13 18:56:26.494607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:11:54.931  [2024-12-13 18:56:26.536555] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:11:54.931  [2024-12-13 18:56:26.536653] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:11:54.931  [2024-12-13 18:56:26.536664] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:11:54.931  [2024-12-13 18:56:26.536672] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:11:54.931  [2024-12-13 18:56:26.536679] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:11:54.931  [2024-12-13 18:56:26.537921] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:11:54.931  [2024-12-13 18:56:26.538046] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:11:54.931  [2024-12-13 18:56:26.538095] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:11:54.931  [2024-12-13 18:56:26.538098] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:11:55.868   18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:55.868   18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0
00:11:55.868   18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:11:55.868   18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable
00:11:55.868   18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
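nvmfappstart above boots a fresh nvmf_tgt (pid 86471 here) inside the dedicated network namespace and blocks in waitforlisten until the RPC socket is usable. A rough sketch of that startup, with a socket-existence poll standing in for the real waitforlisten probe:

#!/usr/bin/env bash
# Sketch of nvmfappstart as traced above; the real helper probes the socket
# with an RPC call rather than just checking that the file exists.
set -euo pipefail

tgt=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt
sock=/var/tmp/spdk.sock

ip netns exec nvmf_tgt_ns_spdk "$tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# Poll for the UNIX-domain RPC socket (max_retries=100 in the trace above).
for _ in $(seq 1 100); do
    [ -S "$sock" ] && break
    sleep 0.1
done
echo "nvmf_tgt up, pid $nvmfpid"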
00:11:55.868   18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:11:55.868   18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1
00:11:55.868   18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096
00:11:55.868   18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:55.868   18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:55.868  [2024-12-13 18:56:27.393081] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:11:55.868   18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:55.868   18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1
00:11:55.868   18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:55.868   18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:55.868  Malloc1
00:11:55.868   18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:55.868   18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:11:55.868   18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:55.868   18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:55.868   18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:55.868   18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:11:55.868   18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:55.868   18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:55.868   18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:55.868   18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:11:55.868   18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:55.868   18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:55.868  [2024-12-13 18:56:27.568487] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:11:55.868   18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
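With the listener up, the target side of the in-capsule variant is fully configured by the five RPCs traced above. The same sequence issued directly with rpc.py would look like this (paths and names as in this run; -c 4096 is the in-capsule data size this variant exercises):

#!/usr/bin/env bash
# Sketch of the target-side RPC sequence traced above (filesystem.sh lines 52-56).
set -euo pipefail

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # talks to /var/tmp/spdk.sock

# TCP transport with an 8 KiB I/O unit and 4096 bytes of in-capsule data.
"$rpc" nvmf_create_transport -t tcp -o -u 8192 -c 4096

# 512 MiB malloc bdev with 512-byte blocks (matches the num_blocks=1048576
# dumped just below).
"$rpc" bdev_malloc_create 512 512 -b Malloc1

"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420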
00:11:55.868    18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1
00:11:55.868    18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1
00:11:55.868    18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info
00:11:55.868    18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs
00:11:55.868    18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb
00:11:55.868     18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1
00:11:55.868     18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:55.869     18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:55.869     18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:55.869    18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[
00:11:55.869  {
00:11:55.869  "aliases": [
00:11:55.869  "56b4fa83-6faf-4bdd-b385-a2dc9583c72a"
00:11:55.869  ],
00:11:55.869  "assigned_rate_limits": {
00:11:55.869  "r_mbytes_per_sec": 0,
00:11:55.869  "rw_ios_per_sec": 0,
00:11:55.869  "rw_mbytes_per_sec": 0,
00:11:55.869  "w_mbytes_per_sec": 0
00:11:55.869  },
00:11:55.869  "block_size": 512,
00:11:55.869  "claim_type": "exclusive_write",
00:11:55.869  "claimed": true,
00:11:55.869  "driver_specific": {},
00:11:55.869  "memory_domains": [
00:11:55.869  {
00:11:55.869  "dma_device_id": "system",
00:11:55.869  "dma_device_type": 1
00:11:55.869  },
00:11:55.869  {
00:11:55.869  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:55.869  "dma_device_type": 2
00:11:55.869  }
00:11:55.869  ],
00:11:55.869  "name": "Malloc1",
00:11:55.869  "num_blocks": 1048576,
00:11:55.869  "product_name": "Malloc disk",
00:11:55.869  "supported_io_types": {
00:11:55.869  "abort": true,
00:11:55.869  "compare": false,
00:11:55.869  "compare_and_write": false,
00:11:55.869  "copy": true,
00:11:55.869  "flush": true,
00:11:55.869  "get_zone_info": false,
00:11:55.869  "nvme_admin": false,
00:11:55.869  "nvme_io": false,
00:11:55.869  "nvme_io_md": false,
00:11:55.869  "nvme_iov_md": false,
00:11:55.869  "read": true,
00:11:55.869  "reset": true,
00:11:55.869  "seek_data": false,
00:11:55.869  "seek_hole": false,
00:11:55.869  "unmap": true,
00:11:55.869  "write": true,
00:11:55.869  "write_zeroes": true,
00:11:55.869  "zcopy": true,
00:11:55.869  "zone_append": false,
00:11:55.869  "zone_management": false
00:11:55.869  },
00:11:55.869  "uuid": "56b4fa83-6faf-4bdd-b385-a2dc9583c72a",
00:11:55.869  "zoned": false
00:11:55.869  }
00:11:55.869  ]'
00:11:55.869     18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:11:55.869    18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512
00:11:55.869     18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:11:55.869    18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576
00:11:55.869    18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512
00:11:55.869    18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512
00:11:56.128   18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912
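get_bdev_size above pulls block_size and num_blocks out of the bdev dump with jq and reports the size in MiB; filesystem.sh then scales that back to bytes so it can later be compared with what the host sees. The arithmetic, sketched:

#!/usr/bin/env bash
# Sketch of the size cross-check traced above.
set -euo pipefail

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

bdev_info=$("$rpc" bdev_get_bdevs -b Malloc1)
bs=$(jq '.[] .block_size' <<< "$bdev_info")       # 512
nb=$(jq '.[] .num_blocks' <<< "$bdev_info")       # 1048576
malloc_size=$(( bs * nb ))                        # 536870912 bytes = 512 MiB

# /sys/block/<dev>/size is always counted in 512-byte sectors.
nvme_size=$(( $(cat /sys/block/nvme0n1/size) * 512 ))
(( nvme_size == malloc_size )) && echo "sizes match: ${malloc_size} bytes"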
00:11:56.128   18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid=8f716199-a3ae-4f70-9a3a-0556e5b7497a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
00:11:56.128   18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME
00:11:56.128   18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0
00:11:56.128   18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:11:56.128   18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:11:56.128   18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2
00:11:58.663   18:56:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:11:58.663    18:56:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:11:58.663    18:56:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:11:58.663   18:56:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:11:58.663   18:56:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:11:58.663   18:56:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0
00:11:58.663    18:56:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)'
00:11:58.663    18:56:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL
00:11:58.663   18:56:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1
00:11:58.663    18:56:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1
00:11:58.663    18:56:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1
00:11:58.663    18:56:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:11:58.663    18:56:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912
00:11:58.663   18:56:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912
00:11:58.663   18:56:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device
00:11:58.663   18:56:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size ))
00:11:58.663   18:56:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
00:11:58.663   18:56:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe
00:11:58.663   18:56:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1
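The host side then attaches to the subsystem, waits for the namespace to show up under the expected serial, and lays down a single GPT partition for the per-filesystem tests. Sketched with the hostnqn/hostid generated for this run:

#!/usr/bin/env bash
# Sketch of the host-side attach and partitioning traced above.
set -euo pipefail

nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a \
             --hostid=8f716199-a3ae-4f70-9a3a-0556e5b7497a \
             -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420

# waitforserial: poll until lsblk lists a device carrying the subsystem serial.
for _ in $(seq 1 15); do
    lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME && break
    sleep 2
done
nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')

mkdir -p /mnt/device
# One GPT partition over the whole namespace; partprobe plus a short sleep gives
# udev time to create /dev/${nvme_name}p1 before mkfs runs.
parted -s "/dev/$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe
sleep 1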
00:11:59.232   18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']'
00:11:59.232   18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1
00:11:59.232   18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:11:59.232   18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:59.232   18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:59.232  ************************************
00:11:59.232  START TEST filesystem_in_capsule_ext4
00:11:59.232  ************************************
00:11:59.232   18:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1
00:11:59.232   18:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4
00:11:59.232   18:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:11:59.232   18:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1
00:11:59.232   18:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4
00:11:59.232   18:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1
00:11:59.232   18:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0
00:11:59.232   18:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force
00:11:59.232   18:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']'
00:11:59.232   18:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F
00:11:59.232   18:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1
00:11:59.232  mke2fs 1.47.0 (5-Feb-2023)
00:11:59.491  Discarding device blocks:      0/522240             done                            
00:11:59.491  Creating filesystem with 522240 1k blocks and 130560 inodes
00:11:59.491  Filesystem UUID: 08283e4f-3c51-4331-b4f0-45e3da596494
00:11:59.491  Superblock backups stored on blocks: 
00:11:59.491  	8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409
00:11:59.491  
00:11:59.491  Allocating group tables:  0/64     done                            
00:11:59.491  Writing inode tables:  0/64     done                            
00:11:59.491  Creating journal (8192 blocks): done
00:11:59.491  Writing superblocks and filesystem accounting information:  0/64     done
00:11:59.491  
00:11:59.491   18:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0
00:11:59.491   18:56:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:12:04.763   18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:12:04.763   18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync
00:12:04.763   18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:12:04.763   18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync
00:12:04.763   18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0
00:12:04.763   18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device
00:12:04.763   18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 86471
00:12:04.763   18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:12:04.763   18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:12:04.763   18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:12:04.763   18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:12:04.763  
00:12:04.763  real	0m5.549s
00:12:04.763  user	0m0.031s
00:12:04.763  sys	0m0.051s
00:12:04.763   18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:04.763   18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x
00:12:04.763  ************************************
00:12:04.763  END TEST filesystem_in_capsule_ext4
00:12:04.763  ************************************
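The ext4 pass above is the only place make_filesystem branches: mkfs.ext4 wants -F to force, while xfs and btrfs take -f. A reduced sketch of that helper (the real one in common/autotest_common.sh also retries mkfs a few times, which is what the local i counter is for):

# Reduced sketch of make_filesystem; force-flag selection only.
make_filesystem() {
    local fstype=$1 dev_name=$2 force

    if [ "$fstype" = ext4 ]; then
        force=-F            # -F pushes mke2fs past its safety prompts
    else
        force=-f            # mkfs.xfs / mkfs.btrfs overwrite an existing fs
    fi

    "mkfs.$fstype" $force "$dev_name"
}

# e.g. make_filesystem ext4 /dev/nvme0n1p1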
00:12:05.022   18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1
00:12:05.022   18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:12:05.022   18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:05.022   18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:12:05.022  ************************************
00:12:05.022  START TEST filesystem_in_capsule_btrfs
00:12:05.022  ************************************
00:12:05.022   18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1
00:12:05.022   18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs
00:12:05.022   18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:12:05.022   18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1
00:12:05.022   18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs
00:12:05.022   18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1
00:12:05.022   18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0
00:12:05.022   18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force
00:12:05.022   18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']'
00:12:05.022   18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f
00:12:05.022   18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1
00:12:05.022  btrfs-progs v6.8.1
00:12:05.022  See https://btrfs.readthedocs.io for more information.
00:12:05.022  
00:12:05.022  Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ...
00:12:05.022  NOTE: several default settings have changed in version 5.15, please make sure
00:12:05.022        this does not affect your deployments:
00:12:05.022        - DUP for metadata (-m dup)
00:12:05.022        - enabled no-holes (-O no-holes)
00:12:05.022        - enabled free-space-tree (-R free-space-tree)
00:12:05.022  
00:12:05.022  Label:              (null)
00:12:05.022  UUID:               e0be931e-61b0-4964-83f6-59c5884f89ba
00:12:05.022  Node size:          16384
00:12:05.022  Sector size:        4096	(CPU page size: 4096)
00:12:05.022  Filesystem size:    510.00MiB
00:12:05.022  Block group profiles:
00:12:05.022    Data:             single            8.00MiB
00:12:05.022    Metadata:         DUP              32.00MiB
00:12:05.022    System:           DUP               8.00MiB
00:12:05.022  SSD detected:       yes
00:12:05.022  Zoned device:       no
00:12:05.022  Features:           extref, skinny-metadata, no-holes, free-space-tree
00:12:05.022  Checksum:           crc32c
00:12:05.022  Number of devices:  1
00:12:05.022  Devices:
00:12:05.022     ID        SIZE  PATH          
00:12:05.022      1   510.00MiB  /dev/nvme0n1p1
00:12:05.022  
00:12:05.022   18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0
00:12:05.022   18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:12:05.022   18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:12:05.022   18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync
00:12:05.023   18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:12:05.023   18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync
00:12:05.023   18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0
00:12:05.023   18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device
00:12:05.282   18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 86471
00:12:05.282   18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:12:05.282   18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:12:05.282   18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:12:05.282   18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:12:05.282  
00:12:05.282  real	0m0.257s
00:12:05.282  user	0m0.023s
00:12:05.282  sys	0m0.058s
00:12:05.282   18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:05.282   18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x
00:12:05.282  ************************************
00:12:05.282  END TEST filesystem_in_capsule_btrfs
00:12:05.282  ************************************
00:12:05.282   18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1
00:12:05.282   18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:12:05.282   18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:05.282   18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:12:05.282  ************************************
00:12:05.282  START TEST filesystem_in_capsule_xfs
00:12:05.282  ************************************
00:12:05.282   18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1
00:12:05.282   18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs
00:12:05.282   18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:12:05.282   18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1
00:12:05.282   18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs
00:12:05.282   18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1
00:12:05.282   18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0
00:12:05.282   18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force
00:12:05.282   18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']'
00:12:05.282   18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f
00:12:05.282   18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1
00:12:05.282  meta-data=/dev/nvme0n1p1         isize=512    agcount=4, agsize=32640 blks
00:12:05.282           =                       sectsz=512   attr=2, projid32bit=1
00:12:05.282           =                       crc=1        finobt=1, sparse=1, rmapbt=0
00:12:05.282           =                       reflink=1    bigtime=1 inobtcount=1 nrext64=0
00:12:05.282  data     =                       bsize=4096   blocks=130560, imaxpct=25
00:12:05.282           =                       sunit=0      swidth=0 blks
00:12:05.282  naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
00:12:05.282  log      =internal log           bsize=4096   blocks=16384, version=2
00:12:05.282           =                       sectsz=512   sunit=0 blks, lazy-count=1
00:12:05.282  realtime =none                   extsz=4096   blocks=0, rtextents=0
00:12:06.218  Discarding blocks...Done.
00:12:06.218   18:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0
00:12:06.218   18:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:12:08.121   18:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:12:08.121   18:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync
00:12:08.121   18:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:12:08.121   18:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync
00:12:08.121   18:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0
00:12:08.121   18:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device
00:12:08.121   18:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 86471
00:12:08.121   18:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:12:08.121   18:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:12:08.121   18:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:12:08.121   18:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:12:08.121  
00:12:08.121  real	0m2.664s
00:12:08.121  user	0m0.021s
00:12:08.121  sys	0m0.056s
00:12:08.121   18:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:08.121   18:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x
00:12:08.122  ************************************
00:12:08.122  END TEST filesystem_in_capsule_xfs
00:12:08.122  ************************************
00:12:08.122   18:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
00:12:08.122   18:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync
00:12:08.122   18:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:12:08.122  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:08.122   18:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:12:08.122   18:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0
00:12:08.122   18:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:12:08.122   18:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:08.122   18:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:12:08.122   18:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:08.122   18:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0
00:12:08.122   18:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:08.122   18:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:08.122   18:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:12:08.122   18:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:08.122   18:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT
00:12:08.122   18:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 86471
00:12:08.122   18:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 86471 ']'
00:12:08.122   18:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 86471
00:12:08.122    18:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname
00:12:08.122   18:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:12:08.122    18:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86471
00:12:08.122   18:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:12:08.122   18:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:12:08.122  killing process with pid 86471
00:12:08.122   18:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86471'
00:12:08.122   18:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 86471
00:12:08.122   18:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 86471
00:12:08.381   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid=
00:12:08.381  
00:12:08.381  real	0m13.891s
00:12:08.381  user	0m53.400s
00:12:08.381  sys	0m1.939s
00:12:08.381   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:08.381   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:12:08.381  ************************************
00:12:08.381  END TEST nvmf_filesystem_in_capsule
00:12:08.381  ************************************
00:12:08.640   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini
00:12:08.640   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:12:08.640   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync
00:12:08.640   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:12:08.640   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e
00:12:08.640   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:12:08.640   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:12:08.640  rmmod nvme_tcp
00:12:08.640  rmmod nvme_fabrics
00:12:08.640  rmmod nvme_keyring
00:12:08.640   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:12:08.640   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e
00:12:08.640   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0
00:12:08.640   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']'
00:12:08.640   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:12:08.640   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:12:08.640   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:12:08.640   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr
00:12:08.640   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save
00:12:08.640   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:12:08.640   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore
00:12:08.640   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:12:08.640   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:12:08.640   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:12:08.640   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:12:08.640   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:12:08.640   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:12:08.640   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:12:08.640   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:12:08.640   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:12:08.640   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:12:08.640   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:12:08.640   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:12:08.640   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:12:08.899   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:12:08.899   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:12:08.899   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@246 -- # remove_spdk_ns
00:12:08.899   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:08.899   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:12:08.899    18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:08.899   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@300 -- # return 0
00:12:08.899  
00:12:08.899  real	0m28.857s
00:12:08.899  user	1m46.301s
00:12:08.899  sys	0m4.423s
00:12:08.900   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:08.900   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x
00:12:08.900  ************************************
00:12:08.900  END TEST nvmf_filesystem
00:12:08.900  ************************************
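nvmftestfini above unloads the host NVMe/TCP modules, strips the SPDK_NVMF iptables rules, and dismantles the veth bridge plus the target network namespace built for the run. A sketch of that cleanup with the fixed interface and namespace names from nvmf/common.sh (deleting the namespace at the end stands in for _remove_spdk_ns):

#!/usr/bin/env bash
# Sketch of the nvmf_tcp_fini / nvmf_veth_fini teardown traced above.

modprobe -v -r nvme-tcp     || true
modprobe -v -r nvme-fabrics || true

# Drop any SPDK_NVMF iptables rules added for the test.
iptables-save | grep -v SPDK_NVMF | iptables-restore

for link in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$link" nomaster 2>/dev/null || true
    ip link set "$link" down     2>/dev/null || true
done

ip link delete nvmf_br type bridge 2>/dev/null || true
ip link delete nvmf_init_if        2>/dev/null || true
ip link delete nvmf_init_if2       2>/dev/null || true

# The target-side veth ends live inside the dedicated namespace.
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if  2>/dev/null || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 2>/dev/null || true
ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true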
00:12:08.900   18:56:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp
00:12:08.900   18:56:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:12:08.900   18:56:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:08.900   18:56:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:12:08.900  ************************************
00:12:08.900  START TEST nvmf_target_discovery
00:12:08.900  ************************************
00:12:08.900   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp
00:12:08.900  * Looking for test storage...
00:12:08.900  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:12:08.900    18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:12:08.900     18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version
00:12:08.900     18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:12:09.160    18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:12:09.160    18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:12:09.160    18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l
00:12:09.160    18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l
00:12:09.160    18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-:
00:12:09.160    18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1
00:12:09.160    18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-:
00:12:09.160    18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2
00:12:09.160    18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<'
00:12:09.160    18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2
00:12:09.160    18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1
00:12:09.160    18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:12:09.160    18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in
00:12:09.160    18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1
00:12:09.160    18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 ))
00:12:09.160    18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:12:09.160     18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1
00:12:09.160     18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1
00:12:09.160     18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:09.160     18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1
00:12:09.160    18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1
00:12:09.160     18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2
00:12:09.160     18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2
00:12:09.160     18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:12:09.160     18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2
00:12:09.160    18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2
00:12:09.160    18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:12:09.160    18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:12:09.160    18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0
00:12:09.160    18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:12:09.160    18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:12:09.160  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:09.160  		--rc genhtml_branch_coverage=1
00:12:09.160  		--rc genhtml_function_coverage=1
00:12:09.160  		--rc genhtml_legend=1
00:12:09.160  		--rc geninfo_all_blocks=1
00:12:09.160  		--rc geninfo_unexecuted_blocks=1
00:12:09.160  		
00:12:09.160  		'
00:12:09.160    18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:12:09.160  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:09.160  		--rc genhtml_branch_coverage=1
00:12:09.160  		--rc genhtml_function_coverage=1
00:12:09.160  		--rc genhtml_legend=1
00:12:09.160  		--rc geninfo_all_blocks=1
00:12:09.160  		--rc geninfo_unexecuted_blocks=1
00:12:09.160  		
00:12:09.160  		'
00:12:09.160    18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:12:09.160  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:09.160  		--rc genhtml_branch_coverage=1
00:12:09.160  		--rc genhtml_function_coverage=1
00:12:09.160  		--rc genhtml_legend=1
00:12:09.160  		--rc geninfo_all_blocks=1
00:12:09.160  		--rc geninfo_unexecuted_blocks=1
00:12:09.160  		
00:12:09.160  		'
00:12:09.160    18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:12:09.160  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:09.160  		--rc genhtml_branch_coverage=1
00:12:09.160  		--rc genhtml_function_coverage=1
00:12:09.160  		--rc genhtml_legend=1
00:12:09.160  		--rc geninfo_all_blocks=1
00:12:09.160  		--rc geninfo_unexecuted_blocks=1
00:12:09.160  		
00:12:09.160  		'
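The version check traced above (scripts/common.sh cmp_versions invoked via `lt 1.15 2`) decides whether the installed lcov is older than 2.x before the branch/function coverage flags are exported. A minimal, hedged sketch of that split-and-compare idea; the helper name ver_lt is hypothetical and this is not the repo's cmp_versions verbatim:

ver_lt() {   # exit 0 when version $1 is strictly older than $2
    local -a a b; local i len
    IFS='.-:' read -ra a <<< "$1"
    IFS='.-:' read -ra b <<< "$2"
    len=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < len; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1    # equal versions are not "less than"
}
if ver_lt "$(lcov --version | awk '{print $NF}')" 2; then
    LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi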
00:12:09.160   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:12:09.160     18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s
00:12:09.160    18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:12:09.160    18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:12:09.160    18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:12:09.160    18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:12:09.160    18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:12:09.160    18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:12:09.160    18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:12:09.160    18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:12:09.160    18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:12:09.160     18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:12:09.160    18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:12:09.160    18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:12:09.160    18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:12:09.160    18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:12:09.160    18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:12:09.160    18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:12:09.160    18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:12:09.160     18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob
00:12:09.160     18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:12:09.160     18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:12:09.160     18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:12:09.160      18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:09.160      18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:09.160      18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:09.160      18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH
00:12:09.160      18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:09.160    18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0
00:12:09.160    18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:12:09.160    18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:12:09.160    18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:12:09.160    18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:12:09.160    18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:12:09.160    18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:12:09.160  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:12:09.160    18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:12:09.160    18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:12:09.160    18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0
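build_nvmf_app_args above only appends `-i $NVMF_APP_SHM_ID -e 0xFFFF`; the remaining guards (@25, @33, @37, @39) are all skipped in this run, and the `line 33: [: : integer expression expected` message only means that guard's variable is empty, so bash treats the test as false and moves on. A hedged sketch of how the final target command line comes together; the initial NVMF_APP value shown here is an assumption, while the binary path, netns prefix and `-m 0xF` mask are taken from the launch further down:

NVMF_APP=(/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt)        # assumed initial value
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)                       # shm id 0, all trace groups
NVMF_TARGET_NS_CMD=(ip netns exec nvmf_tgt_ns_spdk)
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")            # prefixed once the netns exists (@227)
"${NVMF_APP[@]}" -m 0xF &                                         # -> ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF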
00:12:09.160   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400
00:12:09.160   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512
00:12:09.161   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430
00:12:09.161   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme
00:12:09.161   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit
00:12:09.161   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:12:09.161   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:12:09.161   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs
00:12:09.161   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no
00:12:09.161   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns
00:12:09.161   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:09.161   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:12:09.161    18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:09.161   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:12:09.161   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:12:09.161   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:12:09.161   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:12:09.161   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:12:09.161   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init
00:12:09.161   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:12:09.161   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:12:09.161   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:12:09.161   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:12:09.161   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:12:09.161   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:12:09.161   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:12:09.161   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:12:09.161   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:12:09.161   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:12:09.161   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:12:09.161   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:12:09.161   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:12:09.161   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:12:09.161   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:12:09.161   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:12:09.161   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:12:09.161  Cannot find device "nvmf_init_br"
00:12:09.161   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@162 -- # true
00:12:09.161   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:12:09.161  Cannot find device "nvmf_init_br2"
00:12:09.161   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@163 -- # true
00:12:09.161   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:12:09.161  Cannot find device "nvmf_tgt_br"
00:12:09.161   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@164 -- # true
00:12:09.161   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:12:09.161  Cannot find device "nvmf_tgt_br2"
00:12:09.161   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@165 -- # true
00:12:09.161   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:12:09.161  Cannot find device "nvmf_init_br"
00:12:09.161   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@166 -- # true
00:12:09.161   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:12:09.161  Cannot find device "nvmf_init_br2"
00:12:09.161   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@167 -- # true
00:12:09.161   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:12:09.161  Cannot find device "nvmf_tgt_br"
00:12:09.161   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@168 -- # true
00:12:09.161   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:12:09.161  Cannot find device "nvmf_tgt_br2"
00:12:09.161   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@169 -- # true
00:12:09.161   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:12:09.161  Cannot find device "nvmf_br"
00:12:09.161   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@170 -- # true
00:12:09.161   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:12:09.161  Cannot find device "nvmf_init_if"
00:12:09.161   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@171 -- # true
00:12:09.161   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:12:09.161  Cannot find device "nvmf_init_if2"
00:12:09.161   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@172 -- # true
00:12:09.161   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:12:09.161  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:12:09.161   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@173 -- # true
00:12:09.161   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:12:09.161  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:12:09.161   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@174 -- # true
00:12:09.161   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:12:09.161   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:12:09.420   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:12:09.420   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:12:09.420   18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:12:09.420   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:12:09.420   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:12:09.420   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:12:09.420   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:12:09.420   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:12:09.420   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:12:09.420   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:12:09.420   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:12:09.420   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:12:09.420   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:12:09.420   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:12:09.420   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:12:09.420   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:12:09.420   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:12:09.420   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:12:09.420   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:12:09.420   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:12:09.420   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:12:09.420   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:12:09.420   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:12:09.420   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:12:09.420   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:12:09.420   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:12:09.420   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:12:09.420   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:12:09.420   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:12:09.420   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
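Steps @177 through @219 build the virtual test network: one namespace for the target, two veth pairs per side, a bridge joining all the *_br peer ends, and iptables ACCEPT rules tagged SPDK_NVMF so they can be removed selectively at teardown. A condensed, hedged restatement of the same topology (address assignment, link-up steps and the full rule comment strings are as in the trace above, abbreviated here):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br     # initiator, 10.0.0.1/24
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2    # initiator, 10.0.0.2/24
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br      # target,    10.0.0.3/24 (moved into the netns)
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2     # target,    10.0.0.4/24 (moved into the netns)
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip link add nvmf_br type bridge && ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br                           # every peer end hangs off the one bridge
done
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF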
00:12:09.421   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:12:09.421  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:12:09.421  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms
00:12:09.421  
00:12:09.421  --- 10.0.0.3 ping statistics ---
00:12:09.421  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:09.421  rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms
00:12:09.421   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:12:09.679  PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:12:09.679  64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.068 ms
00:12:09.679  
00:12:09.679  --- 10.0.0.4 ping statistics ---
00:12:09.679  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:09.679  rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms
00:12:09.679   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:12:09.680  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:12:09.680  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms
00:12:09.680  
00:12:09.680  --- 10.0.0.1 ping statistics ---
00:12:09.680  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:09.680  rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms
00:12:09.680   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:12:09.680  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:12:09.680  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms
00:12:09.680  
00:12:09.680  --- 10.0.0.2 ping statistics ---
00:12:09.680  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:09.680  rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms
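The four pings above confirm reachability in both directions across the bridge before the target starts: the host-side initiator addresses reach the namespaced target addresses and vice versa. The same check condensed into two loops; the -W 1 timeout is an addition, not part of the original script:

for dst in 10.0.0.3 10.0.0.4; do ping -c 1 -W 1 "$dst" >/dev/null && echo "init -> $dst ok"; done
for dst in 10.0.0.1 10.0.0.2; do
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 -W 1 "$dst" >/dev/null && echo "tgt  -> $dst ok"
done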
00:12:09.680   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:12:09.680   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@461 -- # return 0
00:12:09.680   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:12:09.680   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:12:09.680   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:12:09.680   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:12:09.680   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:12:09.680   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:12:09.680   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:12:09.680   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF
00:12:09.680   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:12:09.680   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable
00:12:09.680   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:09.680  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:09.680   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=87061
00:12:09.680   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 87061
00:12:09.680   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 87061 ']'
00:12:09.680   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:09.680   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:12:09.680   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100
00:12:09.680   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:09.680   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:12:09.680   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:09.680  [2024-12-13 18:56:41.352739] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:12:09.680  [2024-12-13 18:56:41.352842] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:12:09.939  [2024-12-13 18:56:41.510546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:12:09.939  [2024-12-13 18:56:41.549928] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:12:09.939  [2024-12-13 18:56:41.550339] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:12:09.939  [2024-12-13 18:56:41.550513] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:12:09.939  [2024-12-13 18:56:41.550746] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:12:09.939  [2024-12-13 18:56:41.550890] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:12:09.939  [2024-12-13 18:56:41.552299] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:12:09.939  [2024-12-13 18:56:41.552446] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:12:09.939  [2024-12-13 18:56:41.552519] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:12:09.939  [2024-12-13 18:56:41.552521] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:12:09.939   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:09.939   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0
00:12:09.939   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:12:09.939   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable
00:12:09.939   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:09.939   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
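nvmfappstart launches nvmf_tgt inside the namespace and then blocks in waitforlisten until the RPC socket answers; only then is the shutdown trap above swapped in. A rough, hedged equivalent that polls the default /var/tmp/spdk.sock with rpc_get_methods rather than using the repo's actual waitforlisten loop:

ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.2      # keep retrying until the app is listening on the RPC socket
done
echo "nvmf_tgt (pid $nvmfpid) is ready"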
00:12:09.939   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:12:09.939   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:09.939   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:09.939  [2024-12-13 18:56:41.739271] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:12:09.939   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:09.939    18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4
00:12:09.939   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4)
00:12:09.939   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512
00:12:09.939   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:09.939   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:10.199  Null1
00:12:10.199   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:10.199   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:12:10.199   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:10.199   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:10.199   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:10.199   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1
00:12:10.199   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:10.199   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:10.199   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:10.199   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:12:10.199   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:10.199   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:10.199  [2024-12-13 18:56:41.784025] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:12:10.199   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:10.199   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4)
00:12:10.199   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512
00:12:10.199   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:10.199   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:10.199  Null2
00:12:10.199   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:10.199   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
00:12:10.199   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:10.199   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:10.199   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:10.199   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2
00:12:10.199   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:10.199   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:10.199   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:10.199   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420
00:12:10.199   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:10.199   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:10.199   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:10.199   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4)
00:12:10.199   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512
00:12:10.199   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:10.199   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:10.199  Null3
00:12:10.199   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:10.199   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003
00:12:10.199   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:10.199   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:10.199   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:10.199   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3
00:12:10.199   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:10.199   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:10.199   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:10.199   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420
00:12:10.199   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:10.199   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:10.199   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:10.199   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4)
00:12:10.199   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512
00:12:10.199   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:10.199   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:10.199  Null4
00:12:10.199   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:10.199   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004
00:12:10.199   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:10.199   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:10.199   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:10.199   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4
00:12:10.199   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:10.199   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:10.199   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:10.199   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.3 -s 4420
00:12:10.199   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:10.199   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:10.199   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:10.199   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
00:12:10.199   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:10.199   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:10.199   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:10.199   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.3 -s 4430
00:12:10.199   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:10.200   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:10.200   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
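rpc_cmd in the trace wraps the repo's scripts/rpc.py, so the whole configuration phase above (transport, four null bdevs, four subsystems with listeners, the discovery listener and the 4430 referral) can be restated as direct rpc.py calls. A hedged sketch, with sizes and serial numbers copied from the values above:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
for i in 1 2 3 4; do
    $rpc bdev_null_create "Null$i" 102400 512                          # NULL_BDEV_SIZE / NULL_BLOCK_SIZE
    $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
    $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
    $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.3 -s 4420
done
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420  # expose the discovery subsystem
$rpc nvmf_discovery_add_referral -t tcp -a 10.0.0.3 -s 4430            # becomes discovery log entry 5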
00:12:10.200   18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid=8f716199-a3ae-4f70-9a3a-0556e5b7497a -t tcp -a 10.0.0.3 -s 4420
00:12:10.200  
00:12:10.200  Discovery Log Number of Records 6, Generation counter 6
00:12:10.200  =====Discovery Log Entry 0======
00:12:10.200  trtype:  tcp
00:12:10.200  adrfam:  ipv4
00:12:10.200  subtype: current discovery subsystem
00:12:10.200  treq:    not required
00:12:10.200  portid:  0
00:12:10.200  trsvcid: 4420
00:12:10.200  subnqn:  nqn.2014-08.org.nvmexpress.discovery
00:12:10.200  traddr:  10.0.0.3
00:12:10.200  eflags:  explicit discovery connections, duplicate discovery information
00:12:10.200  sectype: none
00:12:10.200  =====Discovery Log Entry 1======
00:12:10.200  trtype:  tcp
00:12:10.200  adrfam:  ipv4
00:12:10.200  subtype: nvme subsystem
00:12:10.200  treq:    not required
00:12:10.200  portid:  0
00:12:10.200  trsvcid: 4420
00:12:10.200  subnqn:  nqn.2016-06.io.spdk:cnode1
00:12:10.200  traddr:  10.0.0.3
00:12:10.200  eflags:  none
00:12:10.200  sectype: none
00:12:10.200  =====Discovery Log Entry 2======
00:12:10.200  trtype:  tcp
00:12:10.200  adrfam:  ipv4
00:12:10.200  subtype: nvme subsystem
00:12:10.200  treq:    not required
00:12:10.200  portid:  0
00:12:10.200  trsvcid: 4420
00:12:10.200  subnqn:  nqn.2016-06.io.spdk:cnode2
00:12:10.200  traddr:  10.0.0.3
00:12:10.200  eflags:  none
00:12:10.200  sectype: none
00:12:10.200  =====Discovery Log Entry 3======
00:12:10.200  trtype:  tcp
00:12:10.200  adrfam:  ipv4
00:12:10.200  subtype: nvme subsystem
00:12:10.200  treq:    not required
00:12:10.200  portid:  0
00:12:10.200  trsvcid: 4420
00:12:10.200  subnqn:  nqn.2016-06.io.spdk:cnode3
00:12:10.200  traddr:  10.0.0.3
00:12:10.200  eflags:  none
00:12:10.200  sectype: none
00:12:10.200  =====Discovery Log Entry 4======
00:12:10.200  trtype:  tcp
00:12:10.200  adrfam:  ipv4
00:12:10.200  subtype: nvme subsystem
00:12:10.200  treq:    not required
00:12:10.200  portid:  0
00:12:10.200  trsvcid: 4420
00:12:10.200  subnqn:  nqn.2016-06.io.spdk:cnode4
00:12:10.200  traddr:  10.0.0.3
00:12:10.200  eflags:  none
00:12:10.200  sectype: none
00:12:10.200  =====Discovery Log Entry 5======
00:12:10.200  trtype:  tcp
00:12:10.200  adrfam:  ipv4
00:12:10.200  subtype: discovery subsystem referral
00:12:10.200  treq:    not required
00:12:10.200  portid:  0
00:12:10.200  trsvcid: 4430
00:12:10.200  subnqn:  nqn.2014-08.org.nvmexpress.discovery
00:12:10.200  traddr:  10.0.0.3
00:12:10.200  eflags:  none
00:12:10.200  sectype: none
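The six records above are exactly what was configured: the current discovery subsystem on 4420, the four cnode subsystems, and the referral on 4430. A hedged way to re-run the same query from the initiator side and count the records (nvme-cli output layout can differ between versions; NVME_HOSTNQN and NVME_HOSTID are the values generated earlier in this run):

nvme discover -t tcp -a 10.0.0.3 -s 4420 \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" | grep -c '^subnqn:'
# expected: 6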
00:12:10.200  Perform nvmf subsystem discovery via RPC
00:12:10.200   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC'
00:12:10.200   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems
00:12:10.200   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:10.200   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:10.459  [
00:12:10.459  {
00:12:10.459  "allow_any_host": true,
00:12:10.459  "hosts": [],
00:12:10.459  "listen_addresses": [
00:12:10.459  {
00:12:10.459  "adrfam": "IPv4",
00:12:10.459  "traddr": "10.0.0.3",
00:12:10.459  "trsvcid": "4420",
00:12:10.459  "trtype": "TCP"
00:12:10.459  }
00:12:10.459  ],
00:12:10.459  "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:12:10.459  "subtype": "Discovery"
00:12:10.459  },
00:12:10.459  {
00:12:10.459  "allow_any_host": true,
00:12:10.459  "hosts": [],
00:12:10.459  "listen_addresses": [
00:12:10.459  {
00:12:10.459  "adrfam": "IPv4",
00:12:10.459  "traddr": "10.0.0.3",
00:12:10.459  "trsvcid": "4420",
00:12:10.459  "trtype": "TCP"
00:12:10.459  }
00:12:10.459  ],
00:12:10.459  "max_cntlid": 65519,
00:12:10.459  "max_namespaces": 32,
00:12:10.459  "min_cntlid": 1,
00:12:10.459  "model_number": "SPDK bdev Controller",
00:12:10.459  "namespaces": [
00:12:10.459  {
00:12:10.459  "bdev_name": "Null1",
00:12:10.459  "name": "Null1",
00:12:10.459  "nguid": "310A271821734F71B31A96017CA9BF9A",
00:12:10.459  "nsid": 1,
00:12:10.459  "uuid": "310a2718-2173-4f71-b31a-96017ca9bf9a"
00:12:10.459  }
00:12:10.459  ],
00:12:10.459  "nqn": "nqn.2016-06.io.spdk:cnode1",
00:12:10.459  "serial_number": "SPDK00000000000001",
00:12:10.459  "subtype": "NVMe"
00:12:10.459  },
00:12:10.459  {
00:12:10.459  "allow_any_host": true,
00:12:10.459  "hosts": [],
00:12:10.459  "listen_addresses": [
00:12:10.459  {
00:12:10.459  "adrfam": "IPv4",
00:12:10.459  "traddr": "10.0.0.3",
00:12:10.459  "trsvcid": "4420",
00:12:10.459  "trtype": "TCP"
00:12:10.459  }
00:12:10.459  ],
00:12:10.459  "max_cntlid": 65519,
00:12:10.459  "max_namespaces": 32,
00:12:10.459  "min_cntlid": 1,
00:12:10.459  "model_number": "SPDK bdev Controller",
00:12:10.459  "namespaces": [
00:12:10.459  {
00:12:10.459  "bdev_name": "Null2",
00:12:10.459  "name": "Null2",
00:12:10.459  "nguid": "C30A71C59E7B49538E22E6A76384870C",
00:12:10.459  "nsid": 1,
00:12:10.459  "uuid": "c30a71c5-9e7b-4953-8e22-e6a76384870c"
00:12:10.459  }
00:12:10.459  ],
00:12:10.459  "nqn": "nqn.2016-06.io.spdk:cnode2",
00:12:10.459  "serial_number": "SPDK00000000000002",
00:12:10.459  "subtype": "NVMe"
00:12:10.459  },
00:12:10.459  {
00:12:10.459  "allow_any_host": true,
00:12:10.459  "hosts": [],
00:12:10.459  "listen_addresses": [
00:12:10.459  {
00:12:10.459  "adrfam": "IPv4",
00:12:10.459  "traddr": "10.0.0.3",
00:12:10.459  "trsvcid": "4420",
00:12:10.459  "trtype": "TCP"
00:12:10.459  }
00:12:10.459  ],
00:12:10.459  "max_cntlid": 65519,
00:12:10.459  "max_namespaces": 32,
00:12:10.459  "min_cntlid": 1,
00:12:10.459  "model_number": "SPDK bdev Controller",
00:12:10.459  "namespaces": [
00:12:10.459  {
00:12:10.459  "bdev_name": "Null3",
00:12:10.459  "name": "Null3",
00:12:10.459  "nguid": "9AA36D783E8249EEB30ECE0398178DC0",
00:12:10.459  "nsid": 1,
00:12:10.459  "uuid": "9aa36d78-3e82-49ee-b30e-ce0398178dc0"
00:12:10.459  }
00:12:10.459  ],
00:12:10.459  "nqn": "nqn.2016-06.io.spdk:cnode3",
00:12:10.459  "serial_number": "SPDK00000000000003",
00:12:10.459  "subtype": "NVMe"
00:12:10.459  },
00:12:10.459  {
00:12:10.459  "allow_any_host": true,
00:12:10.459  "hosts": [],
00:12:10.459  "listen_addresses": [
00:12:10.459  {
00:12:10.459  "adrfam": "IPv4",
00:12:10.459  "traddr": "10.0.0.3",
00:12:10.459  "trsvcid": "4420",
00:12:10.459  "trtype": "TCP"
00:12:10.460  }
00:12:10.460  ],
00:12:10.460  "max_cntlid": 65519,
00:12:10.460  "max_namespaces": 32,
00:12:10.460  "min_cntlid": 1,
00:12:10.460  "model_number": "SPDK bdev Controller",
00:12:10.460  "namespaces": [
00:12:10.460  {
00:12:10.460  "bdev_name": "Null4",
00:12:10.460  "name": "Null4",
00:12:10.460  "nguid": "A31DE138C8F5442A8F6CC352612F5962",
00:12:10.460  "nsid": 1,
00:12:10.460  "uuid": "a31de138-c8f5-442a-8f6c-c352612f5962"
00:12:10.460  }
00:12:10.460  ],
00:12:10.460  "nqn": "nqn.2016-06.io.spdk:cnode4",
00:12:10.460  "serial_number": "SPDK00000000000004",
00:12:10.460  "subtype": "NVMe"
00:12:10.460  }
00:12:10.460  ]
00:12:10.460   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
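The nvmf_get_subsystems dump above is plain JSON, so it can be narrowed with jq in the same spirit as the bdev_get_bdevs | jq call the test issues later. A hedged example that lists each NVMe subsystem with its backing null bdev:

/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems \
    | jq -r '.[] | select(.subtype == "NVMe") | "\(.nqn) -> \(.namespaces[].bdev_name)"'
# nqn.2016-06.io.spdk:cnode1 -> Null1   (and likewise for cnode2-4)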
00:12:10.460    18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4
00:12:10.460   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:12:10.460   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:10.460   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:10.460   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:10.460   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:10.460   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1
00:12:10.460   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:10.460   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:10.460   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:10.460   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:12:10.460   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:12:10.460   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:10.460   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:10.460   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:10.460   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2
00:12:10.460   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:10.460   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:10.460   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:10.460   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:12:10.460   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3
00:12:10.460   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:10.460   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:10.460   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:10.460   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3
00:12:10.460   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:10.460   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:10.460   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:10.460   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:12:10.460   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4
00:12:10.460   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:10.460   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:10.460   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:10.460   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4
00:12:10.460   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:10.460   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:10.460   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:10.460   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.3 -s 4430
00:12:10.460   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:10.460   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:10.460   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:10.460    18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs
00:12:10.460    18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name'
00:12:10.460    18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:10.460    18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:10.460    18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:10.460   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs=
00:12:10.460   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']'
00:12:10.460   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT
00:12:10.460   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini
00:12:10.460   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup
00:12:10.460   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync
00:12:10.460   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:12:10.460   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e
00:12:10.460   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20}
00:12:10.460   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:12:10.460  rmmod nvme_tcp
00:12:10.460  rmmod nvme_fabrics
00:12:10.460  rmmod nvme_keyring
00:12:10.460   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:12:10.460   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e
00:12:10.460   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0
00:12:10.460   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 87061 ']'
00:12:10.460   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 87061
00:12:10.460   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 87061 ']'
00:12:10.460   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 87061
00:12:10.460    18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname
00:12:10.460   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:12:10.460    18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87061
00:12:10.719  killing process with pid 87061
00:12:10.719   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:12:10.719   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:12:10.719   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87061'
00:12:10.719   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 87061
00:12:10.719   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 87061
00:12:10.719   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:12:10.719   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:12:10.719   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:12:10.719   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr
00:12:10.719   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save
00:12:10.719   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore
00:12:10.719   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:12:10.719   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:12:10.719   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:12:10.719   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:12:10.719   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:12:10.719   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:12:10.719   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:12:10.978   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:12:10.978   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:12:10.978   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:12:10.978   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:12:10.978   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:12:10.978   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:12:10.978   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:12:10.978   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:12:10.978   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:12:10.978   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns
00:12:10.978   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:10.978   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:12:10.978    18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:10.978   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@300 -- # return 0
00:12:10.978  
00:12:10.978  real	0m2.137s
00:12:10.978  user	0m4.079s
00:12:10.978  sys	0m0.703s
00:12:10.978  ************************************
00:12:10.978  END TEST nvmf_target_discovery
00:12:10.978  ************************************
00:12:10.978   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:10.978   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:10.978   18:56:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp
00:12:10.978   18:56:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:12:10.978   18:56:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:10.979   18:56:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:12:10.979  ************************************
00:12:10.979  START TEST nvmf_referrals
00:12:10.979  ************************************
00:12:10.979   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp
00:12:11.238  * Looking for test storage...
00:12:11.238  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:12:11.238    18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:12:11.238     18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version
00:12:11.238     18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:12:11.238    18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:12:11.238    18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:12:11.238    18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l
00:12:11.238    18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l
00:12:11.238    18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-:
00:12:11.238    18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1
00:12:11.238    18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-:
00:12:11.238    18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2
00:12:11.238    18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<'
00:12:11.238    18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2
00:12:11.238    18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1
00:12:11.238    18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:12:11.238    18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in
00:12:11.238    18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1
00:12:11.238    18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 ))
00:12:11.239    18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:12:11.239     18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1
00:12:11.239     18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1
00:12:11.239     18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:11.239     18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1
00:12:11.239    18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1
00:12:11.239     18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2
00:12:11.239     18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2
00:12:11.239     18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:12:11.239     18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2
00:12:11.239    18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2
00:12:11.239    18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:12:11.239    18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:12:11.239    18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0
00:12:11.239    18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:12:11.239    18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:12:11.239  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:11.239  		--rc genhtml_branch_coverage=1
00:12:11.239  		--rc genhtml_function_coverage=1
00:12:11.239  		--rc genhtml_legend=1
00:12:11.239  		--rc geninfo_all_blocks=1
00:12:11.239  		--rc geninfo_unexecuted_blocks=1
00:12:11.239  		
00:12:11.239  		'
00:12:11.239    18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:12:11.239  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:11.239  		--rc genhtml_branch_coverage=1
00:12:11.239  		--rc genhtml_function_coverage=1
00:12:11.239  		--rc genhtml_legend=1
00:12:11.239  		--rc geninfo_all_blocks=1
00:12:11.239  		--rc geninfo_unexecuted_blocks=1
00:12:11.239  		
00:12:11.239  		'
00:12:11.239    18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:12:11.239  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:11.239  		--rc genhtml_branch_coverage=1
00:12:11.239  		--rc genhtml_function_coverage=1
00:12:11.239  		--rc genhtml_legend=1
00:12:11.239  		--rc geninfo_all_blocks=1
00:12:11.239  		--rc geninfo_unexecuted_blocks=1
00:12:11.239  		
00:12:11.239  		'
00:12:11.239    18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:12:11.239  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:11.239  		--rc genhtml_branch_coverage=1
00:12:11.239  		--rc genhtml_function_coverage=1
00:12:11.239  		--rc genhtml_legend=1
00:12:11.239  		--rc geninfo_all_blocks=1
00:12:11.239  		--rc geninfo_unexecuted_blocks=1
00:12:11.239  		
00:12:11.239  		'
00:12:11.239   18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:12:11.239     18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s
00:12:11.239    18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:12:11.239    18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:12:11.239    18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:12:11.239    18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:12:11.239    18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:12:11.239    18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:12:11.239    18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:12:11.239    18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:12:11.239    18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:12:11.239     18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:12:11.239    18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:12:11.239    18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:12:11.239    18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:12:11.239    18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:12:11.239    18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:12:11.239    18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:12:11.239    18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:12:11.239     18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob
00:12:11.239     18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:12:11.239     18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:12:11.239     18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:12:11.239      18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:11.239      18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:11.239      18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:11.239      18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH
00:12:11.239      18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:11.239    18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0
00:12:11.239    18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:12:11.239    18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:12:11.239    18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:12:11.239    18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:12:11.239    18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:12:11.239    18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:12:11.239  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:12:11.239    18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:12:11.239    18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:12:11.239    18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0
00:12:11.239   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2
00:12:11.239   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3
00:12:11.239   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4
00:12:11.239   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430
00:12:11.239   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery
00:12:11.239   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1
00:12:11.239   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit
00:12:11.239   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:12:11.239   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:12:11.239   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs
00:12:11.239   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no
00:12:11.239   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns
00:12:11.239   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:11.239   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:12:11.239    18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:11.239   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:12:11.239   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:12:11.239   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:12:11.239   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:12:11.239   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:12:11.239   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@460 -- # nvmf_veth_init
00:12:11.239   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:12:11.239   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:12:11.239   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:12:11.240   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:12:11.240   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:12:11.240   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:12:11.240   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:12:11.240   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:12:11.240   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:12:11.240   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:12:11.240   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:12:11.240   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:12:11.240   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:12:11.240   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:12:11.240   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:12:11.240   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:12:11.240   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:12:11.240  Cannot find device "nvmf_init_br"
00:12:11.240   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@162 -- # true
00:12:11.240   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:12:11.240  Cannot find device "nvmf_init_br2"
00:12:11.240   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@163 -- # true
00:12:11.240   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:12:11.499  Cannot find device "nvmf_tgt_br"
00:12:11.499   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@164 -- # true
00:12:11.499   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:12:11.499  Cannot find device "nvmf_tgt_br2"
00:12:11.499   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@165 -- # true
00:12:11.499   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:12:11.499  Cannot find device "nvmf_init_br"
00:12:11.499   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@166 -- # true
00:12:11.499   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:12:11.499  Cannot find device "nvmf_init_br2"
00:12:11.499   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@167 -- # true
00:12:11.499   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:12:11.499  Cannot find device "nvmf_tgt_br"
00:12:11.499   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@168 -- # true
00:12:11.499   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:12:11.499  Cannot find device "nvmf_tgt_br2"
00:12:11.499   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@169 -- # true
00:12:11.499   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:12:11.499  Cannot find device "nvmf_br"
00:12:11.499   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@170 -- # true
00:12:11.499   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:12:11.499  Cannot find device "nvmf_init_if"
00:12:11.499   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@171 -- # true
00:12:11.499   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:12:11.499  Cannot find device "nvmf_init_if2"
00:12:11.499   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@172 -- # true
00:12:11.499   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:12:11.499  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:12:11.499   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@173 -- # true
00:12:11.499   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:12:11.499  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:12:11.499   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@174 -- # true
00:12:11.499   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:12:11.499   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:12:11.499   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:12:11.499   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:12:11.499   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:12:11.499   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:12:11.499   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:12:11.499   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:12:11.499   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:12:11.499   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:12:11.499   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:12:11.499   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:12:11.499   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:12:11.499   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:12:11.758   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:12:11.758   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:12:11.758   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:12:11.758   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:12:11.758   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:12:11.758   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:12:11.758   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:12:11.758   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:12:11.758   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:12:11.758   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:12:11.758   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:12:11.758   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:12:11.758   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:12:11.758   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:12:11.758   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:12:11.758   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:12:11.758   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:12:11.758   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:12:11.758   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:12:11.758  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:12:11.758  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms
00:12:11.758  
00:12:11.758  --- 10.0.0.3 ping statistics ---
00:12:11.758  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:11.758  rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms
00:12:11.758   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:12:11.758  PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:12:11.758  64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms
00:12:11.758  
00:12:11.758  --- 10.0.0.4 ping statistics ---
00:12:11.758  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:11.758  rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms
00:12:11.758   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:12:11.758  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:12:11.758  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms
00:12:11.758  
00:12:11.758  --- 10.0.0.1 ping statistics ---
00:12:11.758  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:11.758  rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms
00:12:11.759   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:12:11.759  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:12:11.759  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms
00:12:11.759  
00:12:11.759  --- 10.0.0.2 ping statistics ---
00:12:11.759  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:11.759  rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms
00:12:11.759   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:12:11.759   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@461 -- # return 0
00:12:11.759   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:12:11.759   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:12:11.759   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:12:11.759   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:12:11.759   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:12:11.759   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:12:11.759   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:12:11.759   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF
00:12:11.759   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:12:11.759   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable
00:12:11.759   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:11.759  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:11.759   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=87327
00:12:11.759   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 87327
00:12:11.759   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 87327 ']'
00:12:11.759   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:12:11.759   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:11.759   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100
00:12:11.759   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:11.759   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable
00:12:11.759   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:11.759  [2024-12-13 18:56:43.539480] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:12:11.759  [2024-12-13 18:56:43.539832] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:12:12.018  [2024-12-13 18:56:43.697713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:12:12.018  [2024-12-13 18:56:43.738462] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:12:12.018  [2024-12-13 18:56:43.738833] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:12:12.018  [2024-12-13 18:56:43.739094] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:12:12.018  [2024-12-13 18:56:43.739304] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:12:12.018  [2024-12-13 18:56:43.739460] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:12:12.018  [2024-12-13 18:56:43.740824] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:12:12.018  [2024-12-13 18:56:43.740971] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:12:12.018  [2024-12-13 18:56:43.741051] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:12:12.018  [2024-12-13 18:56:43.741053] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:12:12.277   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:12.277   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0
00:12:12.277   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:12:12.277   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable
00:12:12.277   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:12.277   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:12:12.277   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:12:12.277   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:12.277   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:12.277  [2024-12-13 18:56:43.923633] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:12:12.277   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:12.277   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.3 -s 8009 discovery
00:12:12.277   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:12.277   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:12.277  [2024-12-13 18:56:43.936175] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 ***
00:12:12.277   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:12.277   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
00:12:12.277   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:12.277   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:12.277   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:12.277   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430
00:12:12.277   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:12.277   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:12.277   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:12.277   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430
00:12:12.277   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:12.277   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:12.277   18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:12.277    18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals
00:12:12.277    18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:12.277    18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length
00:12:12.277    18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:12.277    18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:12.277   18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 ))
00:12:12.277    18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc
00:12:12.277    18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]]
00:12:12.277     18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr'
00:12:12.277     18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals
00:12:12.277     18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:12.277     18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:12.277     18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort
00:12:12.277     18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:12.277    18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4
00:12:12.277   18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]]
00:12:12.277    18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme
00:12:12.277    18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]]
00:12:12.277    18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]]
00:12:12.277     18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid=8f716199-a3ae-4f70-9a3a-0556e5b7497a -t tcp -a 10.0.0.3 -s 8009 -o json
00:12:12.277     18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort
00:12:12.277     18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
00:12:12.536    18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4
00:12:12.536   18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]]
00:12:12.536   18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430
00:12:12.536   18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:12.536   18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:12.536   18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:12.536   18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430
00:12:12.536   18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:12.536   18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:12.536   18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:12.536   18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430
00:12:12.536   18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:12.536   18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:12.536   18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:12.536    18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals
00:12:12.536    18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length
00:12:12.536    18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:12.536    18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:12.536    18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:12.536   18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 ))
00:12:12.537    18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme
00:12:12.537    18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]]
00:12:12.537    18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]]
00:12:12.537     18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid=8f716199-a3ae-4f70-9a3a-0556e5b7497a -t tcp -a 10.0.0.3 -s 8009 -o json
00:12:12.537     18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
00:12:12.537     18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort
00:12:12.796    18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo
00:12:12.796   18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]]
00:12:12.796   18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery
00:12:12.796   18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:12.796   18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:12.796   18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:12.796   18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
00:12:12.796   18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:12.796   18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:12.796   18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:12.796    18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc
00:12:12.796    18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]]
00:12:12.796     18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals
00:12:12.796     18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:12.796     18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:12.796     18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr'
00:12:12.796     18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort
00:12:12.796     18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:12.796    18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2
00:12:12.796   18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]]
00:12:12.796    18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme
00:12:12.796    18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]]
00:12:12.796    18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]]
00:12:12.796     18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
00:12:12.796     18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid=8f716199-a3ae-4f70-9a3a-0556e5b7497a -t tcp -a 10.0.0.3 -s 8009 -o json
00:12:12.796     18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort
00:12:13.055    18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2
00:12:13.055   18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]]
00:12:13.055    18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem'
00:12:13.055    18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem'
00:12:13.055    18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn
00:12:13.055    18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid=8f716199-a3ae-4f70-9a3a-0556e5b7497a -t tcp -a 10.0.0.3 -s 8009 -o json
00:12:13.055    18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")'
00:12:13.055   18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]]
00:12:13.055    18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral'
00:12:13.055    18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn
00:12:13.055    18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral'
00:12:13.055    18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid=8f716199-a3ae-4f70-9a3a-0556e5b7497a -t tcp -a 10.0.0.3 -s 8009 -o json
00:12:13.055    18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")'
00:12:13.055   18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]]
00:12:13.055   18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
00:12:13.055   18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:13.055   18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:13.055   18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:13.055    18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc
00:12:13.055    18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]]
00:12:13.055     18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals
00:12:13.055     18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:13.055     18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr'
00:12:13.055     18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:13.055     18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort
00:12:13.314     18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:13.314    18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2
00:12:13.314   18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]]
00:12:13.314    18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme
00:12:13.314    18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]]
00:12:13.314    18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]]
00:12:13.314     18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid=8f716199-a3ae-4f70-9a3a-0556e5b7497a -t tcp -a 10.0.0.3 -s 8009 -o json
00:12:13.314     18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
00:12:13.314     18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort
00:12:13.314    18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2
00:12:13.314   18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]]
00:12:13.314    18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem'
00:12:13.314    18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem'
00:12:13.314    18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn
00:12:13.314    18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid=8f716199-a3ae-4f70-9a3a-0556e5b7497a -t tcp -a 10.0.0.3 -s 8009 -o json
00:12:13.314    18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")'
00:12:13.572   18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]]
00:12:13.572    18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral'
00:12:13.572    18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral'
00:12:13.572    18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn
00:12:13.572    18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")'
00:12:13.572    18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid=8f716199-a3ae-4f70-9a3a-0556e5b7497a -t tcp -a 10.0.0.3 -s 8009 -o json
00:12:13.572   18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]]
00:12:13.572   18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery
00:12:13.572   18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:13.572   18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:13.572   18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:13.572    18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals
00:12:13.572    18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length
00:12:13.572    18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:13.572    18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:13.572    18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:13.572   18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 ))
00:12:13.572    18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme
00:12:13.572    18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]]
00:12:13.572    18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]]
00:12:13.572     18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid=8f716199-a3ae-4f70-9a3a-0556e5b7497a -t tcp -a 10.0.0.3 -s 8009 -o json
00:12:13.572     18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
00:12:13.572     18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort
00:12:13.831    18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo
00:12:13.831   18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]]
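The two checks above are the harness's pass criteria after removing the last referral: the RPC-side list must be empty and the discovery log page must show no extra entries. As a stand-alone sketch of the same verification (assuming scripts/rpc.py on PATH, nvme-cli and jq installed, and the discovery service on 10.0.0.3:8009 as in this run; not the actual referrals.sh helpers):

    # Referral list should be empty on the RPC side...
    count=$(scripts/rpc.py nvmf_discovery_get_referrals | jq length)
    # ...and the discovery log page should report no non-local entries.
    extra=$(nvme discover -t tcp -a 10.0.0.3 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr')
    [[ "$count" -eq 0 && -z "$extra" ]] && echo "all referrals removed"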
00:12:13.831   18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT
00:12:13.831   18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini
00:12:13.831   18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup
00:12:13.831   18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync
00:12:13.831   18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:12:13.831   18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e
00:12:13.831   18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20}
00:12:13.831   18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:12:13.831  rmmod nvme_tcp
00:12:13.831  rmmod nvme_fabrics
00:12:13.831  rmmod nvme_keyring
00:12:13.831   18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:12:13.831   18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e
00:12:13.831   18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0
00:12:13.831   18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 87327 ']'
00:12:13.831   18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 87327
00:12:13.831   18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 87327 ']'
00:12:13.831   18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 87327
00:12:13.831    18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname
00:12:13.831   18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:12:13.831    18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87327
00:12:13.831  killing process with pid 87327
00:12:13.831   18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:12:13.831   18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:12:13.831   18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87327'
00:12:13.831   18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 87327
00:12:13.831   18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 87327
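The kill sequence traced above is the harness's generic killprocess helper shutting down the target started for this test. A condensed sketch of the pattern it follows (pid 87327 is this run's value; the real helper in autotest_common.sh also handles sudo-wrapped processes):

    pid=87327
    if [ -n "$pid" ] && kill -0 "$pid" 2>/dev/null; then
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true   # reaping only works if it is our child
    fi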
00:12:14.090   18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:12:14.090   18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:12:14.090   18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:12:14.090   18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr
00:12:14.090   18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save
00:12:14.090   18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:12:14.090   18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore
00:12:14.090   18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:12:14.090   18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:12:14.090   18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:12:14.090   18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:12:14.090   18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:12:14.090   18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:12:14.090   18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:12:14.090   18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:12:14.090   18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:12:14.090   18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:12:14.090   18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:12:14.349   18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:12:14.349   18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:12:14.349   18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:12:14.349   18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:12:14.349   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@246 -- # remove_spdk_ns
00:12:14.349   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:14.349   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:12:14.349    18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:14.349   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@300 -- # return 0
00:12:14.349  
00:12:14.349  real	0m3.252s
00:12:14.349  user	0m9.183s
00:12:14.349  sys	0m0.983s
00:12:14.349  ************************************
00:12:14.349  END TEST nvmf_referrals
00:12:14.349  ************************************
00:12:14.349   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:14.349   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:14.349   18:56:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp
00:12:14.349   18:56:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:12:14.349   18:56:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:14.349   18:56:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:12:14.349  ************************************
00:12:14.349  START TEST nvmf_connect_disconnect
00:12:14.349  ************************************
00:12:14.349   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp
00:12:14.609  * Looking for test storage...
00:12:14.609  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:12:14.609    18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:12:14.609     18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version
00:12:14.609     18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:12:14.609    18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:12:14.609    18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:12:14.609    18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l
00:12:14.609    18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l
00:12:14.609    18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-:
00:12:14.609    18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1
00:12:14.609    18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-:
00:12:14.609    18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2
00:12:14.609    18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<'
00:12:14.609    18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2
00:12:14.609    18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1
00:12:14.609    18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:12:14.609    18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in
00:12:14.609    18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1
00:12:14.609    18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 ))
00:12:14.609    18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:12:14.609     18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1
00:12:14.609     18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1
00:12:14.609     18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:14.609     18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1
00:12:14.609    18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1
00:12:14.609     18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2
00:12:14.609     18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2
00:12:14.609     18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:12:14.609     18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2
00:12:14.609    18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2
00:12:14.610    18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:12:14.610    18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:12:14.610    18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0
00:12:14.610    18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:12:14.610    18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:12:14.610  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:14.610  		--rc genhtml_branch_coverage=1
00:12:14.610  		--rc genhtml_function_coverage=1
00:12:14.610  		--rc genhtml_legend=1
00:12:14.610  		--rc geninfo_all_blocks=1
00:12:14.610  		--rc geninfo_unexecuted_blocks=1
00:12:14.610  		
00:12:14.610  		'
00:12:14.610    18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:12:14.610  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:14.610  		--rc genhtml_branch_coverage=1
00:12:14.610  		--rc genhtml_function_coverage=1
00:12:14.610  		--rc genhtml_legend=1
00:12:14.610  		--rc geninfo_all_blocks=1
00:12:14.610  		--rc geninfo_unexecuted_blocks=1
00:12:14.610  		
00:12:14.610  		'
00:12:14.610    18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:12:14.610  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:14.610  		--rc genhtml_branch_coverage=1
00:12:14.610  		--rc genhtml_function_coverage=1
00:12:14.610  		--rc genhtml_legend=1
00:12:14.610  		--rc geninfo_all_blocks=1
00:12:14.610  		--rc geninfo_unexecuted_blocks=1
00:12:14.610  		
00:12:14.610  		'
00:12:14.610    18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:12:14.610  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:14.610  		--rc genhtml_branch_coverage=1
00:12:14.610  		--rc genhtml_function_coverage=1
00:12:14.610  		--rc genhtml_legend=1
00:12:14.610  		--rc geninfo_all_blocks=1
00:12:14.610  		--rc geninfo_unexecuted_blocks=1
00:12:14.610  		
00:12:14.610  		'
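The scripts/common.sh trace above is a field-by-field version comparison used to decide which lcov option spelling to export. A self-contained sketch of the same idea (this lt is illustrative, not the upstream cmp_versions implementation):

    lt() {
        local ver1 ver2 i
        IFS=.- read -ra ver1 <<< "$1"
        IFS=.- read -ra ver2 <<< "$2"
        for ((i = 0; i < ${#ver1[@]} || i < ${#ver2[@]}; i++)); do
            (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0
            (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    lt 1.15 2 && echo "lcov is pre-2.x: keep the --rc lcov_branch_coverage/lcov_function_coverage spelling"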
00:12:14.610   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:12:14.610     18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s
00:12:14.610    18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:12:14.610    18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:12:14.610    18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:12:14.610    18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:12:14.610    18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:12:14.610    18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:12:14.610    18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:12:14.610    18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:12:14.610    18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:12:14.610     18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:12:14.610    18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:12:14.610    18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:12:14.610    18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:12:14.610    18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:12:14.610    18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:12:14.610    18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:12:14.610    18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:12:14.610     18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob
00:12:14.610     18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:12:14.610     18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:12:14.610     18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:12:14.610      18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:14.610      18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:14.610      18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:14.610      18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH
00:12:14.610      18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:14.610    18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0
00:12:14.610    18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:12:14.610    18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:12:14.610    18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:12:14.610    18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:12:14.610    18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:12:14.610    18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:12:14.610  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:12:14.610    18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:12:14.610    18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:12:14.610    18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0
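The "integer expression expected" message a few lines up is nvmf/common.sh line 33 comparing an unset flag against 1 with -eq. A hedged illustration of the failure and a defensive spelling (FLAG is a stand-in name, not the actual variable the script reads):

    FLAG=""
    [ "$FLAG" -eq 1 ]               # -> "integer expression expected", like the log line
    [ "${FLAG:-0}" -eq 1 ] || echo "flag unset, treated as 0"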
00:12:14.610   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64
00:12:14.610   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:12:14.610   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit
00:12:14.610   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:12:14.610   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:12:14.610   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs
00:12:14.610   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no
00:12:14.610   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns
00:12:14.610   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:14.610   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:12:14.610    18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:14.610   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:12:14.610   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:12:14.610   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:12:14.610   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:12:14.610   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:12:14.610   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@460 -- # nvmf_veth_init
00:12:14.610   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:12:14.610   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:12:14.610   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:12:14.610   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:12:14.610   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:12:14.610   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:12:14.610   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:12:14.610   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:12:14.610   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:12:14.610   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:12:14.610   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:12:14.610   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:12:14.611   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:12:14.611   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:12:14.611   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:12:14.611   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:12:14.611   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:12:14.611  Cannot find device "nvmf_init_br"
00:12:14.611   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # true
00:12:14.611   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:12:14.611  Cannot find device "nvmf_init_br2"
00:12:14.611   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # true
00:12:14.611   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:12:14.611  Cannot find device "nvmf_tgt_br"
00:12:14.611   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@164 -- # true
00:12:14.611   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:12:14.611  Cannot find device "nvmf_tgt_br2"
00:12:14.611   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@165 -- # true
00:12:14.611   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:12:14.611  Cannot find device "nvmf_init_br"
00:12:14.611   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # true
00:12:14.611   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:12:14.611  Cannot find device "nvmf_init_br2"
00:12:14.611   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@167 -- # true
00:12:14.611   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:12:14.611  Cannot find device "nvmf_tgt_br"
00:12:14.611   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@168 -- # true
00:12:14.611   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:12:14.611  Cannot find device "nvmf_tgt_br2"
00:12:14.611   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # true
00:12:14.611   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:12:14.870  Cannot find device "nvmf_br"
00:12:14.870   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # true
00:12:14.870   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:12:14.870  Cannot find device "nvmf_init_if"
00:12:14.870   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # true
00:12:14.870   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:12:14.870  Cannot find device "nvmf_init_if2"
00:12:14.870   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@172 -- # true
00:12:14.870   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:12:14.870  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:12:14.870   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@173 -- # true
00:12:14.870   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:12:14.870  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:12:14.870   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # true
00:12:14.870   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:12:14.870   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:12:14.870   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:12:14.870   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:12:14.870   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:12:14.870   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:12:14.870   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:12:14.870   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:12:14.870   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:12:14.870   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:12:14.870   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:12:14.870   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:12:14.870   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:12:14.870   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:12:14.870   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:12:14.870   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:12:14.870   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:12:14.870   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:12:14.870   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:12:14.870   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:12:14.870   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:12:14.870   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:12:14.870   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:12:14.870   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:12:14.870   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:12:14.870   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:12:14.870   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:12:14.870   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:12:14.870   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:12:14.870   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:12:14.870   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:12:14.870   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
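The block above builds the virtual test network: a namespace for the target, veth pairs bridged together, 10.0.0.0/24 addressing, and iptables ACCEPT rules for port 4420. Reduced to its essentials (one interface pair per side, names and addresses copied from the trace, link-up steps and the second pair omitted):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT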
00:12:15.129   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:12:15.129  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:12:15.129  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms
00:12:15.129  
00:12:15.129  --- 10.0.0.3 ping statistics ---
00:12:15.129  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:15.129  rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms
00:12:15.129   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:12:15.129  PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:12:15.129  64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms
00:12:15.129  
00:12:15.129  --- 10.0.0.4 ping statistics ---
00:12:15.129  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:15.129  rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms
00:12:15.129   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:12:15.129  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:12:15.129  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms
00:12:15.129  
00:12:15.129  --- 10.0.0.1 ping statistics ---
00:12:15.129  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:15.129  rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms
00:12:15.129   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:12:15.129  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:12:15.129  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms
00:12:15.129  
00:12:15.129  --- 10.0.0.2 ping statistics ---
00:12:15.129  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:15.129  rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms
00:12:15.129   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:12:15.129   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@461 -- # return 0
00:12:15.129   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:12:15.129   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:12:15.129   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:12:15.129   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:12:15.129   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:12:15.129   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:12:15.129   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp
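With the bridge up and the four ping checks passing, the initiator side only needs the kernel NVMe/TCP modules before nvme connect can be used, and the transport options recorded above are reused later for nvmf_create_transport. A minimal equivalent of this step:

    modprobe nvme-fabrics             # dependency of nvme-tcp, shown for clarity
    modprobe nvme-tcp
    NVMF_TRANSPORT_OPTS="-t tcp -o"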
00:12:15.129   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF
00:12:15.129   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:12:15.129   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable
00:12:15.129   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:12:15.129   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=87672
00:12:15.129   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:12:15.129   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 87672
00:12:15.129   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 87672 ']'
00:12:15.129   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:15.129   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100
00:12:15.129   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:15.129  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:15.129   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable
00:12:15.129   18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:12:15.129  [2024-12-13 18:56:46.814775] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:12:15.130  [2024-12-13 18:56:46.814874] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:12:15.389  [2024-12-13 18:56:46.963909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:12:15.389  [2024-12-13 18:56:46.997990] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:12:15.389  [2024-12-13 18:56:46.998059] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:12:15.389  [2024-12-13 18:56:46.998070] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:12:15.389  [2024-12-13 18:56:46.998077] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:12:15.389  [2024-12-13 18:56:46.998084] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:12:15.389  [2024-12-13 18:56:46.999242] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:12:15.389  [2024-12-13 18:56:46.999293] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:12:15.389  [2024-12-13 18:56:46.999363] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:12:15.389  [2024-12-13 18:56:46.999368] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:12:15.389   18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:15.389   18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0
00:12:15.389   18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:12:15.389   18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable
00:12:15.389   18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
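nvmfappstart above launches nvmf_tgt inside the namespace and then blocks in waitforlisten until the RPC socket answers. A hedged sketch of that wait (the real helper in autotest_common.sh retries longer and reports errors; rpc_get_methods is just a cheap RPC to poll):

    wait_for_rpc() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1                        # target died
            scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1
    }
    wait_for_rpc 87672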
00:12:15.389   18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:12:15.389   18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
00:12:15.389   18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:15.389   18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:12:15.389  [2024-12-13 18:56:47.172603] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:12:15.389   18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:15.389    18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512
00:12:15.389    18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:15.389    18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:12:15.649    18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:15.649   18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0
00:12:15.649   18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:12:15.649   18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:15.649   18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:12:15.649   18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:15.649   18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:12:15.649   18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:15.649   18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:12:15.649   18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:15.649   18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:12:15.649   18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:15.649   18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:12:15.649  [2024-12-13 18:56:47.244112] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:12:15.649   18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:15.649   18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']'
00:12:15.649   18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100
00:12:15.649   18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8'
00:12:15.649   18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x
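At this point everything needed for the 100 connect/disconnect iterations that follow is configured: a TCP transport, a 64 MiB malloc bdev exported as a namespace of cnode1, and a listener on 10.0.0.3:4420. A compact reconstruction (RPC calls copied from the trace; the loop body is only a sketch of connect_disconnect.sh, with waitforserial standing in for the harness's wait-for-namespace helper):

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
    rpc_cmd bdev_malloc_create 64 512                                  # -> Malloc0
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

    for ((i = 0; i < 100; i++)); do
        nvme connect -i 8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 \
            --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
        waitforserial SPDKISFASTANDAWESOME          # wait until the namespace shows up
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    done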
00:12:18.185  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:20.086  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:22.620  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:25.146  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:27.045  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:29.584  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:31.484  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:33.470  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:35.998  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:38.529  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:40.432  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:42.965  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:44.868  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:47.403  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:49.306  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:51.840  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:53.743  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:56.276  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:58.179  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:00.711  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:02.613  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:05.191  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:07.093  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:09.629  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:12.162  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:14.065  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:16.598  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:18.502  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:21.036  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:22.940  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:25.470  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:27.373  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:29.913  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:31.962  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:34.496  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:36.400  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:38.932  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:40.835  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:43.368  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:45.270  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:47.803  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:49.706  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:52.237  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:54.140  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:56.672  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:59.205  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:01.150  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:03.687  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:06.219  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:08.122  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:10.654  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:12.558  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:15.088  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:16.991  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:19.524  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:21.426  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:23.961  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:25.872  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:28.404  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:30.307  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:32.839  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:34.784  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:37.315  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:39.847  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:41.749  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:44.280  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:46.182  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:48.715  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:50.619  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:53.159  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:55.062  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:57.594  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:59.497  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:15:02.030  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:15:03.934  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:15:06.505  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:15:08.408  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:15:10.939  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:15:12.841  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:15:15.374  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:15:17.275  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:15:19.808  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:15:21.712  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:15:24.244  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:15:26.147  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:15:28.679  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:15:31.210  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:15:33.111  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:15:35.643  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:15:37.568  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:15:40.163  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:15:42.067  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:15:44.599  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:15:46.501  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:15:49.032  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:15:50.934  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:15:53.469  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:15:55.372  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:15:57.903  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:15:59.805  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
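The repeated "disconnected 1 controller(s)" messages above are the output of the connect_disconnect.sh loop cycling a single NVMe-oF/TCP subsystem. A minimal sketch of that kind of loop, assuming standard nvme-cli and the 10.0.0.3:4420 listener used elsewhere in this run (the iteration count and settle time below are illustrative, not the values the test uses):

    #!/usr/bin/env bash
    # Hedged sketch of a connect/disconnect loop like the one traced above.
    NQN=nqn.2016-06.io.spdk:cnode1
    ADDR=10.0.0.3
    PORT=4420

    for _ in $(seq 1 100); do
        # Establish a TCP fabrics connection to the subsystem.
        nvme connect -t tcp -n "$NQN" -a "$ADDR" -s "$PORT"
        sleep 1
        # On success this prints: NQN:<nqn> disconnected 1 controller(s)
        nvme disconnect -n "$NQN"
    done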
00:15:59.805   19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT
00:15:59.805   19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini
00:15:59.805   19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup
00:15:59.805   19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync
00:15:59.805   19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:15:59.806   19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e
00:15:59.806   19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20}
00:15:59.806   19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:15:59.806  rmmod nvme_tcp
00:15:59.806  rmmod nvme_fabrics
00:15:59.806  rmmod nvme_keyring
00:15:59.806   19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:15:59.806   19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e
00:15:59.806   19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0
00:15:59.806   19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 87672 ']'
00:15:59.806   19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 87672
00:15:59.806   19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 87672 ']'
00:15:59.806   19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 87672
00:15:59.806    19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname
00:15:59.806   19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:15:59.806    19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87672
00:16:00.065   19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:16:00.065   19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:16:00.065  killing process with pid 87672
00:16:00.065   19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87672'
00:16:00.065   19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 87672
00:16:00.065   19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 87672
00:16:00.065   19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:16:00.065   19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:16:00.065   19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:16:00.065   19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr
00:16:00.065   19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save
00:16:00.065   19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:16:00.065   19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore
00:16:00.065   19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:16:00.065   19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:16:00.065   19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:16:00.065   19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:16:00.324   19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:16:00.324   19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:16:00.324   19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:16:00.324   19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:16:00.324   19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:16:00.324   19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:16:00.324   19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:16:00.324   19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:16:00.324   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:16:00.324   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:16:00.324   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:16:00.324   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@246 -- # remove_spdk_ns
00:16:00.324   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:00.324   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:16:00.324    19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:00.324   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@300 -- # return 0
00:16:00.324  
00:16:00.324  real	3m46.008s
00:16:00.324  user	14m37.374s
00:16:00.324  sys	0m24.900s
00:16:00.324   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:00.324   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:16:00.324  ************************************
00:16:00.324  END TEST nvmf_connect_disconnect
00:16:00.324  ************************************
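The nvmftestfini/nvmfcleanup trace that follows the loop unloads the nvme-tcp and nvme-fabrics modules, kills the nvmf_tgt process (pid 87672 in this run), restores iptables without the SPDK_NVMF-tagged rules, and removes the veth/bridge/namespace topology. A condensed, hedged sketch of that teardown, with names copied from this log (error handling and the retry loop of the real helper are simplified):

    #!/usr/bin/env bash
    # Hedged sketch of the nvmftestfini/nvmf_veth_fini sequence traced above.
    # $1 is the pid of the nvmf_tgt process started earlier.
    NVMF_TGT_PID=$1
    set +e                                  # cleanup should keep going on errors

    modprobe -v -r nvme-tcp                 # also drops nvme_fabrics/nvme_keyring
    modprobe -v -r nvme-fabrics             # the real helper retries up to 20 times

    kill "$NVMF_TGT_PID"
    while kill -0 "$NVMF_TGT_PID" 2>/dev/null; do sleep 0.1; done

    # Strip only the iptables rules tagged SPDK_NVMF, keeping everything else.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # Undo the veth/bridge/namespace topology built by nvmf_veth_init.
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster
        ip link set "$dev" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk        # assumed equivalent of remove_spdk_ns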
00:16:00.584   19:00:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp
00:16:00.584   19:00:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:16:00.584   19:00:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:00.584   19:00:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:16:00.584  ************************************
00:16:00.584  START TEST nvmf_multitarget
00:16:00.584  ************************************
00:16:00.584   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp
00:16:00.584  * Looking for test storage...
00:16:00.584  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:16:00.584    19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:16:00.584     19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version
00:16:00.584     19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:16:00.584    19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:16:00.584    19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:16:00.584    19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l
00:16:00.584    19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l
00:16:00.584    19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-:
00:16:00.584    19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1
00:16:00.584    19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-:
00:16:00.584    19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2
00:16:00.584    19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<'
00:16:00.584    19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2
00:16:00.584    19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1
00:16:00.584    19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:16:00.584    19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in
00:16:00.584    19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1
00:16:00.584    19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 ))
00:16:00.584    19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:16:00.584     19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1
00:16:00.584     19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1
00:16:00.584     19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:16:00.584     19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1
00:16:00.584    19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1
00:16:00.584     19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2
00:16:00.584     19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2
00:16:00.584     19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:16:00.584     19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2
00:16:00.584    19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2
00:16:00.584    19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:16:00.584    19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:16:00.584    19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0
00:16:00.584    19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:16:00.584    19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:16:00.584  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:00.584  		--rc genhtml_branch_coverage=1
00:16:00.584  		--rc genhtml_function_coverage=1
00:16:00.584  		--rc genhtml_legend=1
00:16:00.584  		--rc geninfo_all_blocks=1
00:16:00.584  		--rc geninfo_unexecuted_blocks=1
00:16:00.584  		
00:16:00.584  		'
00:16:00.584    19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:16:00.584  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:00.584  		--rc genhtml_branch_coverage=1
00:16:00.584  		--rc genhtml_function_coverage=1
00:16:00.584  		--rc genhtml_legend=1
00:16:00.584  		--rc geninfo_all_blocks=1
00:16:00.584  		--rc geninfo_unexecuted_blocks=1
00:16:00.584  		
00:16:00.584  		'
00:16:00.584    19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:16:00.584  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:00.584  		--rc genhtml_branch_coverage=1
00:16:00.585  		--rc genhtml_function_coverage=1
00:16:00.585  		--rc genhtml_legend=1
00:16:00.585  		--rc geninfo_all_blocks=1
00:16:00.585  		--rc geninfo_unexecuted_blocks=1
00:16:00.585  		
00:16:00.585  		'
00:16:00.585    19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:16:00.585  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:00.585  		--rc genhtml_branch_coverage=1
00:16:00.585  		--rc genhtml_function_coverage=1
00:16:00.585  		--rc genhtml_legend=1
00:16:00.585  		--rc geninfo_all_blocks=1
00:16:00.585  		--rc geninfo_unexecuted_blocks=1
00:16:00.585  		
00:16:00.585  		'
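The scripts/common.sh trace above is the version check autotest uses to decide whether the installed lcov is older than 2, and therefore which spelling of the coverage options to export: both version strings are split on '.', '-' and ':' and compared field by field. A condensed, hedged sketch of that comparison:

    #!/usr/bin/env bash
    # Hedged sketch of a cmp_versions-style "less than" check like the one traced above.
    version_lt() {                      # returns 0 (true) when $1 < $2
        local IFS='.-:'
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}
            (( a > b )) && return 1
            (( a < b )) && return 0
        done
        return 1                        # equal versions are not "less than"
    }

    if version_lt 1.15 2; then
        echo "lcov older than 2: use the --rc lcov_* option spelling"
    fi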
00:16:00.585   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:16:00.585     19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s
00:16:00.585    19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:16:00.585    19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:16:00.585    19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:16:00.585    19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:16:00.585    19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:16:00.585    19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:16:00.585    19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:16:00.585    19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:16:00.585    19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:16:00.585     19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:16:00.585    19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:16:00.585    19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:16:00.585    19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:16:00.585    19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:16:00.585    19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:16:00.585    19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:16:00.585    19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:16:00.585     19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob
00:16:00.585     19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:16:00.585     19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:16:00.585     19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:16:00.585      19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:00.585      19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:00.585      19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:00.585      19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH
00:16:00.585      19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:00.585    19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0
00:16:00.585    19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:16:00.585    19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:16:00.585    19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:16:00.585    19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:16:00.585    19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:16:00.585    19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:16:00.585  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:16:00.585    19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:16:00.585    19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:16:00.585    19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0
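The "[: : integer expression expected" message above comes from common.sh line 33 running a numeric test ('[' '' -eq 1 ']') on an empty variable; the trace continues because the test simply evaluates false. A hedged sketch of a defensive spelling for this kind of check (the variable name below is illustrative, not the one used in common.sh):

    #!/usr/bin/env bash
    # Hedged sketch: guard numeric tests against empty/unset variables so that
    # "[: : integer expression expected" is never emitted.
    SPDK_EXAMPLE_FLAG=""            # illustrative name; empty as in the traced run

    # ${var:-0} substitutes 0 when the variable is empty or unset.
    if [ "${SPDK_EXAMPLE_FLAG:-0}" -eq 1 ]; then
        echo "flag enabled"
    else
        echo "flag disabled or unset"
    fi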
00:16:00.585   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py
00:16:00.585   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit
00:16:00.585   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:16:00.585   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:16:00.585   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs
00:16:00.585   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no
00:16:00.585   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns
00:16:00.585   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:00.585   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:16:00.585    19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:00.585   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:16:00.585   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:16:00.585   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:16:00.585   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:16:00.585   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:16:00.585   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@460 -- # nvmf_veth_init
00:16:00.585   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:16:00.585   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:16:00.585   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:16:00.585   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:16:00.585   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:16:00.585   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:16:00.585   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:16:00.585   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:16:00.585   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:16:00.585   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:16:00.585   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:16:00.585   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:16:00.585   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:16:00.585   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:16:00.585   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:16:00.586   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:16:00.586   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:16:00.586  Cannot find device "nvmf_init_br"
00:16:00.586   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@162 -- # true
00:16:00.586   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:16:00.844  Cannot find device "nvmf_init_br2"
00:16:00.845   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@163 -- # true
00:16:00.845   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:16:00.845  Cannot find device "nvmf_tgt_br"
00:16:00.845   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@164 -- # true
00:16:00.845   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:16:00.845  Cannot find device "nvmf_tgt_br2"
00:16:00.845   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@165 -- # true
00:16:00.845   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:16:00.845  Cannot find device "nvmf_init_br"
00:16:00.845   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@166 -- # true
00:16:00.845   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:16:00.845  Cannot find device "nvmf_init_br2"
00:16:00.845   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@167 -- # true
00:16:00.845   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:16:00.845  Cannot find device "nvmf_tgt_br"
00:16:00.845   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@168 -- # true
00:16:00.845   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:16:00.845  Cannot find device "nvmf_tgt_br2"
00:16:00.845   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@169 -- # true
00:16:00.845   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:16:00.845  Cannot find device "nvmf_br"
00:16:00.845   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@170 -- # true
00:16:00.845   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:16:00.845  Cannot find device "nvmf_init_if"
00:16:00.845   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@171 -- # true
00:16:00.845   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:16:00.845  Cannot find device "nvmf_init_if2"
00:16:00.845   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@172 -- # true
00:16:00.845   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:16:00.845  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:16:00.845   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@173 -- # true
00:16:00.845   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:16:00.845  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:16:00.845   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@174 -- # true
00:16:00.845   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:16:00.845   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:16:00.845   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:16:00.845   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:16:00.845   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:16:00.845   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:16:00.845   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:16:00.845   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:16:00.845   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:16:00.845   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:16:00.845   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:16:00.845   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:16:00.845   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:16:00.845   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:16:01.104   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:16:01.104   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:16:01.104   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:16:01.104   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:16:01.104   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:16:01.104   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:16:01.104   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:16:01.104   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:16:01.104   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:16:01.104   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:16:01.104   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:16:01.104   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:16:01.104   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:16:01.104   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:16:01.104   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:16:01.104   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:16:01.104   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:16:01.104   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:16:01.104   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:16:01.104  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:16:01.104  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms
00:16:01.104  
00:16:01.104  --- 10.0.0.3 ping statistics ---
00:16:01.104  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:01.104  rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms
00:16:01.104   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:16:01.104  PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:16:01.104  64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms
00:16:01.104  
00:16:01.104  --- 10.0.0.4 ping statistics ---
00:16:01.104  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:01.104  rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms
00:16:01.105   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:16:01.105  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:16:01.105  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms
00:16:01.105  
00:16:01.105  --- 10.0.0.1 ping statistics ---
00:16:01.105  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:01.105  rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms
00:16:01.105   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:16:01.105  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:16:01.105  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms
00:16:01.105  
00:16:01.105  --- 10.0.0.2 ping statistics ---
00:16:01.105  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:01.105  rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms
00:16:01.105   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:16:01.105   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@461 -- # return 0
00:16:01.105   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:16:01.105   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:16:01.105   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:16:01.105   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:16:01.105   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:16:01.105   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:16:01.105   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp
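The nvmf_veth_init trace above builds the test network: a target namespace, four veth pairs, one bridge joining the host-side ends, the 10.0.0.1-10.0.0.4 addresses, SPDK_NVMF-tagged iptables rules for port 4420, and ping checks in both directions. A condensed, hedged sketch of that topology, with interface names and addresses copied from this log:

    #!/usr/bin/env bash
    # Hedged sketch of the nvmf_veth_init topology traced above.
    set -e
    NS=nvmf_tgt_ns_spdk

    ip netns add "$NS"

    # veth pairs: *_if ends carry addresses, *_br ends get enslaved to the bridge.
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

    # Target-side interfaces live inside the namespace the target will run in.
    ip link set nvmf_tgt_if  netns "$NS"
    ip link set nvmf_tgt_if2 netns "$NS"

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec "$NS" ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec "$NS" ip link set nvmf_tgt_if up
    ip netns exec "$NS" ip link set nvmf_tgt_if2 up
    ip netns exec "$NS" ip link set lo up

    # One bridge ties the four *_br ends together.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done

    # Allow NVMe/TCP traffic (port 4420) in, tagged so teardown can strip it later.
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment SPDK_NVMF

    # Sanity checks: host to target namespace and back.
    ping -c 1 10.0.0.3
    ping -c 1 10.0.0.4
    ip netns exec "$NS" ping -c 1 10.0.0.1
    ip netns exec "$NS" ping -c 1 10.0.0.2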
00:16:01.105   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF
00:16:01.105   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:16:01.105   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable
00:16:01.105   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x
00:16:01.105   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=91471
00:16:01.105   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:16:01.105   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 91471
00:16:01.105   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 91471 ']'
00:16:01.105   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:01.105  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:01.105   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100
00:16:01.105   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:01.105   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable
00:16:01.105   19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x
00:16:01.105  [2024-12-13 19:00:32.878011] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:16:01.105  [2024-12-13 19:00:32.878107] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:16:01.364  [2024-12-13 19:00:33.034527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:16:01.364  [2024-12-13 19:00:33.073215] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:16:01.364  [2024-12-13 19:00:33.073553] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:16:01.364  [2024-12-13 19:00:33.073709] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:16:01.364  [2024-12-13 19:00:33.073927] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:16:01.364  [2024-12-13 19:00:33.073945] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:16:01.364  [2024-12-13 19:00:33.075322] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:16:01.364  [2024-12-13 19:00:33.075438] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:16:01.364  [2024-12-13 19:00:33.075510] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:16:01.364  [2024-12-13 19:00:33.075507] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
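nvmfappstart then runs the SPDK target inside that namespace with core mask 0xF and waits for its JSON-RPC socket; the four "Reactor started" notices above are its startup output. A hedged sketch of the launch-and-wait step (the polling loop is an assumption, the real waitforlisten helper is more involved):

    #!/usr/bin/env bash
    # Hedged sketch of nvmfappstart: run nvmf_tgt in the test namespace and wait
    # until its JSON-RPC socket answers.
    NS=nvmf_tgt_ns_spdk
    RPC_SOCK=/var/tmp/spdk.sock
    SPDK=/home/vagrant/spdk_repo/spdk

    ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    NVMF_TGT_PID=$!

    # Poll the RPC socket instead of sleeping a fixed time.
    for _ in $(seq 1 100); do
        if "$SPDK/scripts/rpc.py" -s "$RPC_SOCK" spdk_get_version >/dev/null 2>&1; then
            break
        fi
        sleep 0.1
    done
    echo "nvmf_tgt running as pid $NVMF_TGT_PID"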
00:16:01.623   19:00:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:16:01.623   19:00:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0
00:16:01.623   19:00:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:16:01.623   19:00:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable
00:16:01.623   19:00:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x
00:16:01.623   19:00:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:16:01.623   19:00:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT
00:16:01.623    19:00:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets
00:16:01.623    19:00:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length
00:16:01.623   19:00:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']'
00:16:01.623   19:00:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32
00:16:01.882  "nvmf_tgt_1"
00:16:01.882   19:00:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32
00:16:01.882  "nvmf_tgt_2"
00:16:01.882    19:00:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets
00:16:01.883    19:00:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length
00:16:02.141   19:00:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']'
00:16:02.141   19:00:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1
00:16:02.141  true
00:16:02.141   19:00:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2
00:16:02.400  true
00:16:02.400    19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets
00:16:02.400    19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length
00:16:02.659   19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']'
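The multitarget test body above is a short RPC exercise: starting from the single default target it creates nvmf_tgt_1 and nvmf_tgt_2, checks that nvmf_get_targets now reports three targets, deletes both, and checks the count is back to one. A hedged sketch of that flow using the same multitarget_rpc.py helper and jq (the meaning of -s 32 is assumed here to be a per-target subsystem limit):

    #!/usr/bin/env bash
    # Hedged sketch of the nvmf_multitarget flow traced above.
    set -e
    RPC=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py

    count() { "$RPC" nvmf_get_targets | jq length; }

    [ "$(count)" -eq 1 ]                            # only the default target exists

    "$RPC" nvmf_create_target -n nvmf_tgt_1 -s 32   # -s 32 as in the log, assumed subsystem limit
    "$RPC" nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$(count)" -eq 3 ]

    "$RPC" nvmf_delete_target -n nvmf_tgt_1
    "$RPC" nvmf_delete_target -n nvmf_tgt_2
    [ "$(count)" -eq 1 ]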
00:16:02.659   19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:16:02.659   19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini
00:16:02.659   19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup
00:16:02.659   19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync
00:16:02.659   19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:16:02.659   19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e
00:16:02.659   19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20}
00:16:02.659   19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:16:02.659  rmmod nvme_tcp
00:16:02.659  rmmod nvme_fabrics
00:16:02.659  rmmod nvme_keyring
00:16:02.659   19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:16:02.659   19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e
00:16:02.659   19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0
00:16:02.659   19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 91471 ']'
00:16:02.659   19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 91471
00:16:02.659   19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 91471 ']'
00:16:02.659   19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 91471
00:16:02.659    19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname
00:16:02.659   19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:16:02.659    19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91471
00:16:02.659  killing process with pid 91471
00:16:02.659   19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:16:02.659   19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:16:02.659   19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91471'
00:16:02.659   19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 91471
00:16:02.659   19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 91471
00:16:02.918   19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:16:02.918   19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:16:02.918   19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:16:02.918   19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr
00:16:02.918   19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save
00:16:02.918   19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:16:02.918   19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore
00:16:02.918   19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:16:02.918   19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:16:02.918   19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:16:02.918   19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:16:02.918   19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:16:02.918   19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:16:02.918   19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:16:02.918   19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:16:02.918   19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:16:02.918   19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:16:02.918   19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:16:02.918   19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:16:02.918   19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:16:02.919   19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:16:02.919   19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:16:02.919   19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@246 -- # remove_spdk_ns
00:16:02.919   19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:02.919   19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:16:02.919    19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:03.179   19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@300 -- # return 0
00:16:03.179  ************************************
00:16:03.179  END TEST nvmf_multitarget
00:16:03.179  ************************************
00:16:03.179  
00:16:03.179  real	0m2.602s
00:16:03.179  user	0m7.144s
00:16:03.179  sys	0m0.751s
00:16:03.179   19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:03.179   19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x
00:16:03.179   19:00:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp
00:16:03.179   19:00:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:16:03.179   19:00:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:03.179   19:00:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:16:03.179  ************************************
00:16:03.179  START TEST nvmf_rpc
00:16:03.179  ************************************
00:16:03.179   19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp
00:16:03.179  * Looking for test storage...
00:16:03.179  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:16:03.179    19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:16:03.179     19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:16:03.179     19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:16:03.179    19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:16:03.179    19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:16:03.179    19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:16:03.179    19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:16:03.179    19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:16:03.179    19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:16:03.179    19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:16:03.179    19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:16:03.179    19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:16:03.179    19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:16:03.179    19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:16:03.179    19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:16:03.179    19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in
00:16:03.179    19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1
00:16:03.179    19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:16:03.179    19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:16:03.179     19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1
00:16:03.179     19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1
00:16:03.179     19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:16:03.179     19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1
00:16:03.179    19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:16:03.179     19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2
00:16:03.179     19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2
00:16:03.179     19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:16:03.179     19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2
00:16:03.179    19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:16:03.179    19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:16:03.179    19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:16:03.179    19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0
00:16:03.179    19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:16:03.179    19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:16:03.179  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:03.179  		--rc genhtml_branch_coverage=1
00:16:03.179  		--rc genhtml_function_coverage=1
00:16:03.179  		--rc genhtml_legend=1
00:16:03.179  		--rc geninfo_all_blocks=1
00:16:03.179  		--rc geninfo_unexecuted_blocks=1
00:16:03.179  		
00:16:03.179  		'
00:16:03.179    19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:16:03.179  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:03.179  		--rc genhtml_branch_coverage=1
00:16:03.179  		--rc genhtml_function_coverage=1
00:16:03.179  		--rc genhtml_legend=1
00:16:03.179  		--rc geninfo_all_blocks=1
00:16:03.179  		--rc geninfo_unexecuted_blocks=1
00:16:03.179  		
00:16:03.179  		'
00:16:03.179    19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:16:03.179  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:03.179  		--rc genhtml_branch_coverage=1
00:16:03.179  		--rc genhtml_function_coverage=1
00:16:03.179  		--rc genhtml_legend=1
00:16:03.179  		--rc geninfo_all_blocks=1
00:16:03.179  		--rc geninfo_unexecuted_blocks=1
00:16:03.179  		
00:16:03.179  		'
00:16:03.179    19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:16:03.179  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:03.179  		--rc genhtml_branch_coverage=1
00:16:03.179  		--rc genhtml_function_coverage=1
00:16:03.179  		--rc genhtml_legend=1
00:16:03.179  		--rc geninfo_all_blocks=1
00:16:03.179  		--rc geninfo_unexecuted_blocks=1
00:16:03.179  		
00:16:03.179  		'
00:16:03.179   19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:16:03.179     19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s
00:16:03.443    19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:16:03.443    19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:16:03.443    19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:16:03.443    19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:16:03.443    19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:16:03.443    19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:16:03.443    19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:16:03.443    19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:16:03.443    19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:16:03.443     19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:16:03.443    19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:16:03.443    19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:16:03.443    19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:16:03.443    19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:16:03.443    19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:16:03.443    19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:16:03.443    19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:16:03.443     19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob
00:16:03.443     19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:16:03.443     19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:16:03.443     19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:16:03.443      19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:03.443      19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:03.444      19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:03.444      19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH
00:16:03.444      19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:03.444    19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0
00:16:03.444    19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:16:03.444    19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:16:03.444    19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:16:03.444    19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:16:03.444    19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:16:03.444    19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:16:03.444  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:16:03.444    19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:16:03.444    19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:16:03.444    19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0
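Note on the "integer expression expected" message above: line 33 of test/nvmf/common.sh runs a numeric test against an empty string ('[' '' -eq 1 ']'), so [ prints the complaint, returns false, and build_nvmf_app_args simply continues; the run is not affected. A hedged sketch of the pattern and a quieter equivalent (the variable name is illustrative):

    [ "$flag" -eq 1 ]          # noisy when $flag is empty, but still evaluates to false
    [ "${flag:-0}" -eq 1 ]     # defaulting the value avoids the message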
00:16:03.444   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5
00:16:03.444   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit
00:16:03.444   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:16:03.444   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:16:03.444   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs
00:16:03.444   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no
00:16:03.444   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns
00:16:03.444   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:03.444   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:16:03.444    19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:03.444   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:16:03.444   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:16:03.444   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:16:03.444   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:16:03.444   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:16:03.444   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@460 -- # nvmf_veth_init
00:16:03.444   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:16:03.444   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:16:03.444   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:16:03.444   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:16:03.444   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:16:03.444   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:16:03.444   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:16:03.444   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:16:03.444   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:16:03.444   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:16:03.444   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:16:03.444   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:16:03.444   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:16:03.444   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:16:03.444   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:16:03.444   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:16:03.444   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:16:03.444  Cannot find device "nvmf_init_br"
00:16:03.444   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@162 -- # true
00:16:03.444   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:16:03.444  Cannot find device "nvmf_init_br2"
00:16:03.444   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@163 -- # true
00:16:03.444   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:16:03.444  Cannot find device "nvmf_tgt_br"
00:16:03.444   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@164 -- # true
00:16:03.444   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:16:03.444  Cannot find device "nvmf_tgt_br2"
00:16:03.444   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@165 -- # true
00:16:03.444   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:16:03.444  Cannot find device "nvmf_init_br"
00:16:03.444   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@166 -- # true
00:16:03.444   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:16:03.444  Cannot find device "nvmf_init_br2"
00:16:03.444   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@167 -- # true
00:16:03.444   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:16:03.444  Cannot find device "nvmf_tgt_br"
00:16:03.444   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@168 -- # true
00:16:03.444   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:16:03.444  Cannot find device "nvmf_tgt_br2"
00:16:03.444   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@169 -- # true
00:16:03.444   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:16:03.444  Cannot find device "nvmf_br"
00:16:03.444   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@170 -- # true
00:16:03.444   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:16:03.444  Cannot find device "nvmf_init_if"
00:16:03.444   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@171 -- # true
00:16:03.444   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:16:03.444  Cannot find device "nvmf_init_if2"
00:16:03.444   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@172 -- # true
00:16:03.444   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:16:03.444  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:16:03.444   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@173 -- # true
00:16:03.444   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:16:03.444  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:16:03.444   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@174 -- # true
00:16:03.444   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:16:03.444   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:16:03.444   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:16:03.444   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:16:03.444   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:16:03.444   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:16:03.444   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:16:03.444   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:16:03.444   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:16:03.444   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:16:03.703   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:16:03.703   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:16:03.703   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:16:03.703   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:16:03.703   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:16:03.703   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:16:03.703   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:16:03.703   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:16:03.703   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:16:03.703   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:16:03.703   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:16:03.703   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:16:03.703   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:16:03.703   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:16:03.703   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:16:03.703   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:16:03.703   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:16:03.703   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:16:03.703   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:16:03.703   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:16:03.703   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:16:03.703   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
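To summarize the plumbing above: nvmf_veth_init creates two initiator veth pairs and two target veth pairs, moves the target-side ends into the nvmf_tgt_ns_spdk namespace, enslaves all host-side peers to the nvmf_br bridge, brings everything up, and opens TCP port 4420 on the initiator interfaces via iptables. A condensed recap using the same commands (only the first initiator/target pair shown):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

Connectivity is then verified with single pings in both directions, as seen next.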
00:16:03.703   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:16:03.703  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:16:03.703  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms
00:16:03.703  
00:16:03.703  --- 10.0.0.3 ping statistics ---
00:16:03.703  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:03.703  rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms
00:16:03.703   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:16:03.703  PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:16:03.703  64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms
00:16:03.703  
00:16:03.703  --- 10.0.0.4 ping statistics ---
00:16:03.703  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:03.703  rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms
00:16:03.703   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:16:03.703  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:16:03.703  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms
00:16:03.703  
00:16:03.703  --- 10.0.0.1 ping statistics ---
00:16:03.703  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:03.703  rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms
00:16:03.703   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:16:03.703  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:16:03.703  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms
00:16:03.703  
00:16:03.703  --- 10.0.0.2 ping statistics ---
00:16:03.703  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:03.703  rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms
00:16:03.703   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:16:03.703   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@461 -- # return 0
00:16:03.703   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:16:03.703   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:16:03.703   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:16:03.703   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:16:03.703   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:16:03.703   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:16:03.703   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:16:03.703   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF
00:16:03.703   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:16:03.703   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable
00:16:03.703   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:03.703   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=91741
00:16:03.703   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:16:03.703   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 91741
00:16:03.703   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 91741 ']'
00:16:03.703   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:03.703   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:16:03.703  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:03.703   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:03.703   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:16:03.703   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:03.703  [2024-12-13 19:00:35.511347] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:16:03.703  [2024-12-13 19:00:35.511441] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:16:03.961  [2024-12-13 19:00:35.659501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:16:03.961  [2024-12-13 19:00:35.693037] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:16:03.961  [2024-12-13 19:00:35.693113] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:16:03.961  [2024-12-13 19:00:35.693139] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:16:03.961  [2024-12-13 19:00:35.693146] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:16:03.961  [2024-12-13 19:00:35.693153] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:16:03.961  [2024-12-13 19:00:35.694362] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:16:03.961  [2024-12-13 19:00:35.694463] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:16:03.961  [2024-12-13 19:00:35.694580] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:16:03.961  [2024-12-13 19:00:35.694584] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
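nvmfappstart launches the target inside the test namespace (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF; pid 91741 here) and then waits for the RPC socket before continuing. A rough sketch of what that wait amounts to (the polling loop is illustrative; only the socket path comes from the log):

    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done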
00:16:04.220   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:16:04.220   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0
00:16:04.220   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:16:04.220   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable
00:16:04.220   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:04.220   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:16:04.220    19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats
00:16:04.220    19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:04.221    19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:04.221    19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:04.221   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{
00:16:04.221  "poll_groups": [
00:16:04.221  {
00:16:04.221  "admin_qpairs": 0,
00:16:04.221  "completed_nvme_io": 0,
00:16:04.221  "current_admin_qpairs": 0,
00:16:04.221  "current_io_qpairs": 0,
00:16:04.221  "io_qpairs": 0,
00:16:04.221  "name": "nvmf_tgt_poll_group_000",
00:16:04.221  "pending_bdev_io": 0,
00:16:04.221  "transports": []
00:16:04.221  },
00:16:04.221  {
00:16:04.221  "admin_qpairs": 0,
00:16:04.221  "completed_nvme_io": 0,
00:16:04.221  "current_admin_qpairs": 0,
00:16:04.221  "current_io_qpairs": 0,
00:16:04.221  "io_qpairs": 0,
00:16:04.221  "name": "nvmf_tgt_poll_group_001",
00:16:04.221  "pending_bdev_io": 0,
00:16:04.221  "transports": []
00:16:04.221  },
00:16:04.221  {
00:16:04.221  "admin_qpairs": 0,
00:16:04.221  "completed_nvme_io": 0,
00:16:04.221  "current_admin_qpairs": 0,
00:16:04.221  "current_io_qpairs": 0,
00:16:04.221  "io_qpairs": 0,
00:16:04.221  "name": "nvmf_tgt_poll_group_002",
00:16:04.221  "pending_bdev_io": 0,
00:16:04.221  "transports": []
00:16:04.221  },
00:16:04.221  {
00:16:04.221  "admin_qpairs": 0,
00:16:04.221  "completed_nvme_io": 0,
00:16:04.221  "current_admin_qpairs": 0,
00:16:04.221  "current_io_qpairs": 0,
00:16:04.221  "io_qpairs": 0,
00:16:04.221  "name": "nvmf_tgt_poll_group_003",
00:16:04.221  "pending_bdev_io": 0,
00:16:04.221  "transports": []
00:16:04.221  }
00:16:04.221  ],
00:16:04.221  "tick_rate": 2200000000
00:16:04.221  }'
00:16:04.221    19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name'
00:16:04.221    19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name'
00:16:04.221    19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name'
00:16:04.221    19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l
00:16:04.221   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 ))
00:16:04.221    19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]'
00:16:04.221   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]]
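The two checks above parse the nvmf_get_stats output: jcount counts the .poll_groups[].name entries (4, one reactor per core in mask 0xF), and the second jq confirms no transport is attached to the first poll group yet. Written out directly (filters taken from the log):

    echo "$stats" | jq '.poll_groups[].name' | wc -l       # expect 4 poll groups
    echo "$stats" | jq '.poll_groups[0].transports[0]'     # null until nvmf_create_transport runs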
00:16:04.221   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:16:04.221   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:04.221   19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:04.221  [2024-12-13 19:00:35.992417] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:16:04.221   19:00:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:04.221    19:00:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats
00:16:04.221    19:00:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:04.221    19:00:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:04.221    19:00:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:04.221   19:00:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{
00:16:04.221  "poll_groups": [
00:16:04.221  {
00:16:04.221  "admin_qpairs": 0,
00:16:04.221  "completed_nvme_io": 0,
00:16:04.221  "current_admin_qpairs": 0,
00:16:04.221  "current_io_qpairs": 0,
00:16:04.221  "io_qpairs": 0,
00:16:04.221  "name": "nvmf_tgt_poll_group_000",
00:16:04.221  "pending_bdev_io": 0,
00:16:04.221  "transports": [
00:16:04.221  {
00:16:04.221  "trtype": "TCP"
00:16:04.221  }
00:16:04.221  ]
00:16:04.221  },
00:16:04.221  {
00:16:04.221  "admin_qpairs": 0,
00:16:04.221  "completed_nvme_io": 0,
00:16:04.221  "current_admin_qpairs": 0,
00:16:04.221  "current_io_qpairs": 0,
00:16:04.221  "io_qpairs": 0,
00:16:04.221  "name": "nvmf_tgt_poll_group_001",
00:16:04.221  "pending_bdev_io": 0,
00:16:04.221  "transports": [
00:16:04.221  {
00:16:04.221  "trtype": "TCP"
00:16:04.221  }
00:16:04.221  ]
00:16:04.221  },
00:16:04.221  {
00:16:04.221  "admin_qpairs": 0,
00:16:04.221  "completed_nvme_io": 0,
00:16:04.221  "current_admin_qpairs": 0,
00:16:04.221  "current_io_qpairs": 0,
00:16:04.221  "io_qpairs": 0,
00:16:04.221  "name": "nvmf_tgt_poll_group_002",
00:16:04.221  "pending_bdev_io": 0,
00:16:04.221  "transports": [
00:16:04.221  {
00:16:04.221  "trtype": "TCP"
00:16:04.221  }
00:16:04.221  ]
00:16:04.221  },
00:16:04.221  {
00:16:04.221  "admin_qpairs": 0,
00:16:04.221  "completed_nvme_io": 0,
00:16:04.221  "current_admin_qpairs": 0,
00:16:04.221  "current_io_qpairs": 0,
00:16:04.221  "io_qpairs": 0,
00:16:04.221  "name": "nvmf_tgt_poll_group_003",
00:16:04.221  "pending_bdev_io": 0,
00:16:04.221  "transports": [
00:16:04.221  {
00:16:04.221  "trtype": "TCP"
00:16:04.221  }
00:16:04.221  ]
00:16:04.221  }
00:16:04.221  ],
00:16:04.221  "tick_rate": 2200000000
00:16:04.221  }'
00:16:04.221    19:00:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs'
00:16:04.221    19:00:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs'
00:16:04.221    19:00:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs'
00:16:04.221    19:00:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:16:04.480   19:00:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 ))
00:16:04.480    19:00:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs'
00:16:04.480    19:00:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs'
00:16:04.480    19:00:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs'
00:16:04.480    19:00:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:16:04.480   19:00:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 ))
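jsum, used twice above, sums one numeric field across all poll groups; with no hosts connected, both admin_qpairs and io_qpairs total 0, while the stats now carry a "TCP" transport entry per group after nvmf_create_transport. The underlying pipeline, with the jq filter and awk program as in the log:

    echo "$stats" | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1}END{print s}'    # 0 at this point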
00:16:04.480   19:00:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']'
00:16:04.480   19:00:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64
00:16:04.480   19:00:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512
00:16:04.480   19:00:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:16:04.480   19:00:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:04.480   19:00:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:04.480  Malloc1
00:16:04.480   19:00:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:04.480   19:00:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:16:04.480   19:00:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:04.480   19:00:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:04.480   19:00:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:04.480   19:00:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:16:04.480   19:00:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:04.480   19:00:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:04.480   19:00:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:04.480   19:00:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1
00:16:04.480   19:00:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:04.480   19:00:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:04.480   19:00:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:04.480   19:00:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:16:04.480   19:00:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:04.480   19:00:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:04.481  [2024-12-13 19:00:36.205869] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:16:04.481   19:00:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:04.481   19:00:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid=8f716199-a3ae-4f70-9a3a-0556e5b7497a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -a 10.0.0.3 -s 4420
00:16:04.481   19:00:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0
00:16:04.481   19:00:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid=8f716199-a3ae-4f70-9a3a-0556e5b7497a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -a 10.0.0.3 -s 4420
00:16:04.481   19:00:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme
00:16:04.481   19:00:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:04.481    19:00:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme
00:16:04.481   19:00:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:04.481    19:00:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme
00:16:04.481   19:00:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:04.481   19:00:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme
00:16:04.481   19:00:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]]
00:16:04.481   19:00:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid=8f716199-a3ae-4f70-9a3a-0556e5b7497a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -a 10.0.0.3 -s 4420
00:16:04.481  [2024-12-13 19:00:36.230315] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a'
00:16:04.481  Failed to write to /dev/nvme-fabrics: Input/output error
00:16:04.481  could not add new controller: failed to write to nvme-fabrics device
00:16:04.481   19:00:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1
00:16:04.481   19:00:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:16:04.481   19:00:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:16:04.481   19:00:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
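This block exercises host access control: the subsystem is created, allow-any-host is switched off, a TCP listener is added on 10.0.0.3:4420, and the first nvme connect is expected to fail (hence the NOT wrapper) because the host NQN is not on the allowed list. The sequence so far, condensed from the log:

    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 --hostnqn="$NVME_HOSTNQN"    # rejected: host not allowed

Immediately below, nvmf_subsystem_add_host whitelists the host NQN and the same connect is retried successfully; later the host is removed again, the rejection is re-checked, and allow-any-host is re-enabled.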
00:16:04.481   19:00:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:16:04.481   19:00:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:04.481   19:00:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:04.481   19:00:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:04.481   19:00:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid=8f716199-a3ae-4f70-9a3a-0556e5b7497a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
00:16:04.739   19:00:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME
00:16:04.739   19:00:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:16:04.739   19:00:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:16:04.739   19:00:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:16:04.739   19:00:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:16:06.683   19:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:16:06.683    19:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:16:06.683    19:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:16:06.683   19:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:16:06.683   19:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:16:06.683   19:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:16:06.683   19:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:16:06.683  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:06.683   19:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:16:06.683   19:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:16:06.683   19:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:16:06.683   19:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:06.683   19:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:16:06.683   19:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:06.683   19:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:16:06.683   19:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:16:06.683   19:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:06.683   19:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:06.942   19:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:06.942   19:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid=8f716199-a3ae-4f70-9a3a-0556e5b7497a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
00:16:06.942   19:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0
00:16:06.942   19:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid=8f716199-a3ae-4f70-9a3a-0556e5b7497a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
00:16:06.943   19:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme
00:16:06.943   19:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:06.943    19:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme
00:16:06.943   19:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:06.943    19:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme
00:16:06.943   19:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:06.943   19:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme
00:16:06.943   19:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]]
00:16:06.943   19:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid=8f716199-a3ae-4f70-9a3a-0556e5b7497a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
00:16:06.943  [2024-12-13 19:00:38.531680] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a'
00:16:06.943  Failed to write to /dev/nvme-fabrics: Input/output error
00:16:06.943  could not add new controller: failed to write to nvme-fabrics device
00:16:06.943   19:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1
00:16:06.943   19:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:16:06.943   19:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:16:06.943   19:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:16:06.943   19:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1
00:16:06.943   19:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:06.943   19:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:06.943   19:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:06.943   19:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid=8f716199-a3ae-4f70-9a3a-0556e5b7497a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
00:16:06.943   19:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME
00:16:06.943   19:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:16:06.943   19:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:16:06.943   19:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:16:06.943   19:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:16:09.476   19:00:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:16:09.476    19:00:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:16:09.476    19:00:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:16:09.476   19:00:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:16:09.476   19:00:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:16:09.476   19:00:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:16:09.476   19:00:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:16:09.476  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:09.476   19:00:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:16:09.476   19:00:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:16:09.476   19:00:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:16:09.476   19:00:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:09.476   19:00:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:16:09.476   19:00:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:09.476   19:00:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:16:09.476   19:00:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:16:09.476   19:00:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:09.476   19:00:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:09.476   19:00:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:09.476    19:00:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5
00:16:09.476   19:00:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:16:09.476   19:00:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:16:09.476   19:00:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:09.476   19:00:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:09.476   19:00:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:09.476   19:00:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:16:09.476   19:00:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:09.476   19:00:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:09.476  [2024-12-13 19:00:40.843788] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:16:09.476   19:00:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:09.476   19:00:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:16:09.476   19:00:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:09.476   19:00:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:09.476   19:00:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:09.476   19:00:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:16:09.476   19:00:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:09.476   19:00:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:09.476   19:00:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:09.476   19:00:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid=8f716199-a3ae-4f70-9a3a-0556e5b7497a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
00:16:09.476   19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:16:09.477   19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:16:09.477   19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:16:09.477   19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:16:09.477   19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:16:11.380   19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:16:11.380    19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:16:11.380    19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:16:11.380   19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:16:11.380   19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:16:11.380   19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:16:11.380   19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:16:11.380  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:11.380   19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:16:11.380   19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:16:11.380   19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:16:11.380   19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:11.380   19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:16:11.380   19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:11.380   19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:16:11.380   19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:16:11.380   19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:11.380   19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:11.380   19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:11.380   19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:16:11.380   19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:11.380   19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:11.380   19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
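From here the test repeats the same create/attach/connect/teardown cycle; loops=5 was set earlier, so the pattern above runs five times. The shape of one iteration, condensed from the log:

    for i in $(seq 1 5); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 --hostnqn="$NVME_HOSTNQN"
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done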
00:16:11.380   19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:16:11.380   19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:16:11.380   19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:11.380   19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:11.380   19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:11.380   19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:16:11.380   19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:11.380   19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:11.380  [2024-12-13 19:00:43.158650] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:16:11.380   19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:11.380   19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:16:11.380   19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:11.380   19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:11.380   19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:11.380   19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:16:11.380   19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:11.381   19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:11.381   19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:11.381   19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid=8f716199-a3ae-4f70-9a3a-0556e5b7497a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
00:16:11.639   19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:16:11.639   19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:16:11.639   19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:16:11.639   19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:16:11.639   19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:16:13.542   19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:16:13.542    19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:16:13.542    19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:16:13.801   19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:16:13.801   19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:16:13.801   19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:16:13.801   19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:16:13.801  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:13.801   19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:16:13.801   19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:16:13.801   19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:16:13.801   19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:13.801   19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:16:13.801   19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:13.801   19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:16:13.801   19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:16:13.801   19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:13.801   19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:13.801   19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:13.801   19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:16:13.801   19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:13.801   19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:13.801   19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
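For readability, the loop iteration traced above (target/rpc.sh lines 81-94) reduces to roughly the following; this is a sketch reconstructed from the xtrace output, assuming rpc_cmd forwards to scripts/rpc.py and that waitforserial / waitforserial_disconnect poll lsblk for the namespace serial, exactly as the trace shows.

    # Sketch of the serial-number connect/disconnect loop, reconstructed from the
    # trace above; rpc_cmd, waitforserial and waitforserial_disconnect are the
    # helpers whose bodies are traced there.
    for i in $(seq 1 $loops); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
        waitforserial SPDKISFASTANDAWESOME              # poll lsblk until the serial appears
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
        waitforserial_disconnect SPDKISFASTANDAWESOME   # poll lsblk until the serial is gone
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done

The remaining iterations of this loop repeat the same sequence and are traced below.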
00:16:13.801   19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:16:13.801   19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:16:13.801   19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:13.801   19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:13.801   19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:13.801   19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:16:13.801   19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:13.801   19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:13.801  [2024-12-13 19:00:45.461577] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:16:13.801   19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:13.801   19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:16:13.801   19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:13.801   19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:13.801   19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:13.801   19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:16:13.801   19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:13.801   19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:13.801   19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:13.801   19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid=8f716199-a3ae-4f70-9a3a-0556e5b7497a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
00:16:14.060   19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:16:14.060   19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:16:14.060   19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:16:14.060   19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:16:14.060   19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:16:15.962   19:00:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:16:15.962    19:00:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:16:15.962    19:00:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:16:15.962   19:00:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:16:15.962   19:00:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:16:15.962   19:00:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:16:15.962   19:00:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:16:15.962  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:15.962   19:00:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:16:15.962   19:00:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:16:15.962   19:00:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:16:15.962   19:00:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:15.962   19:00:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:16:15.962   19:00:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:15.962   19:00:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:16:15.962   19:00:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:16:15.962   19:00:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:15.962   19:00:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:15.962   19:00:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:15.962   19:00:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:16:15.962   19:00:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:15.962   19:00:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:15.962   19:00:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:15.962   19:00:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:16:15.962   19:00:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:16:15.962   19:00:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:15.962   19:00:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:15.962   19:00:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:15.962   19:00:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:16:15.962   19:00:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:15.962   19:00:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:15.962  [2024-12-13 19:00:47.776638] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:16:15.962   19:00:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:15.962   19:00:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:16:15.962   19:00:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:15.962   19:00:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:16.221   19:00:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:16.221   19:00:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:16:16.221   19:00:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:16.221   19:00:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:16.221   19:00:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:16.221   19:00:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid=8f716199-a3ae-4f70-9a3a-0556e5b7497a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
00:16:16.221   19:00:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:16:16.221   19:00:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:16:16.221   19:00:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:16:16.221   19:00:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:16:16.221   19:00:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:16:18.754   19:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:16:18.754    19:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:16:18.754    19:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:16:18.754   19:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:16:18.754   19:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:16:18.754   19:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:16:18.754   19:00:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:16:18.755  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:18.755   19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:16:18.755   19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:16:18.755   19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:16:18.755   19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:18.755   19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:16:18.755   19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:18.755   19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:16:18.755   19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:16:18.755   19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:18.755   19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:18.755   19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:18.755   19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:16:18.755   19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:18.755   19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:18.755   19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:18.755   19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:16:18.755   19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:16:18.755   19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:18.755   19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:18.755   19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:18.755   19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:16:18.755   19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:18.755   19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:18.755  [2024-12-13 19:00:50.097000] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:16:18.755   19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:18.755   19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:16:18.755   19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:18.755   19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:18.755   19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:18.755   19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:16:18.755   19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:18.755   19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:18.755   19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:18.755   19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid=8f716199-a3ae-4f70-9a3a-0556e5b7497a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
00:16:18.755   19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:16:18.755   19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:16:18.755   19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:16:18.755   19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:16:18.755   19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:16:20.659   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:16:20.659    19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:16:20.659    19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:16:20.659   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:16:20.659   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:16:20.659   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:16:20.659   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:16:20.659  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:20.659   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:16:20.659   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:16:20.660   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:16:20.660   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:20.660   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:16:20.660   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:20.660   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:16:20.660   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:16:20.660   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:20.660   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:20.660   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:20.660   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:16:20.660   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:20.660   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:20.660   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:20.660    19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5
00:16:20.660   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops)
00:16:20.660   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:16:20.660   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:20.660   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:20.660   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:20.660   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:16:20.660   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:20.660   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:20.660  [2024-12-13 19:00:52.408035] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:16:20.660   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:20.660   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:16:20.660   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:20.660   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:20.660   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:20.660   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:16:20.660   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:20.660   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:20.660   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:20.660   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:20.660   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:20.660   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:20.660   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:20.660   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:16:20.660   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:20.660   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:20.660   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
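The second loop (target/rpc.sh lines 99-107, traced above and repeated below) exercises namespace add and remove without a host connection; a sketch reconstructed from the trace:

    # Sketch of the namespace add/remove loop (no host connect). Because
    # nvmf_subsystem_add_ns is called without -n here, the namespace gets nsid 1,
    # which is what the later remove_ns call targets.
    for i in $(seq 1 $loops); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done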
00:16:20.660   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops)
00:16:20.660   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:16:20.660   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:20.660   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:20.660   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:20.660   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:16:20.660   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:20.660   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:20.660  [2024-12-13 19:00:52.460036] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:16:20.660   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:20.660   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:16:20.660   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:20.660   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:20.660   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:20.660   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:16:20.660   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:20.660   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:20.660   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:20.660   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:20.660   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:20.660   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:20.919   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops)
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:20.920  [2024-12-13 19:00:52.516113] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops)
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:20.920  [2024-12-13 19:00:52.568146] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops)
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:20.920  [2024-12-13 19:00:52.620206] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:20.920    19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats
00:16:20.920    19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:20.920    19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:20.920    19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:20.920   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{
00:16:20.920  "poll_groups": [
00:16:20.920  {
00:16:20.920  "admin_qpairs": 2,
00:16:20.920  "completed_nvme_io": 115,
00:16:20.920  "current_admin_qpairs": 0,
00:16:20.920  "current_io_qpairs": 0,
00:16:20.920  "io_qpairs": 16,
00:16:20.920  "name": "nvmf_tgt_poll_group_000",
00:16:20.920  "pending_bdev_io": 0,
00:16:20.920  "transports": [
00:16:20.920  {
00:16:20.920  "trtype": "TCP"
00:16:20.920  }
00:16:20.920  ]
00:16:20.920  },
00:16:20.920  {
00:16:20.920  "admin_qpairs": 3,
00:16:20.920  "completed_nvme_io": 68,
00:16:20.920  "current_admin_qpairs": 0,
00:16:20.920  "current_io_qpairs": 0,
00:16:20.920  "io_qpairs": 17,
00:16:20.920  "name": "nvmf_tgt_poll_group_001",
00:16:20.920  "pending_bdev_io": 0,
00:16:20.920  "transports": [
00:16:20.920  {
00:16:20.920  "trtype": "TCP"
00:16:20.920  }
00:16:20.920  ]
00:16:20.920  },
00:16:20.920  {
00:16:20.920  "admin_qpairs": 1,
00:16:20.920  "completed_nvme_io": 119,
00:16:20.920  "current_admin_qpairs": 0,
00:16:20.920  "current_io_qpairs": 0,
00:16:20.920  "io_qpairs": 19,
00:16:20.920  "name": "nvmf_tgt_poll_group_002",
00:16:20.920  "pending_bdev_io": 0,
00:16:20.920  "transports": [
00:16:20.920  {
00:16:20.920  "trtype": "TCP"
00:16:20.920  }
00:16:20.920  ]
00:16:20.920  },
00:16:20.920  {
00:16:20.920  "admin_qpairs": 1,
00:16:20.920  "completed_nvme_io": 118,
00:16:20.920  "current_admin_qpairs": 0,
00:16:20.920  "current_io_qpairs": 0,
00:16:20.920  "io_qpairs": 18,
00:16:20.920  "name": "nvmf_tgt_poll_group_003",
00:16:20.920  "pending_bdev_io": 0,
00:16:20.920  "transports": [
00:16:20.920  {
00:16:20.920  "trtype": "TCP"
00:16:20.920  }
00:16:20.920  ]
00:16:20.920  }
00:16:20.920  ],
00:16:20.920  "tick_rate": 2200000000
00:16:20.920  }'
00:16:20.921    19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs'
00:16:20.921    19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs'
00:16:20.921    19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs'
00:16:20.921    19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:16:20.921   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 ))
00:16:20.921    19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs'
00:16:20.921    19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs'
00:16:20.921    19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs'
00:16:20.921    19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:16:21.180   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 70 > 0 ))
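The admin/io qpair totals checked above come from the jsum helper traced at target/rpc.sh line 20; reconstructed from the trace, it is roughly the following. Feeding $stats through a herestring is an assumption; the jq filter and the awk summing stage are exactly as traced.

    # Sketch of the jsum helper: select one numeric field per poll group with jq,
    # then sum the values with awk.
    jsum() {
        local filter=$1
        jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
    }
    # usage, as in target/rpc.sh@112-113:
    #   (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))   # 2+3+1+1 = 7 here
    #   (( $(jsum '.poll_groups[].io_qpairs') > 0 ))      # 16+17+19+18 = 70 here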
00:16:21.180   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']'
00:16:21.180   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT
00:16:21.180   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini
00:16:21.180   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup
00:16:21.180   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync
00:16:21.180   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:16:21.180   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e
00:16:21.180   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20}
00:16:21.180   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:16:21.180  rmmod nvme_tcp
00:16:21.180  rmmod nvme_fabrics
00:16:21.180  rmmod nvme_keyring
00:16:21.180   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:16:21.180   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e
00:16:21.180   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0
00:16:21.180   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 91741 ']'
00:16:21.180   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 91741
00:16:21.180   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 91741 ']'
00:16:21.180   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 91741
00:16:21.180    19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname
00:16:21.180   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:16:21.180    19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91741
00:16:21.180   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:16:21.180   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:16:21.180   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91741'
00:16:21.180  killing process with pid 91741
00:16:21.180   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 91741
00:16:21.180   19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 91741
00:16:21.439   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:16:21.439   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:16:21.439   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:16:21.439   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr
00:16:21.439   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:16:21.439   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save
00:16:21.439   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore
00:16:21.439   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:16:21.439   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:16:21.439   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:16:21.439   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:16:21.439   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:16:21.439   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:16:21.439   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:16:21.439   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:16:21.439   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:16:21.439   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:16:21.439   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:16:21.439   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:16:21.697   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:16:21.697   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:16:21.698   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:16:21.698   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@246 -- # remove_spdk_ns
00:16:21.698   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:21.698   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:16:21.698    19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:21.698   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@300 -- # return 0
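The teardown traced above (nvmftestfini, then nvmfcleanup, killprocess, nvmf_veth_fini and remove_spdk_ns) unloads the host NVMe modules, stops the target application (pid 91741 in this run) and removes the veth/bridge topology. A condensed sketch of the link cleanup, reconstructed from the ip commands traced above:

    # Condensed sketch of nvmf_veth_fini as traced: detach the veth legs from the
    # bridge, bring them down, delete the bridge and host-side interfaces, then
    # delete the target-side interfaces inside the nvmf_tgt_ns_spdk namespace and
    # remove the namespace itself.
    nvmf_veth_fini() {
        ip link set nvmf_init_br nomaster
        ip link set nvmf_init_br2 nomaster
        ip link set nvmf_tgt_br nomaster
        ip link set nvmf_tgt_br2 nomaster
        ip link set nvmf_init_br down
        ip link set nvmf_init_br2 down
        ip link set nvmf_tgt_br down
        ip link set nvmf_tgt_br2 down
        ip link delete nvmf_br type bridge
        ip link delete nvmf_init_if
        ip link delete nvmf_init_if2
        ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
        ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
        remove_spdk_ns
    }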
00:16:21.698  
00:16:21.698  real	0m18.548s
00:16:21.698  user	1m8.544s
00:16:21.698  sys	0m2.657s
00:16:21.698   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:21.698   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:21.698  ************************************
00:16:21.698  END TEST nvmf_rpc
00:16:21.698  ************************************
00:16:21.698   19:00:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp
00:16:21.698   19:00:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:16:21.698   19:00:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:21.698   19:00:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:16:21.698  ************************************
00:16:21.698  START TEST nvmf_invalid
00:16:21.698  ************************************
00:16:21.698   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp
00:16:21.698  * Looking for test storage...
00:16:21.698  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:16:21.698    19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:16:21.698     19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version
00:16:21.698     19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:16:21.958    19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:16:21.958    19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:16:21.958    19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l
00:16:21.958    19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l
00:16:21.958    19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-:
00:16:21.958    19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1
00:16:21.958    19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-:
00:16:21.958    19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2
00:16:21.958    19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<'
00:16:21.958    19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2
00:16:21.958    19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1
00:16:21.958    19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:16:21.958    19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in
00:16:21.958    19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1
00:16:21.958    19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 ))
00:16:21.958    19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:16:21.958     19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1
00:16:21.958     19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1
00:16:21.958     19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:16:21.958     19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1
00:16:21.958    19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1
00:16:21.958     19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2
00:16:21.958     19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2
00:16:21.958     19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:16:21.958     19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2
00:16:21.958    19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2
00:16:21.958    19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:16:21.958    19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:16:21.958    19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0
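The block above traces scripts/common.sh comparing the installed lcov version against 2 (lt 1.15 2): both version strings are split on '.', '-' and ':' and compared field by field. A condensed sketch of that comparison, reconstructed from the trace (the helper name is illustrative, not the script's):

    # Condensed sketch of the version comparison: split both versions into fields
    # and compare them positionally; missing fields count as 0.
    cmp_lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1
    }
    # cmp_lt 1.15 2 succeeds (1.15 is less than 2); the LCOV_OPTS export is traced just below.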
00:16:21.958    19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:16:21.958    19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:16:21.958  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:21.958  		--rc genhtml_branch_coverage=1
00:16:21.958  		--rc genhtml_function_coverage=1
00:16:21.958  		--rc genhtml_legend=1
00:16:21.958  		--rc geninfo_all_blocks=1
00:16:21.958  		--rc geninfo_unexecuted_blocks=1
00:16:21.958  		
00:16:21.958  		'
00:16:21.958    19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:16:21.958  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:21.958  		--rc genhtml_branch_coverage=1
00:16:21.958  		--rc genhtml_function_coverage=1
00:16:21.958  		--rc genhtml_legend=1
00:16:21.958  		--rc geninfo_all_blocks=1
00:16:21.958  		--rc geninfo_unexecuted_blocks=1
00:16:21.958  		
00:16:21.958  		'
00:16:21.958    19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:16:21.958  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:21.958  		--rc genhtml_branch_coverage=1
00:16:21.958  		--rc genhtml_function_coverage=1
00:16:21.958  		--rc genhtml_legend=1
00:16:21.958  		--rc geninfo_all_blocks=1
00:16:21.958  		--rc geninfo_unexecuted_blocks=1
00:16:21.958  		
00:16:21.958  		'
00:16:21.958    19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:16:21.958  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:21.958  		--rc genhtml_branch_coverage=1
00:16:21.958  		--rc genhtml_function_coverage=1
00:16:21.958  		--rc genhtml_legend=1
00:16:21.958  		--rc geninfo_all_blocks=1
00:16:21.958  		--rc geninfo_unexecuted_blocks=1
00:16:21.958  		
00:16:21.958  		'
00:16:21.958   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:16:21.958     19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s
00:16:21.958    19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:16:21.958    19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:16:21.958    19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:16:21.958    19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:16:21.958    19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:16:21.958    19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:16:21.958    19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:16:21.958    19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:16:21.958    19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:16:21.958     19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:16:21.958    19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:16:21.958    19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:16:21.958    19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:16:21.958    19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:16:21.958    19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:16:21.958    19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:16:21.958    19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:16:21.958     19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob
00:16:21.958     19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:16:21.958     19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:16:21.958     19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:16:21.958      19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:21.958      19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:21.958      19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:21.958      19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH
00:16:21.958      19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
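The long run of repeated /opt/golangci, /opt/protoc and /opt/go entries above is just paths/export.sh prepending its three directories every time the file is sourced during the run; nothing is broken, PATH simply accumulates duplicates. If the noise mattered, a dedup pass could be added (the awk approach here is an illustration, not part of the scripts):

    # keep only the first occurrence of each PATH component
    PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//')
    export PATH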
00:16:21.958    19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0
00:16:21.958    19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:16:21.958    19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:16:21.958    19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:16:21.958    19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:16:21.958    19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:16:21.958    19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:16:21.959  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:16:21.959    19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:16:21.959    19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:16:21.959    19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0
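The "integer expression expected" message above comes from nvmf/common.sh line 33, where build_nvmf_app_args runs '[' '' -eq 1 ']' on a variable that expanded to an empty string; the test keeps going because the failed comparison simply evaluates false. A minimal guard would default the flag first. The variable name below (SPDK_RUN_NON_ROOT) is only a placeholder, since the log does not show which variable was empty:

    # default the flag to 0 so the numeric test never sees an empty string
    : "${SPDK_RUN_NON_ROOT:=0}"
    if [ "$SPDK_RUN_NON_ROOT" -eq 1 ]; then
        NVMF_APP=(sudo -E "${NVMF_APP[@]}")   # hypothetical branch body, for illustration only
    fi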
00:16:21.959   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py
00:16:21.959   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:16:21.959   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode
00:16:21.959   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar
00:16:21.959   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0
00:16:21.959   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit
00:16:21.959   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:16:21.959   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:16:21.959   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs
00:16:21.959   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no
00:16:21.959   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns
00:16:21.959   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:21.959   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:16:21.959    19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:21.959   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:16:21.959   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:16:21.959   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:16:21.959   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:16:21.959   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:16:21.959   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@460 -- # nvmf_veth_init
00:16:21.959   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:16:21.959   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:16:21.959   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:16:21.959   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:16:21.959   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:16:21.959   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:16:21.959   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:16:21.959   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:16:21.959   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:16:21.959   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:16:21.959   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:16:21.959   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:16:21.959   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:16:21.959   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:16:21.959   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:16:21.959   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:16:21.959   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:16:21.959  Cannot find device "nvmf_init_br"
00:16:21.959   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@162 -- # true
00:16:21.959   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:16:21.959  Cannot find device "nvmf_init_br2"
00:16:21.959   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@163 -- # true
00:16:21.959   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:16:21.959  Cannot find device "nvmf_tgt_br"
00:16:21.959   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@164 -- # true
00:16:21.959   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:16:21.959  Cannot find device "nvmf_tgt_br2"
00:16:21.959   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@165 -- # true
00:16:21.959   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:16:21.959  Cannot find device "nvmf_init_br"
00:16:21.959   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@166 -- # true
00:16:21.959   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:16:21.959  Cannot find device "nvmf_init_br2"
00:16:21.959   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@167 -- # true
00:16:21.959   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:16:21.959  Cannot find device "nvmf_tgt_br"
00:16:21.959   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@168 -- # true
00:16:21.959   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:16:21.959  Cannot find device "nvmf_tgt_br2"
00:16:21.959   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@169 -- # true
00:16:21.959   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:16:21.959  Cannot find device "nvmf_br"
00:16:21.959   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@170 -- # true
00:16:21.959   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:16:21.959  Cannot find device "nvmf_init_if"
00:16:21.959   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@171 -- # true
00:16:21.959   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:16:21.959  Cannot find device "nvmf_init_if2"
00:16:21.959   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@172 -- # true
00:16:21.959   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:16:21.959  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:16:21.959   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@173 -- # true
00:16:21.959   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:16:21.959  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:16:21.959   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@174 -- # true
00:16:21.959   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:16:22.219   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:16:22.219   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:16:22.219   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:16:22.219   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:16:22.219   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:16:22.219   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:16:22.219   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:16:22.219   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:16:22.219   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:16:22.219   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:16:22.219   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:16:22.219   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:16:22.219   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:16:22.219   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:16:22.219   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:16:22.219   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:16:22.219   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:16:22.219   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:16:22.219   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:16:22.219   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:16:22.219   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:16:22.219   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:16:22.219   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:16:22.219   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:16:22.219   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:16:22.219   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:16:22.219   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:16:22.219   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:16:22.219   19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:16:22.219   19:00:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:16:22.219   19:00:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:16:22.219   19:00:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:16:22.219  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:16:22.219  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms
00:16:22.219  
00:16:22.219  --- 10.0.0.3 ping statistics ---
00:16:22.219  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:22.219  rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms
00:16:22.219   19:00:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:16:22.219  PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:16:22.219  64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms
00:16:22.219  
00:16:22.219  --- 10.0.0.4 ping statistics ---
00:16:22.219  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:22.219  rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms
00:16:22.219   19:00:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:16:22.219  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:16:22.219  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms
00:16:22.219  
00:16:22.219  --- 10.0.0.1 ping statistics ---
00:16:22.219  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:22.219  rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms
00:16:22.219   19:00:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:16:22.219  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:16:22.219  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms
00:16:22.219  
00:16:22.219  --- 10.0.0.2 ping statistics ---
00:16:22.219  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:22.219  rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms
00:16:22.219   19:00:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:16:22.219   19:00:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@461 -- # return 0
00:16:22.219   19:00:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:16:22.219   19:00:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:16:22.219   19:00:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:16:22.219   19:00:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:16:22.219   19:00:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:16:22.219   19:00:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:16:22.219   19:00:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp
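At this point nvmf_veth_init has built the virtual test network: nvmf_init_if/nvmf_init_if2 keep 10.0.0.1/.2 in the root namespace, nvmf_tgt_if/nvmf_tgt_if2 carry 10.0.0.3/.4 inside nvmf_tgt_ns_spdk, all four bridge-side peers are enslaved to nvmf_br, iptables accepts TCP port 4420 on the initiator interfaces, and the four pings confirm reachability in both directions. A condensed single-pair sketch of the same pattern (interface names are illustrative, not the ones used above):

    ip netns add demo_ns
    ip link add demo_init type veth peer name demo_init_br    # initiator side stays in the root ns
    ip link add demo_tgt type veth peer name demo_tgt_br      # target side moves into the ns
    ip link set demo_tgt netns demo_ns
    ip addr add 10.0.0.1/24 dev demo_init
    ip netns exec demo_ns ip addr add 10.0.0.3/24 dev demo_tgt
    ip link add demo_br type bridge
    for dev in demo_init demo_init_br demo_tgt_br demo_br; do ip link set "$dev" up; done
    ip netns exec demo_ns ip link set demo_tgt up
    ip link set demo_init_br master demo_br
    ip link set demo_tgt_br master demo_br
    ping -c 1 10.0.0.3                                         # root ns -> namespaced target address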
00:16:22.478   19:00:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF
00:16:22.478   19:00:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:16:22.478   19:00:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable
00:16:22.478   19:00:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:16:22.478   19:00:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=92284
00:16:22.478   19:00:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:16:22.478   19:00:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 92284
00:16:22.478   19:00:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 92284 ']'
00:16:22.478   19:00:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:22.478   19:00:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100
00:16:22.478   19:00:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:22.478  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:22.478   19:00:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable
00:16:22.478   19:00:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:16:22.478  [2024-12-13 19:00:54.100710] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:16:22.478  [2024-12-13 19:00:54.100794] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:16:22.478  [2024-12-13 19:00:54.244617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:16:22.478  [2024-12-13 19:00:54.281198] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:16:22.478  [2024-12-13 19:00:54.281289] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:16:22.478  [2024-12-13 19:00:54.281300] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:16:22.478  [2024-12-13 19:00:54.281307] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:16:22.478  [2024-12-13 19:00:54.281314] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:16:22.478  [2024-12-13 19:00:54.282519] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:16:22.478  [2024-12-13 19:00:54.282583] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:16:22.478  [2024-12-13 19:00:54.282700] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:16:22.478  [2024-12-13 19:00:54.282702] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:16:22.737   19:00:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:16:22.737   19:00:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0
00:16:22.737   19:00:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:16:22.737   19:00:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable
00:16:22.737   19:00:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:16:22.737   19:00:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:16:22.737   19:00:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT
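nvmfappstart launched the target as 'ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF' (pid 92284), and waitforlisten blocks until the RPC socket at /var/tmp/spdk.sock answers. A rough sketch of that wait, only an illustration of the idea rather than the real waitforlisten from autotest_common.sh:

    # poll the RPC socket until the target responds (give up after ~10s)
    for ((i = 0; i < 100; i++)); do
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
            break
        fi
        sleep 0.1
    done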
00:16:22.737    19:00:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode31261
00:16:22.996  [2024-12-13 19:00:54.738768] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar
00:16:22.996   19:00:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='2024/12/13 19:00:54 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode31261 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar
00:16:22.996  request:
00:16:22.996  {
00:16:22.996    "method": "nvmf_create_subsystem",
00:16:22.996    "params": {
00:16:22.996      "nqn": "nqn.2016-06.io.spdk:cnode31261",
00:16:22.996      "tgt_name": "foobar"
00:16:22.996    }
00:16:22.996  }
00:16:22.996  Got JSON-RPC error response
00:16:22.996  GoRPCClient: error on JSON-RPC call'
00:16:22.996   19:00:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ 2024/12/13 19:00:54 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode31261 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar
00:16:22.996  request:
00:16:22.996  {
00:16:22.996    "method": "nvmf_create_subsystem",
00:16:22.996    "params": {
00:16:22.996      "nqn": "nqn.2016-06.io.spdk:cnode31261",
00:16:22.996      "tgt_name": "foobar"
00:16:22.996    }
00:16:22.996  }
00:16:22.996  Got JSON-RPC error response
00:16:22.996  GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]]
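The rpc.py call above is a thin client: it posts a JSON-RPC 2.0 request to /var/tmp/spdk.sock, and the Code=-32603 in the reply is the standard JSON-RPC "internal error" code (the later cases return -32602, "invalid params"). Roughly what goes over the socket, sent here with socat purely for illustration (socat is not part of the test):

    echo '{"jsonrpc": "2.0", "id": 1, "method": "nvmf_create_subsystem",
           "params": {"nqn": "nqn.2016-06.io.spdk:cnode31261", "tgt_name": "foobar"}}' \
        | socat - UNIX-CONNECT:/var/tmp/spdk.sock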
00:16:22.996     19:00:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f'
00:16:22.996    19:00:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode29184
00:16:23.255  [2024-12-13 19:00:55.051029] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29184: invalid serial number 'SPDKISFASTANDAWESOME'
00:16:23.255   19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='2024/12/13 19:00:55 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode29184 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME
00:16:23.255  request:
00:16:23.255  {
00:16:23.255    "method": "nvmf_create_subsystem",
00:16:23.255    "params": {
00:16:23.255      "nqn": "nqn.2016-06.io.spdk:cnode29184",
00:16:23.255      "serial_number": "SPDKISFASTANDAWESOME\u001f"
00:16:23.255    }
00:16:23.255  }
00:16:23.255  Got JSON-RPC error response
00:16:23.255  GoRPCClient: error on JSON-RPC call'
00:16:23.255   19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ 2024/12/13 19:00:55 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode29184 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME
00:16:23.255  request:
00:16:23.255  {
00:16:23.255    "method": "nvmf_create_subsystem",
00:16:23.255    "params": {
00:16:23.255      "nqn": "nqn.2016-06.io.spdk:cnode29184",
00:16:23.255      "serial_number": "SPDKISFASTANDAWESOME\u001f"
00:16:23.255    }
00:16:23.255  }
00:16:23.255  Got JSON-RPC error response
00:16:23.255  GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]]
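The \u001f shown in the echoed params is the 0x1F unit-separator byte appended at invalid.sh line 45; it falls outside the printable-ASCII range 0x20-0x7E and also pushes the string to 21 characters, one past the 20-byte NVMe serial-number field, so subsystem creation is rejected with "Invalid SN". Both properties are easy to confirm in the shell:

    # rebuild the offending string and show why it cannot be a valid serial number
    sn=$'SPDKISFASTANDAWESOME\x1f'
    echo "${#sn}"                                             # 21: one over the 20-byte SN field
    printf '%s' "$sn" | LC_ALL=C grep -q '[^ -~]' && echo "contains non-printable ASCII"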
00:16:23.513     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f'
00:16:23.513    19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode13822
00:16:23.771  [2024-12-13 19:00:55.363286] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13822: invalid model number 'SPDK_Controller'
00:16:23.771   19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='2024/12/13 19:00:55 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode13822], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller
00:16:23.771  request:
00:16:23.771  {
00:16:23.771    "method": "nvmf_create_subsystem",
00:16:23.771    "params": {
00:16:23.771      "nqn": "nqn.2016-06.io.spdk:cnode13822",
00:16:23.771      "model_number": "SPDK_Controller\u001f"
00:16:23.771    }
00:16:23.771  }
00:16:23.771  Got JSON-RPC error response
00:16:23.771  GoRPCClient: error on JSON-RPC call'
00:16:23.771   19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ 2024/12/13 19:00:55 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode13822], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller
00:16:23.771  request:
00:16:23.771  {
00:16:23.771    "method": "nvmf_create_subsystem",
00:16:23.771    "params": {
00:16:23.771      "nqn": "nqn.2016-06.io.spdk:cnode13822",
00:16:23.771      "model_number": "SPDK_Controller\u001f"
00:16:23.771    }
00:16:23.771  }
00:16:23.771  Got JSON-RPC error response
00:16:23.771  GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]]
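The backslash-riddled patterns in the [[ ]] checks above (e.g. *\I\n\v\a\l\i\d\ \M\N*) are only bash xtrace re-quoting every literal character; the assertion in invalid.sh is just a substring match on the captured error text, along the lines of:

    # unmangled form of the traced assertion
    [[ $out == *"Invalid MN"* ]] || { echo "unexpected error text" >&2; exit 1; }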
00:16:23.771     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21
00:16:23.771     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll
00:16:23.771     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127')
00:16:23.771     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars
00:16:23.771     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string
00:16:23.771     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 ))
00:16:23.771     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:23.771       19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112
00:16:23.771      19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70'
00:16:23.771     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p
00:16:23.771     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:23.771     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:23.771       19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105
00:16:23.771      19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69'
00:16:23.771     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i
00:16:23.771     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:23.771     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:23.771       19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77
00:16:23.771      19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d'
00:16:23.771     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M
00:16:23.771     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:23.771     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:23.771       19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73
00:16:23.771      19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49'
00:16:23.771     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I
00:16:23.771     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:23.771     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:23.771       19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83
00:16:23.772      19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53'
00:16:23.772     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S
00:16:23.772     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:23.772     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:23.772       19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90
00:16:23.772      19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a'
00:16:23.772     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z
00:16:23.772     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:23.772     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:23.772       19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89
00:16:23.772      19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59'
00:16:23.772     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y
00:16:23.772     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:23.772     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:23.772       19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82
00:16:23.772      19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52'
00:16:23.772     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R
00:16:23.772     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:23.772     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:23.772       19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58
00:16:23.772      19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a'
00:16:23.772     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=:
00:16:23.772     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:23.772     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:23.772       19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38
00:16:23.772      19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26'
00:16:23.772     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&'
00:16:23.772     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:23.772     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:23.772       19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86
00:16:23.772      19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56'
00:16:23.772     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V
00:16:23.772     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:23.772     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:23.772       19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70
00:16:23.772      19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46'
00:16:23.772     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F
00:16:23.772     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:23.772     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:23.772       19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74
00:16:23.772      19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a'
00:16:23.772     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J
00:16:23.772     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:23.772     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:23.772       19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73
00:16:23.772      19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49'
00:16:23.772     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I
00:16:23.772     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:23.772     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:23.772       19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48
00:16:23.772      19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30'
00:16:23.772     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0
00:16:23.772     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:23.772     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:23.772       19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79
00:16:23.772      19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f'
00:16:23.772     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O
00:16:23.772     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:23.772     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:23.772       19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105
00:16:23.772      19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69'
00:16:23.772     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i
00:16:23.772     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:23.772     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:23.772       19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49
00:16:23.772      19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31'
00:16:23.772     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1
00:16:23.772     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:23.772     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:23.772       19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70
00:16:23.772      19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46'
00:16:23.772     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F
00:16:23.772     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:23.772     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:23.772       19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55
00:16:23.772      19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37'
00:16:23.772     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7
00:16:23.772     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:23.772     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:23.772       19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53
00:16:23.772      19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35'
00:16:23.772     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5
00:16:23.772     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:23.772     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:23.772     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ p == \- ]]
00:16:23.772     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'piMISZYR:&VFJI0Oi1F75'
00:16:23.772    19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s 'piMISZYR:&VFJI0Oi1F75' nqn.2016-06.io.spdk:cnode12138
00:16:24.031  [2024-12-13 19:00:55.755561] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12138: invalid serial number 'piMISZYR:&VFJI0Oi1F75'
00:16:24.031   19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='2024/12/13 19:00:55 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode12138 serial_number:piMISZYR:&VFJI0Oi1F75], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN piMISZYR:&VFJI0Oi1F75
00:16:24.031  request:
00:16:24.031  {
00:16:24.031    "method": "nvmf_create_subsystem",
00:16:24.031    "params": {
00:16:24.031      "nqn": "nqn.2016-06.io.spdk:cnode12138",
00:16:24.031      "serial_number": "piMISZYR:&VFJI0Oi1F75"
00:16:24.031    }
00:16:24.031  }
00:16:24.031  Got JSON-RPC error response
00:16:24.031  GoRPCClient: error on JSON-RPC call'
00:16:24.031   19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ 2024/12/13 19:00:55 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode12138 serial_number:piMISZYR:&VFJI0Oi1F75], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN piMISZYR:&VFJI0Oi1F75
00:16:24.031  request:
00:16:24.031  {
00:16:24.031    "method": "nvmf_create_subsystem",
00:16:24.031    "params": {
00:16:24.031      "nqn": "nqn.2016-06.io.spdk:cnode12138",
00:16:24.031      "serial_number": "piMISZYR:&VFJI0Oi1F75"
00:16:24.031    }
00:16:24.031  }
00:16:24.031  Got JSON-RPC error response
00:16:24.031  GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]]
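gen_random_s builds its string one character at a time from the chars array of code points 32-127 using bash's RANDOM, which invalid.sh seeds with RANDOM=0 at line 16, so every run produces the same "random" serial and model numbers. A compact sketch of the same idea (not the exact gen_random_s source):

    gen_s_sketch() {
        local length=$1 ll c string=''
        local chars=($(seq 32 127))                   # same code points as the chars=() array above
        for ((ll = 0; ll < length; ll++)); do
            printf -v c '\\x%x' "${chars[RANDOM % ${#chars[@]}]}"
            string+=$(echo -e "$c")                   # hex escape -> character, as in invalid.sh@25
        done
        printf '%s\n' "$string"
    }
    RANDOM=0          # invalid.sh@16: seeding makes the generated strings reproducible
    gen_s_sketch 41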
00:16:24.031     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41
00:16:24.031     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll
00:16:24.031     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127')
00:16:24.031     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars
00:16:24.031     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string
00:16:24.031     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 ))
00:16:24.031     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:24.031       19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78
00:16:24.031      19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e'
00:16:24.031     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N
00:16:24.031     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:24.031     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:24.031       19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115
00:16:24.031      19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73'
00:16:24.031     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s
00:16:24.031     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:24.031     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:24.031       19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53
00:16:24.031      19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35'
00:16:24.031     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5
00:16:24.031     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:24.031     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:24.031       19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110
00:16:24.031      19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e'
00:16:24.031     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n
00:16:24.031     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:24.031     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:24.031       19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68
00:16:24.031      19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44'
00:16:24.031     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D
00:16:24.031     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:24.031     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:24.031       19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106
00:16:24.031      19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a'
00:16:24.031     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j
00:16:24.031     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:24.031     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:24.031       19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33
00:16:24.031      19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21'
00:16:24.031     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!'
00:16:24.031     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:24.031     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:24.031       19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61
00:16:24.031      19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d'
00:16:24.031     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+==
00:16:24.031     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:24.031     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:24.031       19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59
00:16:24.031      19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b'
00:16:24.031     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';'
00:16:24.031     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:24.031     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:24.031       19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106
00:16:24.031      19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a'
00:16:24.031     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j
00:16:24.031     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:24.031     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:24.031       19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124
00:16:24.031      19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c'
00:16:24.031     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|'
00:16:24.031     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:24.031     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:24.031       19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73
00:16:24.031      19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49'
00:16:24.031     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I
00:16:24.031     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:24.031     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:24.031       19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48
00:16:24.031      19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30'
00:16:24.031     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0
00:16:24.031     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:24.031     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:24.031       19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84
00:16:24.031      19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54'
00:16:24.031     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T
00:16:24.031     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:24.031     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:24.031       19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108
00:16:24.290      19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c'
00:16:24.290     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l
00:16:24.290     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:24.290     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:24.290       19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108
00:16:24.290      19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c'
00:16:24.290     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l
00:16:24.290     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:24.290     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:24.290       19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52
00:16:24.290      19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34'
00:16:24.290     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4
00:16:24.290     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:24.290     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:24.290       19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112
00:16:24.290      19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70'
00:16:24.290     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p
00:16:24.290     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:24.290     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:24.290       19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35
00:16:24.290      19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23'
00:16:24.290     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#'
00:16:24.290     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:24.290     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:24.290       19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86
00:16:24.290      19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56'
00:16:24.290     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V
00:16:24.290     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:24.290     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:24.290       19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57
00:16:24.290      19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39'
00:16:24.290     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9
00:16:24.291     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:24.291     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:24.291       19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99
00:16:24.291      19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63'
00:16:24.291     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c
00:16:24.291     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:24.291     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:24.291       19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65
00:16:24.291      19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41'
00:16:24.291     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A
00:16:24.291     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:24.291     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:24.291       19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42
00:16:24.291      19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a'
00:16:24.291     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*'
00:16:24.291     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:24.291     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:24.291       19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119
00:16:24.291      19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77'
00:16:24.291     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w
00:16:24.291     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:24.291     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:24.291       19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122
00:16:24.291      19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a'
00:16:24.291     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z
00:16:24.291     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:24.291     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:24.291       19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100
00:16:24.291      19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64'
00:16:24.291     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d
00:16:24.291     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:24.291     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:24.291       19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48
00:16:24.291      19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30'
00:16:24.291     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0
00:16:24.291     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:24.291     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:24.291       19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79
00:16:24.291      19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f'
00:16:24.291     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O
00:16:24.291     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:24.291     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:24.291       19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65
00:16:24.291      19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41'
00:16:24.291     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A
00:16:24.291     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:24.291     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:24.291       19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45
00:16:24.291      19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d'
00:16:24.291     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=-
00:16:24.291     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:24.291     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:24.291       19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86
00:16:24.291      19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56'
00:16:24.291     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V
00:16:24.291     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:24.291     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:24.291       19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81
00:16:24.291      19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51'
00:16:24.291     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q
00:16:24.291     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:24.291     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:24.291       19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94
00:16:24.291      19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e'
00:16:24.291     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^'
00:16:24.291     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:24.291     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:24.291       19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81
00:16:24.291      19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51'
00:16:24.291     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q
00:16:24.291     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:24.291     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:24.291       19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65
00:16:24.291      19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41'
00:16:24.291     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A
00:16:24.291     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:24.291     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:24.291       19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88
00:16:24.291      19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58'
00:16:24.291     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X
00:16:24.291     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:24.291     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:24.291       19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91
00:16:24.291      19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b'
00:16:24.291     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='['
00:16:24.291     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:24.291     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:24.291       19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74
00:16:24.291      19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a'
00:16:24.291     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J
00:16:24.291     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:24.291     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:24.291       19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48
00:16:24.291      19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30'
00:16:24.291     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0
00:16:24.291     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:24.291     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:24.291       19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36
00:16:24.291      19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24'
00:16:24.291     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$'
00:16:24.291     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:24.291     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:24.291     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ N == \- ]]
00:16:24.291     19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'Ns5nDj!=;j|I0Tll4p#V9cA*wzd0OA-VQ^QAX[J0$'
00:16:24.291    19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d 'Ns5nDj!=;j|I0Tll4p#V9cA*wzd0OA-VQ^QAX[J0$' nqn.2016-06.io.spdk:cnode13531
00:16:24.550  [2024-12-13 19:00:56.203937] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13531: invalid model number 'Ns5nDj!=;j|I0Tll4p#V9cA*wzd0OA-VQ^QAX[J0$'
00:16:24.550   19:00:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='2024/12/13 19:00:56 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:Ns5nDj!=;j|I0Tll4p#V9cA*wzd0OA-VQ^QAX[J0$ nqn:nqn.2016-06.io.spdk:cnode13531], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN Ns5nDj!=;j|I0Tll4p#V9cA*wzd0OA-VQ^QAX[J0$
00:16:24.550  request:
00:16:24.550  {
00:16:24.550    "method": "nvmf_create_subsystem",
00:16:24.550    "params": {
00:16:24.550      "nqn": "nqn.2016-06.io.spdk:cnode13531",
00:16:24.550      "model_number": "Ns5nDj!=;j|I0Tll4p#V9cA*wzd0OA-VQ^QAX[J0$"
00:16:24.550    }
00:16:24.550  }
00:16:24.550  Got JSON-RPC error response
00:16:24.550  GoRPCClient: error on JSON-RPC call'
00:16:24.550   19:00:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ 2024/12/13 19:00:56 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:Ns5nDj!=;j|I0Tll4p#V9cA*wzd0OA-VQ^QAX[J0$ nqn:nqn.2016-06.io.spdk:cnode13531], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN Ns5nDj!=;j|I0Tll4p#V9cA*wzd0OA-VQ^QAX[J0$
00:16:24.550  request:
00:16:24.550  {
00:16:24.550    "method": "nvmf_create_subsystem",
00:16:24.550    "params": {
00:16:24.550      "nqn": "nqn.2016-06.io.spdk:cnode13531",
00:16:24.550      "model_number": "Ns5nDj!=;j|I0Tll4p#V9cA*wzd0OA-VQ^QAX[J0$"
00:16:24.550    }
00:16:24.550  }
00:16:24.550  Got JSON-RPC error response
00:16:24.550  GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]]
00:16:24.550   19:00:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp
00:16:24.809  [2024-12-13 19:00:56.520283] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:16:24.809   19:00:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a
00:16:25.068   19:00:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]]
00:16:25.068    19:00:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo ''
00:16:25.068    19:00:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1
00:16:25.068   19:00:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP=
00:16:25.068    19:00:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421
00:16:25.635  [2024-12-13 19:00:57.152803] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2
00:16:25.635   19:00:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='2024/12/13 19:00:57 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters
00:16:25.635  request:
00:16:25.635  {
00:16:25.635    "method": "nvmf_subsystem_remove_listener",
00:16:25.635    "params": {
00:16:25.635      "nqn": "nqn.2016-06.io.spdk:cnode",
00:16:25.635      "listen_address": {
00:16:25.635        "trtype": "tcp",
00:16:25.635        "traddr": "",
00:16:25.635        "trsvcid": "4421"
00:16:25.635      }
00:16:25.635    }
00:16:25.635  }
00:16:25.635  Got JSON-RPC error response
00:16:25.635  GoRPCClient: error on JSON-RPC call'
00:16:25.635   19:00:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ 2024/12/13 19:00:57 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters
00:16:25.635  request:
00:16:25.635  {
00:16:25.635    "method": "nvmf_subsystem_remove_listener",
00:16:25.635    "params": {
00:16:25.635      "nqn": "nqn.2016-06.io.spdk:cnode",
00:16:25.635      "listen_address": {
00:16:25.635        "trtype": "tcp",
00:16:25.635        "traddr": "",
00:16:25.635        "trsvcid": "4421"
00:16:25.635      }
00:16:25.635    }
00:16:25.635  }
00:16:25.635  Got JSON-RPC error response
00:16:25.635  GoRPCClient: error on JSON-RPC call != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]]
00:16:25.635    19:00:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14401 -i 0
00:16:25.635  [2024-12-13 19:00:57.436990] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14401: invalid cntlid range [0-65519]
00:16:25.635   19:00:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='2024/12/13 19:00:57 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode14401], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519]
00:16:25.635  request:
00:16:25.635  {
00:16:25.635    "method": "nvmf_create_subsystem",
00:16:25.635    "params": {
00:16:25.635      "nqn": "nqn.2016-06.io.spdk:cnode14401",
00:16:25.635      "min_cntlid": 0
00:16:25.635    }
00:16:25.635  }
00:16:25.635  Got JSON-RPC error response
00:16:25.635  GoRPCClient: error on JSON-RPC call'
00:16:25.894   19:00:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ 2024/12/13 19:00:57 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode14401], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519]
00:16:25.894  request:
00:16:25.894  {
00:16:25.894    "method": "nvmf_create_subsystem",
00:16:25.894    "params": {
00:16:25.894      "nqn": "nqn.2016-06.io.spdk:cnode14401",
00:16:25.894      "min_cntlid": 0
00:16:25.894    }
00:16:25.894  }
00:16:25.894  Got JSON-RPC error response
00:16:25.894  GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:16:25.894    19:00:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31659 -i 65520
00:16:25.894  [2024-12-13 19:00:57.681208] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31659: invalid cntlid range [65520-65519]
00:16:25.894   19:00:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='2024/12/13 19:00:57 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode31659], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519]
00:16:25.894  request:
00:16:25.894  {
00:16:25.894    "method": "nvmf_create_subsystem",
00:16:25.894    "params": {
00:16:25.894      "nqn": "nqn.2016-06.io.spdk:cnode31659",
00:16:25.894      "min_cntlid": 65520
00:16:25.894    }
00:16:25.894  }
00:16:25.894  Got JSON-RPC error response
00:16:25.894  GoRPCClient: error on JSON-RPC call'
00:16:25.894   19:00:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ 2024/12/13 19:00:57 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode31659], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519]
00:16:25.894  request:
00:16:25.894  {
00:16:25.894    "method": "nvmf_create_subsystem",
00:16:25.894    "params": {
00:16:25.894      "nqn": "nqn.2016-06.io.spdk:cnode31659",
00:16:25.895      "min_cntlid": 65520
00:16:25.895    }
00:16:25.895  }
00:16:25.895  Got JSON-RPC error response
00:16:25.895  GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:16:25.895    19:00:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16848 -I 0
00:16:26.462  [2024-12-13 19:00:57.993420] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16848: invalid cntlid range [1-0]
00:16:26.462   19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='2024/12/13 19:00:57 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode16848], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0]
00:16:26.462  request:
00:16:26.462  {
00:16:26.462    "method": "nvmf_create_subsystem",
00:16:26.462    "params": {
00:16:26.462      "nqn": "nqn.2016-06.io.spdk:cnode16848",
00:16:26.462      "max_cntlid": 0
00:16:26.462    }
00:16:26.462  }
00:16:26.462  Got JSON-RPC error response
00:16:26.462  GoRPCClient: error on JSON-RPC call'
00:16:26.462   19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ 2024/12/13 19:00:57 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode16848], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0]
00:16:26.462  request:
00:16:26.462  {
00:16:26.462    "method": "nvmf_create_subsystem",
00:16:26.462    "params": {
00:16:26.462      "nqn": "nqn.2016-06.io.spdk:cnode16848",
00:16:26.462      "max_cntlid": 0
00:16:26.462    }
00:16:26.462  }
00:16:26.462  Got JSON-RPC error response
00:16:26.462  GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:16:26.462    19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30323 -I 65520
00:16:26.462  [2024-12-13 19:00:58.225658] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30323: invalid cntlid range [1-65520]
00:16:26.462   19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='2024/12/13 19:00:58 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode30323], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520]
00:16:26.462  request:
00:16:26.462  {
00:16:26.462    "method": "nvmf_create_subsystem",
00:16:26.462    "params": {
00:16:26.462      "nqn": "nqn.2016-06.io.spdk:cnode30323",
00:16:26.462      "max_cntlid": 65520
00:16:26.462    }
00:16:26.462  }
00:16:26.462  Got JSON-RPC error response
00:16:26.462  GoRPCClient: error on JSON-RPC call'
00:16:26.462   19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ 2024/12/13 19:00:58 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode30323], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520]
00:16:26.462  request:
00:16:26.462  {
00:16:26.462    "method": "nvmf_create_subsystem",
00:16:26.462    "params": {
00:16:26.462      "nqn": "nqn.2016-06.io.spdk:cnode30323",
00:16:26.462      "max_cntlid": 65520
00:16:26.462    }
00:16:26.462  }
00:16:26.462  Got JSON-RPC error response
00:16:26.462  GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:16:26.462    19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11862 -i 6 -I 5
00:16:26.721  [2024-12-13 19:00:58.517857] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11862: invalid cntlid range [6-5]
00:16:26.721   19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='2024/12/13 19:00:58 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode11862], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5]
00:16:26.721  request:
00:16:26.721  {
00:16:26.721    "method": "nvmf_create_subsystem",
00:16:26.721    "params": {
00:16:26.721      "nqn": "nqn.2016-06.io.spdk:cnode11862",
00:16:26.721      "min_cntlid": 6,
00:16:26.721      "max_cntlid": 5
00:16:26.721    }
00:16:26.721  }
00:16:26.721  Got JSON-RPC error response
00:16:26.721  GoRPCClient: error on JSON-RPC call'
00:16:26.721   19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ 2024/12/13 19:00:58 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode11862], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5]
00:16:26.721  request:
00:16:26.721  {
00:16:26.721    "method": "nvmf_create_subsystem",
00:16:26.721    "params": {
00:16:26.721      "nqn": "nqn.2016-06.io.spdk:cnode11862",
00:16:26.721      "min_cntlid": 6,
00:16:26.721      "max_cntlid": 5
00:16:26.721    }
00:16:26.721  }
00:16:26.721  Got JSON-RPC error response
00:16:26.721  GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:16:26.721    19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar
00:16:26.980   19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request:
00:16:26.980  {
00:16:26.980    "name": "foobar",
00:16:26.980    "method": "nvmf_delete_target",
00:16:26.981    "req_id": 1
00:16:26.981  }
00:16:26.981  Got JSON-RPC error response
00:16:26.981  response:
00:16:26.981  {
00:16:26.981    "code": -32602,
00:16:26.981    "message": "The specified target doesn'\''t exist, cannot delete it."
00:16:26.981  }'
00:16:26.981   19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request:
00:16:26.981  {
00:16:26.981    "name": "foobar",
00:16:26.981    "method": "nvmf_delete_target",
00:16:26.981    "req_id": 1
00:16:26.981  }
00:16:26.981  Got JSON-RPC error response
00:16:26.981  response:
00:16:26.981  {
00:16:26.981    "code": -32602,
00:16:26.981    "message": "The specified target doesn't exist, cannot delete it."
00:16:26.981  } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]]
00:16:26.981   19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT
00:16:26.981   19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini
00:16:26.981   19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup
00:16:26.981   19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync
00:16:26.981   19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:16:26.981   19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e
00:16:26.981   19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20}
00:16:26.981   19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:16:26.981  rmmod nvme_tcp
00:16:26.981  rmmod nvme_fabrics
00:16:26.981  rmmod nvme_keyring
00:16:26.981   19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:16:26.981   19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e
00:16:26.981   19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0
00:16:26.981   19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 92284 ']'
00:16:26.981   19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 92284
00:16:26.981   19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 92284 ']'
00:16:26.981   19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 92284
00:16:26.981    19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname
00:16:26.981   19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:16:26.981    19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92284
00:16:26.981   19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:16:26.981   19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:16:26.981  killing process with pid 92284
00:16:26.981   19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92284'
00:16:26.981   19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 92284
00:16:26.981   19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 92284
00:16:27.240   19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:16:27.240   19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:16:27.240   19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:16:27.240   19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr
00:16:27.240   19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save
00:16:27.240   19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:16:27.240   19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore
00:16:27.240   19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:16:27.240   19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:16:27.240   19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:16:27.240   19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:16:27.240   19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:16:27.240   19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:16:27.240   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:16:27.240   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:16:27.240   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:16:27.240   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:16:27.240   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:16:27.499   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:16:27.499   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:16:27.499   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:16:27.499   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:16:27.499   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@246 -- # remove_spdk_ns
00:16:27.499   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:27.499   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:16:27.499    19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:27.499   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@300 -- # return 0
00:16:27.499  
00:16:27.499  real	0m5.767s
00:16:27.499  user	0m22.209s
00:16:27.499  sys	0m1.353s
00:16:27.499  ************************************
00:16:27.499  END TEST nvmf_invalid
00:16:27.499  ************************************
00:16:27.499   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:27.499   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:16:27.499   19:00:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:16:27.499   19:00:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:16:27.499   19:00:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:27.499   19:00:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:16:27.499  ************************************
00:16:27.499  START TEST nvmf_connect_stress
00:16:27.499  ************************************
00:16:27.499   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:16:27.499  * Looking for test storage...
00:16:27.499  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:16:27.499    19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:16:27.499     19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version
00:16:27.499     19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:16:27.759    19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:16:27.759    19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:16:27.759    19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l
00:16:27.759    19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l
00:16:27.759    19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-:
00:16:27.759    19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1
00:16:27.759    19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-:
00:16:27.759    19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2
00:16:27.759    19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<'
00:16:27.759    19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2
00:16:27.759    19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1
00:16:27.759    19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:16:27.759    19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in
00:16:27.759    19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1
00:16:27.759    19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 ))
00:16:27.759    19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:16:27.759     19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1
00:16:27.759     19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1
00:16:27.759     19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:16:27.759     19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1
00:16:27.759    19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1
00:16:27.759     19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2
00:16:27.759     19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2
00:16:27.759     19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:16:27.759     19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2
00:16:27.759    19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2
00:16:27.759    19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:16:27.759    19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:16:27.759    19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0
00:16:27.759    19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:16:27.759    19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:16:27.759  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:27.759  		--rc genhtml_branch_coverage=1
00:16:27.759  		--rc genhtml_function_coverage=1
00:16:27.759  		--rc genhtml_legend=1
00:16:27.759  		--rc geninfo_all_blocks=1
00:16:27.759  		--rc geninfo_unexecuted_blocks=1
00:16:27.759  		
00:16:27.759  		'
00:16:27.759    19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:16:27.759  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:27.759  		--rc genhtml_branch_coverage=1
00:16:27.759  		--rc genhtml_function_coverage=1
00:16:27.759  		--rc genhtml_legend=1
00:16:27.759  		--rc geninfo_all_blocks=1
00:16:27.759  		--rc geninfo_unexecuted_blocks=1
00:16:27.759  		
00:16:27.759  		'
00:16:27.759    19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:16:27.759  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:27.759  		--rc genhtml_branch_coverage=1
00:16:27.759  		--rc genhtml_function_coverage=1
00:16:27.759  		--rc genhtml_legend=1
00:16:27.759  		--rc geninfo_all_blocks=1
00:16:27.759  		--rc geninfo_unexecuted_blocks=1
00:16:27.759  		
00:16:27.759  		'
00:16:27.759    19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:16:27.759  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:27.759  		--rc genhtml_branch_coverage=1
00:16:27.759  		--rc genhtml_function_coverage=1
00:16:27.759  		--rc genhtml_legend=1
00:16:27.759  		--rc geninfo_all_blocks=1
00:16:27.759  		--rc geninfo_unexecuted_blocks=1
00:16:27.759  		
00:16:27.759  		'
00:16:27.759   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:16:27.759     19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s
00:16:27.759    19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:16:27.759    19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:16:27.759    19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:16:27.759    19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:16:27.759    19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:16:27.759    19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:16:27.759    19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:16:27.759    19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:16:27.759    19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:16:27.759     19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:16:27.759    19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:16:27.759    19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:16:27.759    19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:16:27.759    19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:16:27.759    19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:16:27.759    19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:16:27.759    19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:16:27.759     19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob
00:16:27.759     19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:16:27.759     19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:16:27.759     19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:16:27.759      19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:27.759      19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:27.759      19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:27.759      19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH
00:16:27.759      19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:27.760    19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0
00:16:27.760    19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:16:27.760    19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:16:27.760    19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:16:27.760    19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:16:27.760    19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:16:27.760    19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:16:27.760  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:16:27.760    19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:16:27.760    19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:16:27.760    19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0
00:16:27.760   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit
00:16:27.760   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:16:27.760   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:16:27.760   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs
00:16:27.760   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no
00:16:27.760   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns
00:16:27.760   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:27.760   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:16:27.760    19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:27.760   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:16:27.760   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:16:27.760   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:16:27.760   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:16:27.760   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:16:27.760   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@460 -- # nvmf_veth_init
00:16:27.760   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:16:27.760   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:16:27.760   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:16:27.760   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:16:27.760   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:16:27.760   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:16:27.760   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:16:27.760   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:16:27.760   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:16:27.760   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:16:27.760   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:16:27.760   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:16:27.760   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:16:27.760   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:16:27.760   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:16:27.760   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:16:27.760   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:16:27.760  Cannot find device "nvmf_init_br"
00:16:27.760   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@162 -- # true
00:16:27.760   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:16:27.760  Cannot find device "nvmf_init_br2"
00:16:27.760   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@163 -- # true
00:16:27.760   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:16:27.760  Cannot find device "nvmf_tgt_br"
00:16:27.760   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@164 -- # true
00:16:27.760   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:16:27.760  Cannot find device "nvmf_tgt_br2"
00:16:27.760   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@165 -- # true
00:16:27.760   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:16:27.760  Cannot find device "nvmf_init_br"
00:16:27.760   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@166 -- # true
00:16:27.760   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:16:27.760  Cannot find device "nvmf_init_br2"
00:16:27.760   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@167 -- # true
00:16:27.760   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:16:27.760  Cannot find device "nvmf_tgt_br"
00:16:27.760   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@168 -- # true
00:16:27.760   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:16:27.760  Cannot find device "nvmf_tgt_br2"
00:16:27.760   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@169 -- # true
00:16:27.760   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:16:27.760  Cannot find device "nvmf_br"
00:16:27.760   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@170 -- # true
00:16:27.760   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:16:27.760  Cannot find device "nvmf_init_if"
00:16:27.760   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@171 -- # true
00:16:27.760   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:16:27.760  Cannot find device "nvmf_init_if2"
00:16:27.760   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@172 -- # true
00:16:27.760   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:16:27.760  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:16:27.760   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@173 -- # true
00:16:27.760   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:16:28.020  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:16:28.020   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@174 -- # true
00:16:28.020   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:16:28.020   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:16:28.020   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:16:28.020   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:16:28.020   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:16:28.020   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:16:28.020   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:16:28.020   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:16:28.020   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:16:28.020   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:16:28.020   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:16:28.020   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:16:28.020   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:16:28.020   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:16:28.020   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:16:28.020   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:16:28.020   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:16:28.020   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:16:28.020   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:16:28.020   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:16:28.020   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:16:28.020   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:16:28.020   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:16:28.020   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:16:28.020   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:16:28.020   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:16:28.020   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:16:28.020   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:16:28.020   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:16:28.020   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:16:28.020   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:16:28.020   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:16:28.020   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:16:28.020  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:16:28.020  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms
00:16:28.020  
00:16:28.020  --- 10.0.0.3 ping statistics ---
00:16:28.020  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:28.020  rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms
00:16:28.020   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:16:28.020  PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:16:28.020  64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.037 ms
00:16:28.020  
00:16:28.020  --- 10.0.0.4 ping statistics ---
00:16:28.020  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:28.020  rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms
00:16:28.020   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:16:28.020  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:16:28.020  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms
00:16:28.020  
00:16:28.020  --- 10.0.0.1 ping statistics ---
00:16:28.020  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:28.020  rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms
00:16:28.020   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:16:28.020  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:16:28.020  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms
00:16:28.020  
00:16:28.020  --- 10.0.0.2 ping statistics ---
00:16:28.020  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:28.020  rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms
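[editor note] The commands above complete the test topology: the initiator-side interfaces nvmf_init_if/nvmf_init_if2 (10.0.0.1/10.0.0.2) stay in the root namespace, the target-side interfaces nvmf_tgt_if/nvmf_tgt_if2 (10.0.0.3/10.0.0.4) live inside nvmf_tgt_ns_spdk, the bridge-side ends are enslaved to nvmf_br, and connectivity is then proven with single pings in both directions. A stand-alone sketch of the same idea follows; the interface, namespace and address names are taken from the log, but the veth creation happens before this excerpt, so that part is an assumption about how nvmf/common.sh pairs the devices:

    # Hypothetical minimal reconstruction of the veth/bridge topology used here.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator leg (assumed pairing)
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target leg (assumed pairing)
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ping -c 1 10.0.0.3                                          # root ns -> target ns
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1           # target ns -> root ns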
00:16:28.020   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:16:28.020   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@461 -- # return 0
00:16:28.020   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:16:28.020   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:16:28.020   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:16:28.020   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:16:28.020   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:16:28.020   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:16:28.020   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:16:28.279   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE
00:16:28.279   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:16:28.279   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable
00:16:28.279   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:28.279   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=92837
00:16:28.279   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 92837
00:16:28.279   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 92837 ']'
00:16:28.279   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:28.279   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100
00:16:28.279  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:28.279   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:28.279   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:16:28.279   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable
00:16:28.279   19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:28.279  [2024-12-13 19:00:59.923664] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:16:28.279  [2024-12-13 19:00:59.923768] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:16:28.279  [2024-12-13 19:01:00.073286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:16:28.538  [2024-12-13 19:01:00.108760] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:16:28.538  [2024-12-13 19:01:00.108810] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:16:28.538  [2024-12-13 19:01:00.108835] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:16:28.538  [2024-12-13 19:01:00.108842] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:16:28.538  [2024-12-13 19:01:00.108849] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:16:28.538  [2024-12-13 19:01:00.110030] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:16:28.538  [2024-12-13 19:01:00.110178] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:16:28.538  [2024-12-13 19:01:00.110181] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
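[editor note] nvmfappstart launches nvmf_tgt inside the namespace with core mask 0xE, which is why exactly three reactors come up on cores 1, 2 and 3 (the set bits of 0xE), and waitforlisten then blocks until the JSON-RPC socket at /var/tmp/spdk.sock responds. A rough sketch of that launch-and-wait pattern, using the binary path and socket location seen above; the polling loop, retry budget and use of rpc_get_methods as the probe are assumptions rather than the wrapper's exact implementation:

    # Assumed reconstruction of "start target, then wait for the RPC socket".
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    for _ in $(seq 1 100); do
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; then
            break                                  # target is up and answering RPC
        fi
        sleep 0.1
    done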
00:16:29.106   19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:16:29.106   19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0
00:16:29.106   19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:16:29.106   19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable
00:16:29.107   19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:29.107   19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:16:29.107   19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:16:29.107   19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:29.107   19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:29.365  [2024-12-13 19:01:00.937336] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:16:29.365   19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:29.365   19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:16:29.365   19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:29.365   19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:29.365   19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:29.365   19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:16:29.365   19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:29.365   19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:29.365  [2024-12-13 19:01:00.954699] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:16:29.365   19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:29.365   19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:16:29.365   19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:29.365   19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:29.365  NULL1
00:16:29.365   19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
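[editor note] The rpc_cmd calls above prepare the target before the stress client connects: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 (any host allowed, serial SPDK00000000000001, up to 10 namespaces), a listener on 10.0.0.3:4420, and a 1000 MB null bdev with 512-byte blocks named NULL1. The same sequence expressed directly with rpc.py, flags copied from the log; rpc_cmd in the test is a wrapper, so addressing the default /var/tmp/spdk.sock socket here is an assumption:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $RPC bdev_null_create NULL1 1000 512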
00:16:29.365   19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=92886
00:16:29.365   19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt
00:16:29.365   19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10
00:16:29.365   19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt
00:16:29.365    19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20
00:16:29.365   19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:16:29.365   19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:16:29.365   19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:16:29.365   19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:16:29.365   19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:16:29.365   19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:16:29.365   19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:16:29.365   19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:16:29.365   19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:16:29.365   19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:16:29.365   19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:16:29.365   19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:16:29.365   19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:16:29.365   19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:16:29.365   19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:16:29.365   19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:16:29.365   19:01:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:16:29.365   19:01:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:16:29.365   19:01:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:16:29.365   19:01:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:16:29.365   19:01:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:16:29.365   19:01:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:16:29.366   19:01:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:16:29.366   19:01:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:16:29.366   19:01:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:16:29.366   19:01:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:16:29.366   19:01:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:16:29.366   19:01:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:16:29.366   19:01:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:16:29.366   19:01:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:16:29.366   19:01:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:16:29.366   19:01:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:16:29.366   19:01:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:16:29.366   19:01:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:16:29.366   19:01:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:16:29.366   19:01:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:16:29.366   19:01:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:16:29.366   19:01:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:16:29.366   19:01:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:16:29.366   19:01:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:16:29.366   19:01:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 92886
00:16:29.366   19:01:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:29.366   19:01:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:29.366   19:01:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:29.625   19:01:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:29.625   19:01:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 92886
00:16:29.625   19:01:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:29.625   19:01:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:29.625   19:01:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:29.883   19:01:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:29.883   19:01:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 92886
00:16:29.883   19:01:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:29.883   19:01:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:29.883   19:01:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:30.452   19:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:30.452   19:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 92886
00:16:30.452   19:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:30.452   19:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:30.452   19:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:30.726   19:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:30.726   19:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 92886
00:16:30.726   19:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:30.726   19:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:30.726   19:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:31.000   19:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:31.000   19:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 92886
00:16:31.000   19:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:31.000   19:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:31.000   19:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:31.258   19:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:31.258   19:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 92886
00:16:31.258   19:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:31.258   19:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:31.258   19:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:31.516   19:01:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:31.516   19:01:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 92886
00:16:31.516   19:01:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:31.516   19:01:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:31.516   19:01:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:32.081   19:01:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:32.081   19:01:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 92886
00:16:32.081   19:01:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:32.081   19:01:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:32.081   19:01:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:32.339   19:01:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:32.340   19:01:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 92886
00:16:32.340   19:01:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:32.340   19:01:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:32.340   19:01:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:32.598   19:01:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:32.598   19:01:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 92886
00:16:32.598   19:01:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:32.598   19:01:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:32.598   19:01:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:32.856   19:01:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:32.856   19:01:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 92886
00:16:32.856   19:01:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:32.856   19:01:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:32.856   19:01:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:33.114   19:01:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:33.114   19:01:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 92886
00:16:33.114   19:01:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:33.114   19:01:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:33.114   19:01:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:33.681   19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:33.681   19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 92886
00:16:33.681   19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:33.681   19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:33.681   19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:33.939   19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:33.939   19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 92886
00:16:33.939   19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:33.939   19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:33.939   19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:34.197   19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:34.197   19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 92886
00:16:34.197   19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:34.197   19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:34.197   19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:34.455   19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:34.455   19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 92886
00:16:34.455   19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:34.455   19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:34.455   19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:35.021   19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:35.021   19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 92886
00:16:35.021   19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:35.021   19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:35.021   19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:35.279   19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:35.279   19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 92886
00:16:35.279   19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:35.279   19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:35.279   19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:35.538   19:01:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:35.538   19:01:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 92886
00:16:35.538   19:01:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:35.538   19:01:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:35.538   19:01:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:35.795   19:01:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:35.795   19:01:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 92886
00:16:35.795   19:01:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:35.795   19:01:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:35.795   19:01:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:36.052   19:01:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:36.052   19:01:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 92886
00:16:36.052   19:01:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:36.052   19:01:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:36.052   19:01:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:36.616   19:01:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:36.616   19:01:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 92886
00:16:36.616   19:01:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:36.616   19:01:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:36.616   19:01:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:36.874   19:01:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:36.874   19:01:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 92886
00:16:36.874   19:01:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:36.874   19:01:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:36.874   19:01:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:37.132   19:01:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:37.132   19:01:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 92886
00:16:37.132   19:01:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:37.132   19:01:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:37.132   19:01:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:37.390   19:01:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:37.390   19:01:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 92886
00:16:37.390   19:01:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:37.390   19:01:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:37.390   19:01:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:37.648   19:01:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:37.648   19:01:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 92886
00:16:37.648   19:01:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:37.648   19:01:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:37.648   19:01:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:38.214   19:01:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:38.214   19:01:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 92886
00:16:38.214   19:01:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:38.214   19:01:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:38.214   19:01:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:38.472   19:01:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:38.472   19:01:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 92886
00:16:38.472   19:01:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:38.472   19:01:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:38.472   19:01:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:38.730   19:01:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:38.730   19:01:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 92886
00:16:38.730   19:01:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:38.730   19:01:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:38.730   19:01:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:38.988   19:01:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:38.988   19:01:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 92886
00:16:38.988   19:01:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:38.988   19:01:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:38.988   19:01:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:39.554   19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:39.554   19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 92886
00:16:39.554   19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:39.554   19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:39.554   19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:39.554  Testing NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:16:39.812   19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:39.812   19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 92886
00:16:39.812  /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (92886) - No such process
00:16:39.812   19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 92886
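[editor note] The long run of near-identical blocks above is the test's supervision loop: kill -0 sends no signal and succeeds only while the connect_stress process (PID 92886) still exists, and on each pass rpc_cmd is invoked, presumably replaying the commands that the earlier cat loop collected into rpc.txt. When the 10-second stress run ends, kill -0 fails with "No such process", the loop exits, and the child is reaped with wait. A compact sketch of that pattern with hypothetical variable names; the pacing and the stdin redirection are assumptions:

    # Assumed reconstruction of the connect_stress supervision loop.
    while kill -0 "$PERF_PID" 2>/dev/null; do   # true only while the stress client is alive
        $RPC < "$rpcs"                          # exercise the target with the queued RPCs
        sleep 0.25                              # pacing is an assumption
    done
    wait "$PERF_PID"                            # collect the stress client's exit status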
00:16:39.812   19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt
00:16:39.812   19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:16:39.812   19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini
00:16:39.812   19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup
00:16:39.812   19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync
00:16:39.812   19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:16:39.812   19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e
00:16:39.812   19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:16:39.812   19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:16:39.812  rmmod nvme_tcp
00:16:39.812  rmmod nvme_fabrics
00:16:39.812  rmmod nvme_keyring
00:16:39.812   19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:16:39.812   19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e
00:16:39.812   19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0
00:16:39.812   19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 92837 ']'
00:16:39.812   19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 92837
00:16:39.812   19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 92837 ']'
00:16:39.812   19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 92837
00:16:39.812    19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname
00:16:39.812   19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:16:39.812    19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92837
00:16:39.812  killing process with pid 92837
00:16:39.812   19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:16:39.812   19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:16:39.812   19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92837'
00:16:39.812   19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 92837
00:16:39.812   19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 92837
00:16:40.070   19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:16:40.071   19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:16:40.071   19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:16:40.071   19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr
00:16:40.071   19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save
00:16:40.071   19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:16:40.071   19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore
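[editor note] Firewall teardown removes only what the test added: each rule inserted earlier carried an iptables comment beginning with SPDK_NVMF, so cleanup dumps the whole ruleset, drops the tagged lines with grep -v, and restores the rest. In isolation the pattern looks like this; both halves are taken directly from the commands in this log:

    # Add a rule with a recognizable tag...
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    # ...and later strip every tagged rule in one pass.
    iptables-save | grep -v SPDK_NVMF | iptables-restore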
00:16:40.071   19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:16:40.071   19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:16:40.071   19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:16:40.071   19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:16:40.071   19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:16:40.071   19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:16:40.071   19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:16:40.071   19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:16:40.071   19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:16:40.071   19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:16:40.071   19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:16:40.071   19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:16:40.071   19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:16:40.071   19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:16:40.329   19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:16:40.329   19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@246 -- # remove_spdk_ns
00:16:40.329   19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:40.329   19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:16:40.329    19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:40.329   19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@300 -- # return 0
00:16:40.329  ************************************
00:16:40.329  END TEST nvmf_connect_stress
00:16:40.329  ************************************
00:16:40.329  
00:16:40.329  real	0m12.708s
00:16:40.329  user	0m41.489s
00:16:40.329  sys	0m3.416s
00:16:40.329   19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:40.329   19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:40.329   19:01:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:16:40.329   19:01:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:16:40.329   19:01:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:40.329   19:01:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:16:40.329  ************************************
00:16:40.329  START TEST nvmf_fused_ordering
00:16:40.329  ************************************
00:16:40.329   19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:16:40.329  * Looking for test storage...
00:16:40.329  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:16:40.329    19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:16:40.329     19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version
00:16:40.329     19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:16:40.588    19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:16:40.588    19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:16:40.588    19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l
00:16:40.588    19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l
00:16:40.588    19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-:
00:16:40.588    19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1
00:16:40.588    19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-:
00:16:40.588    19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2
00:16:40.588    19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<'
00:16:40.588    19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2
00:16:40.588    19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1
00:16:40.588    19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:16:40.588    19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in
00:16:40.588    19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1
00:16:40.588    19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 ))
00:16:40.588    19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:16:40.588     19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1
00:16:40.588     19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1
00:16:40.588     19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:16:40.588     19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1
00:16:40.588    19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1
00:16:40.588     19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2
00:16:40.588     19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2
00:16:40.588     19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:16:40.588     19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2
00:16:40.588    19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2
00:16:40.588    19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:16:40.588    19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:16:40.588    19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0
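[editor note] The block above is scripts/common.sh deciding which lcov option set to use by asking whether the installed lcov version 1.15 is less than 2: both strings are split on '.', '-' and ':' into arrays and compared field by field, and the first differing field (1 vs 2) decides the result. A stripped-down sketch of the same comparison; the real helper also handles padding and the other operators, so this is illustrative only:

    # Minimal field-wise version comparison in the spirit of cmp_versions "1.15" "<" "2".
    lt() {
        local IFS=.-: a b i
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first smaller field: strictly less
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1   # first larger field: not less
        done
        return 1                                        # all fields equal: not strictly less
    }
    lt 1.15 2 && echo "using modern lcov options"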
00:16:40.588    19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:16:40.588    19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:16:40.588  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:40.588  		--rc genhtml_branch_coverage=1
00:16:40.588  		--rc genhtml_function_coverage=1
00:16:40.588  		--rc genhtml_legend=1
00:16:40.588  		--rc geninfo_all_blocks=1
00:16:40.588  		--rc geninfo_unexecuted_blocks=1
00:16:40.588  		
00:16:40.588  		'
00:16:40.588    19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:16:40.588  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:40.588  		--rc genhtml_branch_coverage=1
00:16:40.588  		--rc genhtml_function_coverage=1
00:16:40.588  		--rc genhtml_legend=1
00:16:40.588  		--rc geninfo_all_blocks=1
00:16:40.588  		--rc geninfo_unexecuted_blocks=1
00:16:40.588  		
00:16:40.588  		'
00:16:40.588    19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:16:40.588  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:40.588  		--rc genhtml_branch_coverage=1
00:16:40.588  		--rc genhtml_function_coverage=1
00:16:40.588  		--rc genhtml_legend=1
00:16:40.588  		--rc geninfo_all_blocks=1
00:16:40.588  		--rc geninfo_unexecuted_blocks=1
00:16:40.588  		
00:16:40.588  		'
00:16:40.588    19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:16:40.588  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:40.588  		--rc genhtml_branch_coverage=1
00:16:40.588  		--rc genhtml_function_coverage=1
00:16:40.588  		--rc genhtml_legend=1
00:16:40.588  		--rc geninfo_all_blocks=1
00:16:40.588  		--rc geninfo_unexecuted_blocks=1
00:16:40.588  		
00:16:40.588  		'
00:16:40.588   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:16:40.588     19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s
00:16:40.588    19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:16:40.588    19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:16:40.588    19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:16:40.588    19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:16:40.588    19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:16:40.588    19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:16:40.588    19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:16:40.588    19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:16:40.588    19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:16:40.588     19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:16:40.588    19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:16:40.588    19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:16:40.588    19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:16:40.589    19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:16:40.589    19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:16:40.589    19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:16:40.589    19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:16:40.589     19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob
00:16:40.589     19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:16:40.589     19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:16:40.589     19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:16:40.589      19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:40.589      19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:40.589      19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:40.589      19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH
00:16:40.589      19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:40.589    19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0
00:16:40.589    19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:16:40.589    19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:16:40.589    19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:16:40.589    19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:16:40.589    19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:16:40.589    19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:16:40.589  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:16:40.589    19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:16:40.589    19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:16:40.589    19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0
00:16:40.589   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit
00:16:40.589   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:16:40.589   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:16:40.589   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs
00:16:40.589   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no
00:16:40.589   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns
00:16:40.589   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:40.589   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:16:40.589    19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:40.589   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:16:40.589   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:16:40.589   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:16:40.589   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:16:40.589   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:16:40.589   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@460 -- # nvmf_veth_init
00:16:40.589   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:16:40.589   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:16:40.589   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:16:40.589   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:16:40.589   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:16:40.589   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:16:40.589   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:16:40.589   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:16:40.589   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:16:40.589   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:16:40.589   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:16:40.589   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:16:40.589   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:16:40.589   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:16:40.589   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:16:40.589   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:16:40.589   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:16:40.589  Cannot find device "nvmf_init_br"
00:16:40.589   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@162 -- # true
00:16:40.589   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:16:40.589  Cannot find device "nvmf_init_br2"
00:16:40.589   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@163 -- # true
00:16:40.589   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:16:40.589  Cannot find device "nvmf_tgt_br"
00:16:40.589   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@164 -- # true
00:16:40.589   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:16:40.589  Cannot find device "nvmf_tgt_br2"
00:16:40.589   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@165 -- # true
00:16:40.589   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:16:40.589  Cannot find device "nvmf_init_br"
00:16:40.589   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@166 -- # true
00:16:40.589   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:16:40.589  Cannot find device "nvmf_init_br2"
00:16:40.589   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@167 -- # true
00:16:40.589   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:16:40.589  Cannot find device "nvmf_tgt_br"
00:16:40.589   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@168 -- # true
00:16:40.589   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:16:40.589  Cannot find device "nvmf_tgt_br2"
00:16:40.589   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@169 -- # true
00:16:40.589   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:16:40.589  Cannot find device "nvmf_br"
00:16:40.589   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@170 -- # true
00:16:40.589   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:16:40.589  Cannot find device "nvmf_init_if"
00:16:40.589   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@171 -- # true
00:16:40.589   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:16:40.589  Cannot find device "nvmf_init_if2"
00:16:40.589   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@172 -- # true
00:16:40.589   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:16:40.589  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:16:40.589   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@173 -- # true
00:16:40.589   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:16:40.589  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:16:40.589   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@174 -- # true
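The "Cannot find device" / "Cannot open network namespace" messages above are expected on a clean host: nvmf_veth_init first tears down any leftovers from a previous run, and each teardown command is paired with `true` so a missing device does not abort the test. The same idempotent-cleanup pattern, as a sketch:

  # ignore failures when the interface or namespace does not exist yet
  ip link delete nvmf_br type bridge 2>/dev/null || true
  ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true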
00:16:40.589   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:16:40.589   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:16:40.589   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:16:40.589   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:16:40.589   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:16:40.589   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:16:40.590   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:16:40.590   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:16:40.590   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:16:40.848   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:16:40.848   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:16:40.848   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:16:40.848   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:16:40.848   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:16:40.848   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:16:40.848   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:16:40.848   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:16:40.848   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:16:40.848   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:16:40.848   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:16:40.848   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:16:40.848   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:16:40.848   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:16:40.848   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:16:40.848   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:16:40.848   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
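At this point the test topology is in place: two initiator-side veth pairs stay in the root namespace (nvmf_init_if at 10.0.0.1/24, nvmf_init_if2 at 10.0.0.2/24), two target-side pairs have their device ends moved into nvmf_tgt_ns_spdk (nvmf_tgt_if at 10.0.0.3/24, nvmf_tgt_if2 at 10.0.0.4/24), and the four peer ends are enslaved to the nvmf_br bridge so everything shares one L2 segment. A trimmed-down reproduction with a single initiator/target pair (names and addresses mirror the log):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br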
00:16:40.848   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:16:40.848   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:16:40.848   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:16:40.848   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:16:40.848   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:16:40.848   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
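The ipts calls above are a thin wrapper around iptables (defined in nvmf/common.sh) that tags every rule with an "SPDK_NVMF:" comment so cleanup can later find and remove exactly the rules this test added. The two INPUT rules open TCP port 4420, the default NVMe/TCP port, on the initiator interfaces, and the FORWARD rule lets traffic cross the bridge. A sketch of such a wrapper and of one possible comment-based cleanup:

  ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
  ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  # cleanup could then drop every tagged rule, e.g.:
  iptables-save | grep -v SPDK_NVMF | iptables-restore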
00:16:40.848   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:16:40.848  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:16:40.848  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms
00:16:40.848  
00:16:40.848  --- 10.0.0.3 ping statistics ---
00:16:40.848  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:40.848  rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms
00:16:40.848   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:16:40.848  PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:16:40.848  64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.061 ms
00:16:40.848  
00:16:40.848  --- 10.0.0.4 ping statistics ---
00:16:40.848  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:40.848  rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms
00:16:40.848   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:16:40.848  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:16:40.848  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms
00:16:40.848  
00:16:40.848  --- 10.0.0.1 ping statistics ---
00:16:40.848  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:40.848  rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms
00:16:40.848   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:16:40.848  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:16:40.848  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms
00:16:40.848  
00:16:40.848  --- 10.0.0.2 ping statistics ---
00:16:40.848  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:40.848  rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms
00:16:40.848   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:16:40.848   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@461 -- # return 0
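Connectivity is verified in both directions before anything NVMe-related starts: the root namespace pings both target addresses and the target namespace pings both initiator addresses, all four with 0% loss. NVMF_APP is then prefixed with "ip netns exec nvmf_tgt_ns_spdk" so the target binary will run inside the namespace. A compact version of the same check (addresses mirror the log):

  for addr in 10.0.0.3 10.0.0.4; do ping -c 1 -W 1 "$addr" >/dev/null || exit 1; done
  for addr in 10.0.0.1 10.0.0.2; do
      ip netns exec nvmf_tgt_ns_spdk ping -c 1 -W 1 "$addr" >/dev/null || exit 1
  done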
00:16:40.849   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:16:40.849   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:16:40.849   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:16:40.849   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:16:40.849   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:16:40.849   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:16:40.849   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:16:40.849   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2
00:16:40.849   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:16:40.849   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable
00:16:40.849   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:16:40.849   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=93271
00:16:40.849   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 93271
00:16:40.849   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:16:40.849   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 93271 ']'
00:16:40.849   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:40.849   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100
00:16:40.849  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:40.849   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:40.849   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable
00:16:40.849   19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:16:40.849  [2024-12-13 19:01:12.652238] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:16:40.849  [2024-12-13 19:01:12.652356] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:16:41.107  [2024-12-13 19:01:12.798156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:41.107  [2024-12-13 19:01:12.836493] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:16:41.107  [2024-12-13 19:01:12.836557] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:16:41.107  [2024-12-13 19:01:12.836583] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:16:41.107  [2024-12-13 19:01:12.836590] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:16:41.107  [2024-12-13 19:01:12.836597] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:16:41.107  [2024-12-13 19:01:12.836985] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:16:42.041   19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:16:42.041   19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0
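nvmfappstart launches nvmf_tgt inside the namespace with "-i 0" (shared-memory id 0), "-e 0xFFFF" (enable all tracepoint groups) and "-m 0x2" (core mask for core 1, matching the "Reactor started on core 1" notice), then waitforlisten polls until the RPC socket /var/tmp/spdk.sock accepts connections, which is why the "Waiting for process..." echo appears before the DPDK/EAL startup notices. A rough sketch of that polling idea (the retry count and sleep interval are assumptions, not the values in autotest_common.sh):

  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  tgt_pid=$!
  for ((i = 0; i < 100; i++)); do
      kill -0 "$tgt_pid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
      [[ -S /var/tmp/spdk.sock ]] && break
      sleep 0.1
  done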
00:16:42.041   19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:16:42.041   19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable
00:16:42.041   19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:16:42.041   19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:16:42.041   19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:16:42.041   19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:42.041   19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:16:42.041  [2024-12-13 19:01:13.664401] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:16:42.041   19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
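With the target up, the test registers the TCP transport over RPC; "-u 8192" sets the I/O unit size to 8192 bytes, and the extra "-o" flag is whatever the common script adds for TCP (its meaning is not shown in this log). The "TCP Transport Init" notice confirms the target accepted it. The same call can be issued by hand against the target's RPC socket, e.g.:

  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192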
00:16:42.041   19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:16:42.041   19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:42.041   19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:16:42.041   19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:42.041   19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:16:42.041   19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:42.041   19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:16:42.041  [2024-12-13 19:01:13.680508] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:16:42.041   19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
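Next the subsystem and its listener are created: nqn.2016-06.io.spdk:cnode1 with "-a" (allow any host), serial number SPDK00000000000001 and at most 10 namespaces, then a TCP listener on 10.0.0.3:4420 inside the target namespace, matching the "NVMe/TCP Target Listening" notice. Expressed as plain rpc.py calls:

  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420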
00:16:42.041   19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:16:42.041   19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:42.041   19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:16:42.041  NULL1
00:16:42.041   19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:42.041   19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine
00:16:42.041   19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:42.041   19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:16:42.041   19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:42.041   19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:16:42.041   19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:42.041   19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:16:42.041   19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
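The namespace backing the test is a null bdev: 1000 MiB with a 512-byte block size (reported as "size: 1GB" below), attached to the subsystem as namespace 1 after bdev_wait_for_examine guarantees bdev registration has finished. As direct RPC calls:

  scripts/rpc.py bdev_null_create NULL1 1000 512   # name, size in MiB, block size in bytes
  scripts/rpc.py bdev_wait_for_examine
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1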
00:16:42.041   19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:16:42.041  [2024-12-13 19:01:13.735618] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:16:42.041  [2024-12-13 19:01:13.735674] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93321 ]
00:16:42.300  Attached to nqn.2016-06.io.spdk:cnode1
00:16:42.300    Namespace ID: 1 size: 1GB
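The fused_ordering helper connects using the transport-ID string passed with -r (trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1) and submits I/O exercising fused command ordering against that namespace; each fused_ordering(N) line that follows marks one completed iteration, 1024 in total (indices 0 through 1023). The same listener is also reachable from the root namespace with standard tooling, e.g. (assuming nvme-cli is installed; not part of this test):

  nvme connect -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1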
00:16:42.300  fused_ordering(0)
00:16:42.300  fused_ordering(1)
00:16:42.300  fused_ordering(2)
00:16:42.300  fused_ordering(3)
00:16:42.300  fused_ordering(4)
00:16:42.300  fused_ordering(5)
00:16:42.300  fused_ordering(6)
00:16:42.300  fused_ordering(7)
00:16:42.300  fused_ordering(8)
00:16:42.300  fused_ordering(9)
00:16:42.300  fused_ordering(10)
00:16:42.300  fused_ordering(11)
00:16:42.300  fused_ordering(12)
00:16:42.300  fused_ordering(13)
00:16:42.300  fused_ordering(14)
00:16:42.300  fused_ordering(15)
00:16:42.300  fused_ordering(16)
00:16:42.300  fused_ordering(17)
00:16:42.300  fused_ordering(18)
00:16:42.300  fused_ordering(19)
00:16:42.300  fused_ordering(20)
00:16:42.300  fused_ordering(21)
00:16:42.300  fused_ordering(22)
00:16:42.300  fused_ordering(23)
00:16:42.300  fused_ordering(24)
00:16:42.300  fused_ordering(25)
00:16:42.300  fused_ordering(26)
00:16:42.300  fused_ordering(27)
00:16:42.300  fused_ordering(28)
00:16:42.300  fused_ordering(29)
00:16:42.300  fused_ordering(30)
00:16:42.300  fused_ordering(31)
00:16:42.300  fused_ordering(32)
00:16:42.300  fused_ordering(33)
00:16:42.300  fused_ordering(34)
00:16:42.300  fused_ordering(35)
00:16:42.300  fused_ordering(36)
00:16:42.300  fused_ordering(37)
00:16:42.300  fused_ordering(38)
00:16:42.300  fused_ordering(39)
00:16:42.300  fused_ordering(40)
00:16:42.300  fused_ordering(41)
00:16:42.300  fused_ordering(42)
00:16:42.300  fused_ordering(43)
00:16:42.300  fused_ordering(44)
00:16:42.300  fused_ordering(45)
00:16:42.300  fused_ordering(46)
00:16:42.300  fused_ordering(47)
00:16:42.300  fused_ordering(48)
00:16:42.300  fused_ordering(49)
00:16:42.300  fused_ordering(50)
00:16:42.300  fused_ordering(51)
00:16:42.300  fused_ordering(52)
00:16:42.300  fused_ordering(53)
00:16:42.300  fused_ordering(54)
00:16:42.300  fused_ordering(55)
00:16:42.300  fused_ordering(56)
00:16:42.300  fused_ordering(57)
00:16:42.300  fused_ordering(58)
00:16:42.300  fused_ordering(59)
00:16:42.300  fused_ordering(60)
00:16:42.300  fused_ordering(61)
00:16:42.300  fused_ordering(62)
00:16:42.300  fused_ordering(63)
00:16:42.300  fused_ordering(64)
00:16:42.300  fused_ordering(65)
00:16:42.300  fused_ordering(66)
00:16:42.300  fused_ordering(67)
00:16:42.300  fused_ordering(68)
00:16:42.300  fused_ordering(69)
00:16:42.300  fused_ordering(70)
00:16:42.300  fused_ordering(71)
00:16:42.300  fused_ordering(72)
00:16:42.300  fused_ordering(73)
00:16:42.300  fused_ordering(74)
00:16:42.300  fused_ordering(75)
00:16:42.300  fused_ordering(76)
00:16:42.300  fused_ordering(77)
00:16:42.300  fused_ordering(78)
00:16:42.300  fused_ordering(79)
00:16:42.300  fused_ordering(80)
00:16:42.300  fused_ordering(81)
00:16:42.300  fused_ordering(82)
00:16:42.300  fused_ordering(83)
00:16:42.300  fused_ordering(84)
00:16:42.300  fused_ordering(85)
00:16:42.300  fused_ordering(86)
00:16:42.300  fused_ordering(87)
00:16:42.300  fused_ordering(88)
00:16:42.300  fused_ordering(89)
00:16:42.300  fused_ordering(90)
00:16:42.300  fused_ordering(91)
00:16:42.300  fused_ordering(92)
00:16:42.300  fused_ordering(93)
00:16:42.300  fused_ordering(94)
00:16:42.300  fused_ordering(95)
00:16:42.300  fused_ordering(96)
00:16:42.300  fused_ordering(97)
00:16:42.300  fused_ordering(98)
00:16:42.300  fused_ordering(99)
00:16:42.300  fused_ordering(100)
00:16:42.300  fused_ordering(101)
00:16:42.300  fused_ordering(102)
00:16:42.300  fused_ordering(103)
00:16:42.300  fused_ordering(104)
00:16:42.300  fused_ordering(105)
00:16:42.300  fused_ordering(106)
00:16:42.300  fused_ordering(107)
00:16:42.300  fused_ordering(108)
00:16:42.300  fused_ordering(109)
00:16:42.300  fused_ordering(110)
00:16:42.300  fused_ordering(111)
00:16:42.300  fused_ordering(112)
00:16:42.300  fused_ordering(113)
00:16:42.300  fused_ordering(114)
00:16:42.300  fused_ordering(115)
00:16:42.300  fused_ordering(116)
00:16:42.300  fused_ordering(117)
00:16:42.300  fused_ordering(118)
00:16:42.300  fused_ordering(119)
00:16:42.300  fused_ordering(120)
00:16:42.300  fused_ordering(121)
00:16:42.300  fused_ordering(122)
00:16:42.301  fused_ordering(123)
00:16:42.301  fused_ordering(124)
00:16:42.301  fused_ordering(125)
00:16:42.301  fused_ordering(126)
00:16:42.301  fused_ordering(127)
00:16:42.301  fused_ordering(128)
00:16:42.301  fused_ordering(129)
00:16:42.301  fused_ordering(130)
00:16:42.301  fused_ordering(131)
00:16:42.301  fused_ordering(132)
00:16:42.301  fused_ordering(133)
00:16:42.301  fused_ordering(134)
00:16:42.301  fused_ordering(135)
00:16:42.301  fused_ordering(136)
00:16:42.301  fused_ordering(137)
00:16:42.301  fused_ordering(138)
00:16:42.301  fused_ordering(139)
00:16:42.301  fused_ordering(140)
00:16:42.301  fused_ordering(141)
00:16:42.301  fused_ordering(142)
00:16:42.301  fused_ordering(143)
00:16:42.301  fused_ordering(144)
00:16:42.301  fused_ordering(145)
00:16:42.301  fused_ordering(146)
00:16:42.301  fused_ordering(147)
00:16:42.301  fused_ordering(148)
00:16:42.301  fused_ordering(149)
00:16:42.301  fused_ordering(150)
00:16:42.301  fused_ordering(151)
00:16:42.301  fused_ordering(152)
00:16:42.301  fused_ordering(153)
00:16:42.301  fused_ordering(154)
00:16:42.301  fused_ordering(155)
00:16:42.301  fused_ordering(156)
00:16:42.301  fused_ordering(157)
00:16:42.301  fused_ordering(158)
00:16:42.301  fused_ordering(159)
00:16:42.301  fused_ordering(160)
00:16:42.301  fused_ordering(161)
00:16:42.301  fused_ordering(162)
00:16:42.301  fused_ordering(163)
00:16:42.301  fused_ordering(164)
00:16:42.301  fused_ordering(165)
00:16:42.301  fused_ordering(166)
00:16:42.301  fused_ordering(167)
00:16:42.301  fused_ordering(168)
00:16:42.301  fused_ordering(169)
00:16:42.301  fused_ordering(170)
00:16:42.301  fused_ordering(171)
00:16:42.301  fused_ordering(172)
00:16:42.301  fused_ordering(173)
00:16:42.301  fused_ordering(174)
00:16:42.301  fused_ordering(175)
00:16:42.301  fused_ordering(176)
00:16:42.301  fused_ordering(177)
00:16:42.301  fused_ordering(178)
00:16:42.301  fused_ordering(179)
00:16:42.301  fused_ordering(180)
00:16:42.301  fused_ordering(181)
00:16:42.301  fused_ordering(182)
00:16:42.301  fused_ordering(183)
00:16:42.301  fused_ordering(184)
00:16:42.301  fused_ordering(185)
00:16:42.301  fused_ordering(186)
00:16:42.301  fused_ordering(187)
00:16:42.301  fused_ordering(188)
00:16:42.301  fused_ordering(189)
00:16:42.301  fused_ordering(190)
00:16:42.301  fused_ordering(191)
00:16:42.301  fused_ordering(192)
00:16:42.301  fused_ordering(193)
00:16:42.301  fused_ordering(194)
00:16:42.301  fused_ordering(195)
00:16:42.301  fused_ordering(196)
00:16:42.301  fused_ordering(197)
00:16:42.301  fused_ordering(198)
00:16:42.301  fused_ordering(199)
00:16:42.301  fused_ordering(200)
00:16:42.301  fused_ordering(201)
00:16:42.301  fused_ordering(202)
00:16:42.301  fused_ordering(203)
00:16:42.301  fused_ordering(204)
00:16:42.301  fused_ordering(205)
00:16:42.559  fused_ordering(206)
00:16:42.559  fused_ordering(207)
00:16:42.559  fused_ordering(208)
00:16:42.559  fused_ordering(209)
00:16:42.559  fused_ordering(210)
00:16:42.559  fused_ordering(211)
00:16:42.559  fused_ordering(212)
00:16:42.559  fused_ordering(213)
00:16:42.559  fused_ordering(214)
00:16:42.559  fused_ordering(215)
00:16:42.559  fused_ordering(216)
00:16:42.559  fused_ordering(217)
00:16:42.559  fused_ordering(218)
00:16:42.559  fused_ordering(219)
00:16:42.559  fused_ordering(220)
00:16:42.559  fused_ordering(221)
00:16:42.559  fused_ordering(222)
00:16:42.559  fused_ordering(223)
00:16:42.559  fused_ordering(224)
00:16:42.559  fused_ordering(225)
00:16:42.559  fused_ordering(226)
00:16:42.559  fused_ordering(227)
00:16:42.559  fused_ordering(228)
00:16:42.559  fused_ordering(229)
00:16:42.559  fused_ordering(230)
00:16:42.559  fused_ordering(231)
00:16:42.559  fused_ordering(232)
00:16:42.559  fused_ordering(233)
00:16:42.559  fused_ordering(234)
00:16:42.559  fused_ordering(235)
00:16:42.559  fused_ordering(236)
00:16:42.559  fused_ordering(237)
00:16:42.559  fused_ordering(238)
00:16:42.559  fused_ordering(239)
00:16:42.559  fused_ordering(240)
00:16:42.559  fused_ordering(241)
00:16:42.559  fused_ordering(242)
00:16:42.559  fused_ordering(243)
00:16:42.559  fused_ordering(244)
00:16:42.559  fused_ordering(245)
00:16:42.559  fused_ordering(246)
00:16:42.559  fused_ordering(247)
00:16:42.559  fused_ordering(248)
00:16:42.559  fused_ordering(249)
00:16:42.559  fused_ordering(250)
00:16:42.559  fused_ordering(251)
00:16:42.559  fused_ordering(252)
00:16:42.559  fused_ordering(253)
00:16:42.559  fused_ordering(254)
00:16:42.559  fused_ordering(255)
00:16:42.559  fused_ordering(256)
00:16:42.559  fused_ordering(257)
00:16:42.559  fused_ordering(258)
00:16:42.559  fused_ordering(259)
00:16:42.559  fused_ordering(260)
00:16:42.559  fused_ordering(261)
00:16:42.559  fused_ordering(262)
00:16:42.559  fused_ordering(263)
00:16:42.559  fused_ordering(264)
00:16:42.559  fused_ordering(265)
00:16:42.559  fused_ordering(266)
00:16:42.559  fused_ordering(267)
00:16:42.559  fused_ordering(268)
00:16:42.559  fused_ordering(269)
00:16:42.559  fused_ordering(270)
00:16:42.559  fused_ordering(271)
00:16:42.559  fused_ordering(272)
00:16:42.559  fused_ordering(273)
00:16:42.559  fused_ordering(274)
00:16:42.559  fused_ordering(275)
00:16:42.559  fused_ordering(276)
00:16:42.559  fused_ordering(277)
00:16:42.559  fused_ordering(278)
00:16:42.559  fused_ordering(279)
00:16:42.559  fused_ordering(280)
00:16:42.559  fused_ordering(281)
00:16:42.559  fused_ordering(282)
00:16:42.559  fused_ordering(283)
00:16:42.559  fused_ordering(284)
00:16:42.559  fused_ordering(285)
00:16:42.559  fused_ordering(286)
00:16:42.559  fused_ordering(287)
00:16:42.559  fused_ordering(288)
00:16:42.559  fused_ordering(289)
00:16:42.559  fused_ordering(290)
00:16:42.559  fused_ordering(291)
00:16:42.559  fused_ordering(292)
00:16:42.559  fused_ordering(293)
00:16:42.559  fused_ordering(294)
00:16:42.559  fused_ordering(295)
00:16:42.559  fused_ordering(296)
00:16:42.559  fused_ordering(297)
00:16:42.559  fused_ordering(298)
00:16:42.560  fused_ordering(299)
00:16:42.560  fused_ordering(300)
00:16:42.560  fused_ordering(301)
00:16:42.560  fused_ordering(302)
00:16:42.560  fused_ordering(303)
00:16:42.560  fused_ordering(304)
00:16:42.560  fused_ordering(305)
00:16:42.560  fused_ordering(306)
00:16:42.560  fused_ordering(307)
00:16:42.560  fused_ordering(308)
00:16:42.560  fused_ordering(309)
00:16:42.560  fused_ordering(310)
00:16:42.560  fused_ordering(311)
00:16:42.560  fused_ordering(312)
00:16:42.560  fused_ordering(313)
00:16:42.560  fused_ordering(314)
00:16:42.560  fused_ordering(315)
00:16:42.560  fused_ordering(316)
00:16:42.560  fused_ordering(317)
00:16:42.560  fused_ordering(318)
00:16:42.560  fused_ordering(319)
00:16:42.560  fused_ordering(320)
00:16:42.560  fused_ordering(321)
00:16:42.560  fused_ordering(322)
00:16:42.560  fused_ordering(323)
00:16:42.560  fused_ordering(324)
00:16:42.560  fused_ordering(325)
00:16:42.560  fused_ordering(326)
00:16:42.560  fused_ordering(327)
00:16:42.560  fused_ordering(328)
00:16:42.560  fused_ordering(329)
00:16:42.560  fused_ordering(330)
00:16:42.560  fused_ordering(331)
00:16:42.560  fused_ordering(332)
00:16:42.560  fused_ordering(333)
00:16:42.560  fused_ordering(334)
00:16:42.560  fused_ordering(335)
00:16:42.560  fused_ordering(336)
00:16:42.560  fused_ordering(337)
00:16:42.560  fused_ordering(338)
00:16:42.560  fused_ordering(339)
00:16:42.560  fused_ordering(340)
00:16:42.560  fused_ordering(341)
00:16:42.560  fused_ordering(342)
00:16:42.560  fused_ordering(343)
00:16:42.560  fused_ordering(344)
00:16:42.560  fused_ordering(345)
00:16:42.560  fused_ordering(346)
00:16:42.560  fused_ordering(347)
00:16:42.560  fused_ordering(348)
00:16:42.560  fused_ordering(349)
00:16:42.560  fused_ordering(350)
00:16:42.560  fused_ordering(351)
00:16:42.560  fused_ordering(352)
00:16:42.560  fused_ordering(353)
00:16:42.560  fused_ordering(354)
00:16:42.560  fused_ordering(355)
00:16:42.560  fused_ordering(356)
00:16:42.560  fused_ordering(357)
00:16:42.560  fused_ordering(358)
00:16:42.560  fused_ordering(359)
00:16:42.560  fused_ordering(360)
00:16:42.560  fused_ordering(361)
00:16:42.560  fused_ordering(362)
00:16:42.560  fused_ordering(363)
00:16:42.560  fused_ordering(364)
00:16:42.560  fused_ordering(365)
00:16:42.560  fused_ordering(366)
00:16:42.560  fused_ordering(367)
00:16:42.560  fused_ordering(368)
00:16:42.560  fused_ordering(369)
00:16:42.560  fused_ordering(370)
00:16:42.560  fused_ordering(371)
00:16:42.560  fused_ordering(372)
00:16:42.560  fused_ordering(373)
00:16:42.560  fused_ordering(374)
00:16:42.560  fused_ordering(375)
00:16:42.560  fused_ordering(376)
00:16:42.560  fused_ordering(377)
00:16:42.560  fused_ordering(378)
00:16:42.560  fused_ordering(379)
00:16:42.560  fused_ordering(380)
00:16:42.560  fused_ordering(381)
00:16:42.560  fused_ordering(382)
00:16:42.560  fused_ordering(383)
00:16:42.560  fused_ordering(384)
00:16:42.560  fused_ordering(385)
00:16:42.560  fused_ordering(386)
00:16:42.560  fused_ordering(387)
00:16:42.560  fused_ordering(388)
00:16:42.560  fused_ordering(389)
00:16:42.560  fused_ordering(390)
00:16:42.560  fused_ordering(391)
00:16:42.560  fused_ordering(392)
00:16:42.560  fused_ordering(393)
00:16:42.560  fused_ordering(394)
00:16:42.560  fused_ordering(395)
00:16:42.560  fused_ordering(396)
00:16:42.560  fused_ordering(397)
00:16:42.560  fused_ordering(398)
00:16:42.560  fused_ordering(399)
00:16:42.560  fused_ordering(400)
00:16:42.560  fused_ordering(401)
00:16:42.560  fused_ordering(402)
00:16:42.560  fused_ordering(403)
00:16:42.560  fused_ordering(404)
00:16:42.560  fused_ordering(405)
00:16:42.560  fused_ordering(406)
00:16:42.560  fused_ordering(407)
00:16:42.560  fused_ordering(408)
00:16:42.560  fused_ordering(409)
00:16:42.560  fused_ordering(410)
00:16:43.126  fused_ordering(411)
00:16:43.126  fused_ordering(412)
00:16:43.126  fused_ordering(413)
00:16:43.126  fused_ordering(414)
00:16:43.126  fused_ordering(415)
00:16:43.126  fused_ordering(416)
00:16:43.126  fused_ordering(417)
00:16:43.126  fused_ordering(418)
00:16:43.126  fused_ordering(419)
00:16:43.126  fused_ordering(420)
00:16:43.126  fused_ordering(421)
00:16:43.126  fused_ordering(422)
00:16:43.126  fused_ordering(423)
00:16:43.126  fused_ordering(424)
00:16:43.126  fused_ordering(425)
00:16:43.126  fused_ordering(426)
00:16:43.126  fused_ordering(427)
00:16:43.126  fused_ordering(428)
00:16:43.126  fused_ordering(429)
00:16:43.126  fused_ordering(430)
00:16:43.126  fused_ordering(431)
00:16:43.126  fused_ordering(432)
00:16:43.126  fused_ordering(433)
00:16:43.126  fused_ordering(434)
00:16:43.126  fused_ordering(435)
00:16:43.126  fused_ordering(436)
00:16:43.126  fused_ordering(437)
00:16:43.126  fused_ordering(438)
00:16:43.126  fused_ordering(439)
00:16:43.126  fused_ordering(440)
00:16:43.126  fused_ordering(441)
00:16:43.126  fused_ordering(442)
00:16:43.126  fused_ordering(443)
00:16:43.126  fused_ordering(444)
00:16:43.126  fused_ordering(445)
00:16:43.126  fused_ordering(446)
00:16:43.126  fused_ordering(447)
00:16:43.126  fused_ordering(448)
00:16:43.126  fused_ordering(449)
00:16:43.126  fused_ordering(450)
00:16:43.126  fused_ordering(451)
00:16:43.126  fused_ordering(452)
00:16:43.126  fused_ordering(453)
00:16:43.126  fused_ordering(454)
00:16:43.126  fused_ordering(455)
00:16:43.126  fused_ordering(456)
00:16:43.126  fused_ordering(457)
00:16:43.126  fused_ordering(458)
00:16:43.126  fused_ordering(459)
00:16:43.126  fused_ordering(460)
00:16:43.126  fused_ordering(461)
00:16:43.126  fused_ordering(462)
00:16:43.126  fused_ordering(463)
00:16:43.126  fused_ordering(464)
00:16:43.126  fused_ordering(465)
00:16:43.126  fused_ordering(466)
00:16:43.126  fused_ordering(467)
00:16:43.126  fused_ordering(468)
00:16:43.126  fused_ordering(469)
00:16:43.126  fused_ordering(470)
00:16:43.126  fused_ordering(471)
00:16:43.126  fused_ordering(472)
00:16:43.126  fused_ordering(473)
00:16:43.126  fused_ordering(474)
00:16:43.126  fused_ordering(475)
00:16:43.126  fused_ordering(476)
00:16:43.126  fused_ordering(477)
00:16:43.126  fused_ordering(478)
00:16:43.126  fused_ordering(479)
00:16:43.126  fused_ordering(480)
00:16:43.126  fused_ordering(481)
00:16:43.126  fused_ordering(482)
00:16:43.126  fused_ordering(483)
00:16:43.126  fused_ordering(484)
00:16:43.126  fused_ordering(485)
00:16:43.126  fused_ordering(486)
00:16:43.126  fused_ordering(487)
00:16:43.126  fused_ordering(488)
00:16:43.126  fused_ordering(489)
00:16:43.126  fused_ordering(490)
00:16:43.126  fused_ordering(491)
00:16:43.126  fused_ordering(492)
00:16:43.126  fused_ordering(493)
00:16:43.126  fused_ordering(494)
00:16:43.126  fused_ordering(495)
00:16:43.126  fused_ordering(496)
00:16:43.126  fused_ordering(497)
00:16:43.126  fused_ordering(498)
00:16:43.126  fused_ordering(499)
00:16:43.126  fused_ordering(500)
00:16:43.126  fused_ordering(501)
00:16:43.126  fused_ordering(502)
00:16:43.126  fused_ordering(503)
00:16:43.126  fused_ordering(504)
00:16:43.126  fused_ordering(505)
00:16:43.126  fused_ordering(506)
00:16:43.126  fused_ordering(507)
00:16:43.126  fused_ordering(508)
00:16:43.126  fused_ordering(509)
00:16:43.126  fused_ordering(510)
00:16:43.126  fused_ordering(511)
00:16:43.126  fused_ordering(512)
00:16:43.126  fused_ordering(513)
00:16:43.127  fused_ordering(514)
00:16:43.127  fused_ordering(515)
00:16:43.127  fused_ordering(516)
00:16:43.127  fused_ordering(517)
00:16:43.127  fused_ordering(518)
00:16:43.127  fused_ordering(519)
00:16:43.127  fused_ordering(520)
00:16:43.127  fused_ordering(521)
00:16:43.127  fused_ordering(522)
00:16:43.127  fused_ordering(523)
00:16:43.127  fused_ordering(524)
00:16:43.127  fused_ordering(525)
00:16:43.127  fused_ordering(526)
00:16:43.127  fused_ordering(527)
00:16:43.127  fused_ordering(528)
00:16:43.127  fused_ordering(529)
00:16:43.127  fused_ordering(530)
00:16:43.127  fused_ordering(531)
00:16:43.127  fused_ordering(532)
00:16:43.127  fused_ordering(533)
00:16:43.127  fused_ordering(534)
00:16:43.127  fused_ordering(535)
00:16:43.127  fused_ordering(536)
00:16:43.127  fused_ordering(537)
00:16:43.127  fused_ordering(538)
00:16:43.127  fused_ordering(539)
00:16:43.127  fused_ordering(540)
00:16:43.127  fused_ordering(541)
00:16:43.127  fused_ordering(542)
00:16:43.127  fused_ordering(543)
00:16:43.127  fused_ordering(544)
00:16:43.127  fused_ordering(545)
00:16:43.127  fused_ordering(546)
00:16:43.127  fused_ordering(547)
00:16:43.127  fused_ordering(548)
00:16:43.127  fused_ordering(549)
00:16:43.127  fused_ordering(550)
00:16:43.127  fused_ordering(551)
00:16:43.127  fused_ordering(552)
00:16:43.127  fused_ordering(553)
00:16:43.127  fused_ordering(554)
00:16:43.127  fused_ordering(555)
00:16:43.127  fused_ordering(556)
00:16:43.127  fused_ordering(557)
00:16:43.127  fused_ordering(558)
00:16:43.127  fused_ordering(559)
00:16:43.127  fused_ordering(560)
00:16:43.127  fused_ordering(561)
00:16:43.127  fused_ordering(562)
00:16:43.127  fused_ordering(563)
00:16:43.127  fused_ordering(564)
00:16:43.127  fused_ordering(565)
00:16:43.127  fused_ordering(566)
00:16:43.127  fused_ordering(567)
00:16:43.127  fused_ordering(568)
00:16:43.127  fused_ordering(569)
00:16:43.127  fused_ordering(570)
00:16:43.127  fused_ordering(571)
00:16:43.127  fused_ordering(572)
00:16:43.127  fused_ordering(573)
00:16:43.127  fused_ordering(574)
00:16:43.127  fused_ordering(575)
00:16:43.127  fused_ordering(576)
00:16:43.127  fused_ordering(577)
00:16:43.127  fused_ordering(578)
00:16:43.127  fused_ordering(579)
00:16:43.127  fused_ordering(580)
00:16:43.127  fused_ordering(581)
00:16:43.127  fused_ordering(582)
00:16:43.127  fused_ordering(583)
00:16:43.127  fused_ordering(584)
00:16:43.127  fused_ordering(585)
00:16:43.127  fused_ordering(586)
00:16:43.127  fused_ordering(587)
00:16:43.127  fused_ordering(588)
00:16:43.127  fused_ordering(589)
00:16:43.127  fused_ordering(590)
00:16:43.127  fused_ordering(591)
00:16:43.127  fused_ordering(592)
00:16:43.127  fused_ordering(593)
00:16:43.127  fused_ordering(594)
00:16:43.127  fused_ordering(595)
00:16:43.127  fused_ordering(596)
00:16:43.127  fused_ordering(597)
00:16:43.127  fused_ordering(598)
00:16:43.127  fused_ordering(599)
00:16:43.127  fused_ordering(600)
00:16:43.127  fused_ordering(601)
00:16:43.127  fused_ordering(602)
00:16:43.127  fused_ordering(603)
00:16:43.127  fused_ordering(604)
00:16:43.127  fused_ordering(605)
00:16:43.127  fused_ordering(606)
00:16:43.127  fused_ordering(607)
00:16:43.127  fused_ordering(608)
00:16:43.127  fused_ordering(609)
00:16:43.127  fused_ordering(610)
00:16:43.127  fused_ordering(611)
00:16:43.127  fused_ordering(612)
00:16:43.127  fused_ordering(613)
00:16:43.127  fused_ordering(614)
00:16:43.127  fused_ordering(615)
00:16:43.385  fused_ordering(616)
00:16:43.385  fused_ordering(617)
00:16:43.385  fused_ordering(618)
00:16:43.385  fused_ordering(619)
00:16:43.385  fused_ordering(620)
00:16:43.385  fused_ordering(621)
00:16:43.385  fused_ordering(622)
00:16:43.385  fused_ordering(623)
00:16:43.385  fused_ordering(624)
00:16:43.385  fused_ordering(625)
00:16:43.385  fused_ordering(626)
00:16:43.385  fused_ordering(627)
00:16:43.385  fused_ordering(628)
00:16:43.385  fused_ordering(629)
00:16:43.385  fused_ordering(630)
00:16:43.385  fused_ordering(631)
00:16:43.385  fused_ordering(632)
00:16:43.385  fused_ordering(633)
00:16:43.385  fused_ordering(634)
00:16:43.385  fused_ordering(635)
00:16:43.385  fused_ordering(636)
00:16:43.385  fused_ordering(637)
00:16:43.385  fused_ordering(638)
00:16:43.385  fused_ordering(639)
00:16:43.385  fused_ordering(640)
00:16:43.385  fused_ordering(641)
00:16:43.385  fused_ordering(642)
00:16:43.385  fused_ordering(643)
00:16:43.385  fused_ordering(644)
00:16:43.385  fused_ordering(645)
00:16:43.385  fused_ordering(646)
00:16:43.385  fused_ordering(647)
00:16:43.385  fused_ordering(648)
00:16:43.385  fused_ordering(649)
00:16:43.385  fused_ordering(650)
00:16:43.385  fused_ordering(651)
00:16:43.385  fused_ordering(652)
00:16:43.385  fused_ordering(653)
00:16:43.385  fused_ordering(654)
00:16:43.385  fused_ordering(655)
00:16:43.385  fused_ordering(656)
00:16:43.385  fused_ordering(657)
00:16:43.385  fused_ordering(658)
00:16:43.385  fused_ordering(659)
00:16:43.385  fused_ordering(660)
00:16:43.385  fused_ordering(661)
00:16:43.385  fused_ordering(662)
00:16:43.385  fused_ordering(663)
00:16:43.385  fused_ordering(664)
00:16:43.385  fused_ordering(665)
00:16:43.385  fused_ordering(666)
00:16:43.385  fused_ordering(667)
00:16:43.385  fused_ordering(668)
00:16:43.385  fused_ordering(669)
00:16:43.385  fused_ordering(670)
00:16:43.385  fused_ordering(671)
00:16:43.385  fused_ordering(672)
00:16:43.385  fused_ordering(673)
00:16:43.385  fused_ordering(674)
00:16:43.385  fused_ordering(675)
00:16:43.385  fused_ordering(676)
00:16:43.385  fused_ordering(677)
00:16:43.385  fused_ordering(678)
00:16:43.385  fused_ordering(679)
00:16:43.385  fused_ordering(680)
00:16:43.385  fused_ordering(681)
00:16:43.385  fused_ordering(682)
00:16:43.385  fused_ordering(683)
00:16:43.385  fused_ordering(684)
00:16:43.385  fused_ordering(685)
00:16:43.385  fused_ordering(686)
00:16:43.385  fused_ordering(687)
00:16:43.385  fused_ordering(688)
00:16:43.385  fused_ordering(689)
00:16:43.385  fused_ordering(690)
00:16:43.385  fused_ordering(691)
00:16:43.385  fused_ordering(692)
00:16:43.385  fused_ordering(693)
00:16:43.385  fused_ordering(694)
00:16:43.385  fused_ordering(695)
00:16:43.385  fused_ordering(696)
00:16:43.385  fused_ordering(697)
00:16:43.385  fused_ordering(698)
00:16:43.385  fused_ordering(699)
00:16:43.385  fused_ordering(700)
00:16:43.385  fused_ordering(701)
00:16:43.385  fused_ordering(702)
00:16:43.385  fused_ordering(703)
00:16:43.385  fused_ordering(704)
00:16:43.385  fused_ordering(705)
00:16:43.385  fused_ordering(706)
00:16:43.385  fused_ordering(707)
00:16:43.385  fused_ordering(708)
00:16:43.385  fused_ordering(709)
00:16:43.385  fused_ordering(710)
00:16:43.385  fused_ordering(711)
00:16:43.385  fused_ordering(712)
00:16:43.385  fused_ordering(713)
00:16:43.385  fused_ordering(714)
00:16:43.385  fused_ordering(715)
00:16:43.385  fused_ordering(716)
00:16:43.385  fused_ordering(717)
00:16:43.385  fused_ordering(718)
00:16:43.385  fused_ordering(719)
00:16:43.385  fused_ordering(720)
00:16:43.385  fused_ordering(721)
00:16:43.385  fused_ordering(722)
00:16:43.385  fused_ordering(723)
00:16:43.385  fused_ordering(724)
00:16:43.385  fused_ordering(725)
00:16:43.385  fused_ordering(726)
00:16:43.385  fused_ordering(727)
00:16:43.385  fused_ordering(728)
00:16:43.385  fused_ordering(729)
00:16:43.385  fused_ordering(730)
00:16:43.385  fused_ordering(731)
00:16:43.385  fused_ordering(732)
00:16:43.385  fused_ordering(733)
00:16:43.385  fused_ordering(734)
00:16:43.385  fused_ordering(735)
00:16:43.385  fused_ordering(736)
00:16:43.385  fused_ordering(737)
00:16:43.386  fused_ordering(738)
00:16:43.386  fused_ordering(739)
00:16:43.386  fused_ordering(740)
00:16:43.386  fused_ordering(741)
00:16:43.386  fused_ordering(742)
00:16:43.386  fused_ordering(743)
00:16:43.386  fused_ordering(744)
00:16:43.386  fused_ordering(745)
00:16:43.386  fused_ordering(746)
00:16:43.386  fused_ordering(747)
00:16:43.386  fused_ordering(748)
00:16:43.386  fused_ordering(749)
00:16:43.386  fused_ordering(750)
00:16:43.386  fused_ordering(751)
00:16:43.386  fused_ordering(752)
00:16:43.386  fused_ordering(753)
00:16:43.386  fused_ordering(754)
00:16:43.386  fused_ordering(755)
00:16:43.386  fused_ordering(756)
00:16:43.386  fused_ordering(757)
00:16:43.386  fused_ordering(758)
00:16:43.386  fused_ordering(759)
00:16:43.386  fused_ordering(760)
00:16:43.386  fused_ordering(761)
00:16:43.386  fused_ordering(762)
00:16:43.386  fused_ordering(763)
00:16:43.386  fused_ordering(764)
00:16:43.386  fused_ordering(765)
00:16:43.386  fused_ordering(766)
00:16:43.386  fused_ordering(767)
00:16:43.386  fused_ordering(768)
00:16:43.386  fused_ordering(769)
00:16:43.386  fused_ordering(770)
00:16:43.386  fused_ordering(771)
00:16:43.386  fused_ordering(772)
00:16:43.386  fused_ordering(773)
00:16:43.386  fused_ordering(774)
00:16:43.386  fused_ordering(775)
00:16:43.386  fused_ordering(776)
00:16:43.386  fused_ordering(777)
00:16:43.386  fused_ordering(778)
00:16:43.386  fused_ordering(779)
00:16:43.386  fused_ordering(780)
00:16:43.386  fused_ordering(781)
00:16:43.386  fused_ordering(782)
00:16:43.386  fused_ordering(783)
00:16:43.386  fused_ordering(784)
00:16:43.386  fused_ordering(785)
00:16:43.386  fused_ordering(786)
00:16:43.386  fused_ordering(787)
00:16:43.386  fused_ordering(788)
00:16:43.386  fused_ordering(789)
00:16:43.386  fused_ordering(790)
00:16:43.386  fused_ordering(791)
00:16:43.386  fused_ordering(792)
00:16:43.386  fused_ordering(793)
00:16:43.386  fused_ordering(794)
00:16:43.386  fused_ordering(795)
00:16:43.386  fused_ordering(796)
00:16:43.386  fused_ordering(797)
00:16:43.386  fused_ordering(798)
00:16:43.386  fused_ordering(799)
00:16:43.386  fused_ordering(800)
00:16:43.386  fused_ordering(801)
00:16:43.386  fused_ordering(802)
00:16:43.386  fused_ordering(803)
00:16:43.386  fused_ordering(804)
00:16:43.386  fused_ordering(805)
00:16:43.386  fused_ordering(806)
00:16:43.386  fused_ordering(807)
00:16:43.386  fused_ordering(808)
00:16:43.386  fused_ordering(809)
00:16:43.386  fused_ordering(810)
00:16:43.386  fused_ordering(811)
00:16:43.386  fused_ordering(812)
00:16:43.386  fused_ordering(813)
00:16:43.386  fused_ordering(814)
00:16:43.386  fused_ordering(815)
00:16:43.386  fused_ordering(816)
00:16:43.386  fused_ordering(817)
00:16:43.386  fused_ordering(818)
00:16:43.386  fused_ordering(819)
00:16:43.386  fused_ordering(820)
00:16:43.951  fused_ordering(821)
00:16:43.952  fused_ordering(822)
00:16:43.952  fused_ordering(823)
00:16:43.952  fused_ordering(824)
00:16:43.952  fused_ordering(825)
00:16:43.952  fused_ordering(826)
00:16:43.952  fused_ordering(827)
00:16:43.952  fused_ordering(828)
00:16:43.952  fused_ordering(829)
00:16:43.952  fused_ordering(830)
00:16:43.952  fused_ordering(831)
00:16:43.952  fused_ordering(832)
00:16:43.952  fused_ordering(833)
00:16:43.952  fused_ordering(834)
00:16:43.952  fused_ordering(835)
00:16:43.952  fused_ordering(836)
00:16:43.952  fused_ordering(837)
00:16:43.952  fused_ordering(838)
00:16:43.952  fused_ordering(839)
00:16:43.952  fused_ordering(840)
00:16:43.952  fused_ordering(841)
00:16:43.952  fused_ordering(842)
00:16:43.952  fused_ordering(843)
00:16:43.952  fused_ordering(844)
00:16:43.952  fused_ordering(845)
00:16:43.952  fused_ordering(846)
00:16:43.952  fused_ordering(847)
00:16:43.952  fused_ordering(848)
00:16:43.952  fused_ordering(849)
00:16:43.952  fused_ordering(850)
00:16:43.952  fused_ordering(851)
00:16:43.952  fused_ordering(852)
00:16:43.952  fused_ordering(853)
00:16:43.952  fused_ordering(854)
00:16:43.952  fused_ordering(855)
00:16:43.952  fused_ordering(856)
00:16:43.952  fused_ordering(857)
00:16:43.952  fused_ordering(858)
00:16:43.952  fused_ordering(859)
00:16:43.952  fused_ordering(860)
00:16:43.952  fused_ordering(861)
00:16:43.952  fused_ordering(862)
00:16:43.952  fused_ordering(863)
00:16:43.952  fused_ordering(864)
00:16:43.952  fused_ordering(865)
00:16:43.952  fused_ordering(866)
00:16:43.952  fused_ordering(867)
00:16:43.952  fused_ordering(868)
00:16:43.952  fused_ordering(869)
00:16:43.952  fused_ordering(870)
00:16:43.952  fused_ordering(871)
00:16:43.952  fused_ordering(872)
00:16:43.952  fused_ordering(873)
00:16:43.952  fused_ordering(874)
00:16:43.952  fused_ordering(875)
00:16:43.952  fused_ordering(876)
00:16:43.952  fused_ordering(877)
00:16:43.952  fused_ordering(878)
00:16:43.952  fused_ordering(879)
00:16:43.952  fused_ordering(880)
00:16:43.952  fused_ordering(881)
00:16:43.952  fused_ordering(882)
00:16:43.952  fused_ordering(883)
00:16:43.952  fused_ordering(884)
00:16:43.952  fused_ordering(885)
00:16:43.952  fused_ordering(886)
00:16:43.952  fused_ordering(887)
00:16:43.952  fused_ordering(888)
00:16:43.952  fused_ordering(889)
00:16:43.952  fused_ordering(890)
00:16:43.952  fused_ordering(891)
00:16:43.952  fused_ordering(892)
00:16:43.952  fused_ordering(893)
00:16:43.952  fused_ordering(894)
00:16:43.952  fused_ordering(895)
00:16:43.952  fused_ordering(896)
00:16:43.952  fused_ordering(897)
00:16:43.952  fused_ordering(898)
00:16:43.952  fused_ordering(899)
00:16:43.952  fused_ordering(900)
00:16:43.952  fused_ordering(901)
00:16:43.952  fused_ordering(902)
00:16:43.952  fused_ordering(903)
00:16:43.952  fused_ordering(904)
00:16:43.952  fused_ordering(905)
00:16:43.952  fused_ordering(906)
00:16:43.952  fused_ordering(907)
00:16:43.952  fused_ordering(908)
00:16:43.952  fused_ordering(909)
00:16:43.952  fused_ordering(910)
00:16:43.952  fused_ordering(911)
00:16:43.952  fused_ordering(912)
00:16:43.952  fused_ordering(913)
00:16:43.952  fused_ordering(914)
00:16:43.952  fused_ordering(915)
00:16:43.952  fused_ordering(916)
00:16:43.952  fused_ordering(917)
00:16:43.952  fused_ordering(918)
00:16:43.952  fused_ordering(919)
00:16:43.952  fused_ordering(920)
00:16:43.952  fused_ordering(921)
00:16:43.952  fused_ordering(922)
00:16:43.952  fused_ordering(923)
00:16:43.952  fused_ordering(924)
00:16:43.952  fused_ordering(925)
00:16:43.952  fused_ordering(926)
00:16:43.952  fused_ordering(927)
00:16:43.952  fused_ordering(928)
00:16:43.952  fused_ordering(929)
00:16:43.952  fused_ordering(930)
00:16:43.952  fused_ordering(931)
00:16:43.952  fused_ordering(932)
00:16:43.952  fused_ordering(933)
00:16:43.952  fused_ordering(934)
00:16:43.952  fused_ordering(935)
00:16:43.952  fused_ordering(936)
00:16:43.952  fused_ordering(937)
00:16:43.952  fused_ordering(938)
00:16:43.952  fused_ordering(939)
00:16:43.952  fused_ordering(940)
00:16:43.952  fused_ordering(941)
00:16:43.952  fused_ordering(942)
00:16:43.952  fused_ordering(943)
00:16:43.952  fused_ordering(944)
00:16:43.952  fused_ordering(945)
00:16:43.952  fused_ordering(946)
00:16:43.952  fused_ordering(947)
00:16:43.952  fused_ordering(948)
00:16:43.952  fused_ordering(949)
00:16:43.952  fused_ordering(950)
00:16:43.952  fused_ordering(951)
00:16:43.952  fused_ordering(952)
00:16:43.952  fused_ordering(953)
00:16:43.952  fused_ordering(954)
00:16:43.952  fused_ordering(955)
00:16:43.952  fused_ordering(956)
00:16:43.952  fused_ordering(957)
00:16:43.952  fused_ordering(958)
00:16:43.952  fused_ordering(959)
00:16:43.952  fused_ordering(960)
00:16:43.952  fused_ordering(961)
00:16:43.952  fused_ordering(962)
00:16:43.952  fused_ordering(963)
00:16:43.952  fused_ordering(964)
00:16:43.952  fused_ordering(965)
00:16:43.952  fused_ordering(966)
00:16:43.952  fused_ordering(967)
00:16:43.952  fused_ordering(968)
00:16:43.952  fused_ordering(969)
00:16:43.952  fused_ordering(970)
00:16:43.952  fused_ordering(971)
00:16:43.952  fused_ordering(972)
00:16:43.952  fused_ordering(973)
00:16:43.952  fused_ordering(974)
00:16:43.952  fused_ordering(975)
00:16:43.952  fused_ordering(976)
00:16:43.952  fused_ordering(977)
00:16:43.952  fused_ordering(978)
00:16:43.952  fused_ordering(979)
00:16:43.952  fused_ordering(980)
00:16:43.952  fused_ordering(981)
00:16:43.952  fused_ordering(982)
00:16:43.952  fused_ordering(983)
00:16:43.952  fused_ordering(984)
00:16:43.952  fused_ordering(985)
00:16:43.952  fused_ordering(986)
00:16:43.952  fused_ordering(987)
00:16:43.952  fused_ordering(988)
00:16:43.952  fused_ordering(989)
00:16:43.952  fused_ordering(990)
00:16:43.952  fused_ordering(991)
00:16:43.952  fused_ordering(992)
00:16:43.952  fused_ordering(993)
00:16:43.952  fused_ordering(994)
00:16:43.952  fused_ordering(995)
00:16:43.952  fused_ordering(996)
00:16:43.952  fused_ordering(997)
00:16:43.952  fused_ordering(998)
00:16:43.952  fused_ordering(999)
00:16:43.952  fused_ordering(1000)
00:16:43.952  fused_ordering(1001)
00:16:43.952  fused_ordering(1002)
00:16:43.952  fused_ordering(1003)
00:16:43.952  fused_ordering(1004)
00:16:43.952  fused_ordering(1005)
00:16:43.952  fused_ordering(1006)
00:16:43.952  fused_ordering(1007)
00:16:43.952  fused_ordering(1008)
00:16:43.952  fused_ordering(1009)
00:16:43.952  fused_ordering(1010)
00:16:43.952  fused_ordering(1011)
00:16:43.952  fused_ordering(1012)
00:16:43.952  fused_ordering(1013)
00:16:43.952  fused_ordering(1014)
00:16:43.952  fused_ordering(1015)
00:16:43.952  fused_ordering(1016)
00:16:43.952  fused_ordering(1017)
00:16:43.952  fused_ordering(1018)
00:16:43.952  fused_ordering(1019)
00:16:43.952  fused_ordering(1020)
00:16:43.952  fused_ordering(1021)
00:16:43.952  fused_ordering(1022)
00:16:43.952  fused_ordering(1023)
00:16:43.952   19:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT
00:16:43.952   19:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini
00:16:43.952   19:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup
00:16:43.952   19:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync
00:16:43.952   19:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:16:43.952   19:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e
00:16:43.952   19:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20}
00:16:43.952   19:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:16:43.952  rmmod nvme_tcp
00:16:43.952  rmmod nvme_fabrics
00:16:43.952  rmmod nvme_keyring
00:16:43.952   19:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:16:43.952   19:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e
00:16:43.952   19:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0
00:16:43.952   19:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 93271 ']'
00:16:43.952   19:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 93271
00:16:43.952   19:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 93271 ']'
00:16:43.952   19:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 93271
00:16:43.952    19:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname
00:16:43.953   19:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:16:43.953    19:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93271
00:16:43.953   19:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:16:43.953   19:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:16:43.953  killing process with pid 93271
00:16:43.953   19:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93271'
00:16:43.953   19:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 93271
00:16:43.953   19:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 93271
00:16:44.211   19:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:16:44.211   19:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:16:44.211   19:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:16:44.211   19:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr
00:16:44.211   19:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save
00:16:44.211   19:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:16:44.211   19:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore
00:16:44.211   19:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:16:44.211   19:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:16:44.211   19:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:16:44.211   19:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:16:44.211   19:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:16:44.211   19:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:16:44.211   19:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:16:44.211   19:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:16:44.211   19:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:16:44.211   19:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:16:44.211   19:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:16:44.211   19:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:16:44.211   19:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:16:44.211   19:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:16:44.211   19:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:16:44.211   19:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@246 -- # remove_spdk_ns
00:16:44.211   19:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:44.211   19:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:16:44.211    19:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:44.211   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@300 -- # return 0
00:16:44.211  
00:16:44.211  real	0m4.016s
00:16:44.211  user	0m4.529s
00:16:44.211  sys	0m1.279s
00:16:44.211   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:44.211  ************************************
00:16:44.211   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:16:44.211  END TEST nvmf_fused_ordering
00:16:44.211  ************************************
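[editor note] The block above is the teardown half of the fused_ordering run: the trap is cleared, nvmftestfini unloads the nvme-tcp/nvme-fabrics initiator modules, kills the target process (pid 93271 in this run), strips only the SPDK-tagged iptables rules, and dismantles the veth/bridge test topology. A minimal sketch of that cleanup order, using the interface and process names visible in the trace (the final namespace deletion is an assumption, since _remove_spdk_ns runs with xtrace disabled here):

    modprobe -r nvme-tcp nvme-fabrics        # unload kernel initiator modules
    kill "$nvmfpid"                          # stop the nvmf_tgt reactor (93271 here)
    wait "$nvmfpid" 2>/dev/null || true      # only meaningful in the shell that started it
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop the SPDK-tagged rules only
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster
        ip link set "$dev" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk         # assumption: _remove_spdk_ns ends by deleting the namespace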
00:16:44.470   19:01:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp
00:16:44.470   19:01:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:16:44.470   19:01:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:44.470   19:01:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:16:44.470  ************************************
00:16:44.470  START TEST nvmf_ns_masking
00:16:44.470  ************************************
00:16:44.470   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp
00:16:44.470  * Looking for test storage...
00:16:44.470  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:16:44.470    19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:16:44.470     19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version
00:16:44.470     19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:16:44.470    19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:16:44.470    19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:16:44.470    19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l
00:16:44.470    19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l
00:16:44.470    19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-:
00:16:44.470    19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1
00:16:44.470    19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-:
00:16:44.470    19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2
00:16:44.470    19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<'
00:16:44.470    19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2
00:16:44.470    19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1
00:16:44.470    19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:16:44.470    19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in
00:16:44.470    19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1
00:16:44.470    19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 ))
00:16:44.470    19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:16:44.470     19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1
00:16:44.470     19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1
00:16:44.470     19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:16:44.470     19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1
00:16:44.470    19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1
00:16:44.470     19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2
00:16:44.470     19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2
00:16:44.470     19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:16:44.470     19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2
00:16:44.470    19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2
00:16:44.470    19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:16:44.470    19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:16:44.470    19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0
00:16:44.470    19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:16:44.470    19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:16:44.470  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:44.470  		--rc genhtml_branch_coverage=1
00:16:44.470  		--rc genhtml_function_coverage=1
00:16:44.470  		--rc genhtml_legend=1
00:16:44.470  		--rc geninfo_all_blocks=1
00:16:44.470  		--rc geninfo_unexecuted_blocks=1
00:16:44.470  		
00:16:44.470  		'
00:16:44.470    19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:16:44.470  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:44.470  		--rc genhtml_branch_coverage=1
00:16:44.470  		--rc genhtml_function_coverage=1
00:16:44.470  		--rc genhtml_legend=1
00:16:44.471  		--rc geninfo_all_blocks=1
00:16:44.471  		--rc geninfo_unexecuted_blocks=1
00:16:44.471  		
00:16:44.471  		'
00:16:44.471    19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:16:44.471  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:44.471  		--rc genhtml_branch_coverage=1
00:16:44.471  		--rc genhtml_function_coverage=1
00:16:44.471  		--rc genhtml_legend=1
00:16:44.471  		--rc geninfo_all_blocks=1
00:16:44.471  		--rc geninfo_unexecuted_blocks=1
00:16:44.471  		
00:16:44.471  		'
00:16:44.471    19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:16:44.471  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:44.471  		--rc genhtml_branch_coverage=1
00:16:44.471  		--rc genhtml_function_coverage=1
00:16:44.471  		--rc genhtml_legend=1
00:16:44.471  		--rc geninfo_all_blocks=1
00:16:44.471  		--rc geninfo_unexecuted_blocks=1
00:16:44.471  		
00:16:44.471  		'
00:16:44.471   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:16:44.471     19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s
00:16:44.471    19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:16:44.471    19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:16:44.471    19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:16:44.471    19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:16:44.471    19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:16:44.471    19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:16:44.471    19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:16:44.471    19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:16:44.471    19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:16:44.471     19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:16:44.471    19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:16:44.471    19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:16:44.471    19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:16:44.471    19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:16:44.471    19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:16:44.471    19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:16:44.471    19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:16:44.471     19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob
00:16:44.471     19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:16:44.471     19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:16:44.471     19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:16:44.471      19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:44.471      19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:44.471      19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:44.471      19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH
00:16:44.471      19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:44.471    19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0
00:16:44.471    19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:16:44.471    19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:16:44.471    19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:16:44.471    19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:16:44.471    19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:16:44.471    19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:16:44.471  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:16:44.471    19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:16:44.471    19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:16:44.471    19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0
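[editor note] The "integer expression expected" line above is not a test failure: nvmf/common.sh line 33 applies an arithmetic test to a variable that is empty in this environment, so `[` prints the complaint and the guarded branch simply falls through. A minimal reproduction of the same shell behaviour (the variable name below is hypothetical; the real one is whatever common.sh tests at line 33):

    unset SPDK_SOME_FLAG                       # hypothetical stand-in variable
    [ "$SPDK_SOME_FLAG" -eq 1 ] && echo enabled
    # the shell prints "[: : integer expression expected", the test returns non-zero,
    # and "enabled" is never echoed -- the script continues normally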
00:16:44.471   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:16:44.471   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock
00:16:44.471   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5
00:16:44.471    19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen
00:16:44.471   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=aa55c511-5b46-4cdc-b5c4-ab86cc9435dd
00:16:44.471    19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen
00:16:44.471   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=6205df57-7264-4c97-97b3-84d152ffe2db
00:16:44.471   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1
00:16:44.471   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1
00:16:44.472   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2
00:16:44.472    19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen
00:16:44.730   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=40b2fdcf-51cd-4742-a445-a60ee7022b25
00:16:44.730   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit
00:16:44.730   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:16:44.730   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:16:44.730   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs
00:16:44.730   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no
00:16:44.730   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns
00:16:44.730   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:44.730   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:16:44.730    19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:44.730   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:16:44.730   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:16:44.730   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:16:44.730   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:16:44.730   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:16:44.730   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@460 -- # nvmf_veth_init
00:16:44.730   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:16:44.730   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:16:44.730   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:16:44.730   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:16:44.730   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:16:44.730   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:16:44.730   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:16:44.730   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:16:44.730   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:16:44.730   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:16:44.730   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:16:44.730   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:16:44.730   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:16:44.730   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:16:44.730   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:16:44.730   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:16:44.730   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:16:44.730  Cannot find device "nvmf_init_br"
00:16:44.730   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@162 -- # true
00:16:44.730   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:16:44.730  Cannot find device "nvmf_init_br2"
00:16:44.730   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@163 -- # true
00:16:44.730   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:16:44.730  Cannot find device "nvmf_tgt_br"
00:16:44.731   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@164 -- # true
00:16:44.731   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:16:44.731  Cannot find device "nvmf_tgt_br2"
00:16:44.731   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@165 -- # true
00:16:44.731   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:16:44.731  Cannot find device "nvmf_init_br"
00:16:44.731   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@166 -- # true
00:16:44.731   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:16:44.731  Cannot find device "nvmf_init_br2"
00:16:44.731   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@167 -- # true
00:16:44.731   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:16:44.731  Cannot find device "nvmf_tgt_br"
00:16:44.731   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@168 -- # true
00:16:44.731   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:16:44.731  Cannot find device "nvmf_tgt_br2"
00:16:44.731   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@169 -- # true
00:16:44.731   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:16:44.731  Cannot find device "nvmf_br"
00:16:44.731   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@170 -- # true
00:16:44.731   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:16:44.731  Cannot find device "nvmf_init_if"
00:16:44.731   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@171 -- # true
00:16:44.731   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:16:44.731  Cannot find device "nvmf_init_if2"
00:16:44.731   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@172 -- # true
00:16:44.731   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:16:44.731  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:16:44.731   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@173 -- # true
00:16:44.731   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:16:44.731  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:16:44.731   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@174 -- # true
00:16:44.731   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:16:44.731   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:16:44.731   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:16:44.731   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:16:44.731   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:16:44.731   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:16:44.731   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:16:44.989   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:16:44.989   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:16:44.989   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:16:44.989   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:16:44.989   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:16:44.989   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:16:44.989   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:16:44.989   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:16:44.989   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:16:44.989   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:16:44.989   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:16:44.990   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:16:44.990   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:16:44.990   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:16:44.990   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:16:44.990   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:16:44.990   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:16:44.990   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:16:44.990   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:16:44.990   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:16:44.990   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:16:44.990   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:16:44.990   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:16:44.990   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:16:44.990   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:16:44.990   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:16:44.990  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:16:44.990  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms
00:16:44.990  
00:16:44.990  --- 10.0.0.3 ping statistics ---
00:16:44.990  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:44.990  rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms
00:16:44.990   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:16:44.990  PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:16:44.990  64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.058 ms
00:16:44.990  
00:16:44.990  --- 10.0.0.4 ping statistics ---
00:16:44.990  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:44.990  rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms
00:16:44.990   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:16:44.990  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:16:44.990  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms
00:16:44.990  
00:16:44.990  --- 10.0.0.1 ping statistics ---
00:16:44.990  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:44.990  rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms
00:16:44.990   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:16:44.990  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:16:44.990  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms
00:16:44.990  
00:16:44.990  --- 10.0.0.2 ping statistics ---
00:16:44.990  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:44.990  rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms
00:16:44.990   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:16:44.990   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@461 -- # return 0
00:16:44.990   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:16:44.990   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:16:44.990   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:16:44.990   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:16:44.990   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:16:44.990   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:16:44.990   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp
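[editor note] Everything from the "Cannot find device ..." messages down to the modprobe above is nvmf_veth_init rebuilding the virtual test network from scratch: the deletions of not-yet-existing devices are expected to fail (hence the harmless error lines), then a target namespace, four veth pairs, a bridge, addresses, iptables accept rules, and bidirectional pings are set up. A condensed sketch using the exact names and addresses from the trace (one iptables rule shown; the trace adds a matching rule for nvmf_init_if2):

    ip netns add nvmf_tgt_ns_spdk
    # veth pairs: the *_if ends carry traffic, the *_br ends get enslaved to the bridge
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator side
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target side
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up
    ip link set nvmf_init_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for p in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$p" up
        ip link set "$p" master nvmf_br
    done
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
    ping -c 1 10.0.0.3                                        # host -> target namespace
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1         # target namespace -> host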
00:16:44.990   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart
00:16:44.990   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:16:44.990   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable
00:16:44.990   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:16:44.990   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:16:44.990   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=93560
00:16:44.990   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 93560
00:16:44.990   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 93560 ']'
00:16:44.990   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:44.990   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100
00:16:44.990  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:44.990   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:44.990   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable
00:16:44.990   19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:16:45.249  [2024-12-13 19:01:16.824717] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:16:45.249  [2024-12-13 19:01:16.824796] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:16:45.249  [2024-12-13 19:01:16.981076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:45.249  [2024-12-13 19:01:17.021986] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:16:45.249  [2024-12-13 19:01:17.022040] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:16:45.249  [2024-12-13 19:01:17.022054] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:16:45.249  [2024-12-13 19:01:17.022064] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:16:45.249  [2024-12-13 19:01:17.022085] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:16:45.249  [2024-12-13 19:01:17.022544] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:16:45.507   19:01:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:16:45.507   19:01:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0
00:16:45.507   19:01:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:16:45.507   19:01:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable
00:16:45.507   19:01:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:16:45.507   19:01:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
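[editor note] nvmfappstart (traced above) launches nvmf_tgt inside the target namespace with shm id 0 and the 0xFFFF tracepoint mask, records its pid (93560 here), and waits until the RPC socket answers before re-arming the cleanup trap. A sketch of the equivalent steps; the polling mechanism of waitforlisten is an assumption, shown here as a loop on the real rpc_get_methods call:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # assumption: poll the default /var/tmp/spdk.sock until the target responds,
    # which is what the harness's waitforlisten amounts to before returning 0
    until "$rpc" rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done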
00:16:45.507   19:01:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:16:45.765  [2024-12-13 19:01:17.473346] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:16:45.765   19:01:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64
00:16:45.765   19:01:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512
00:16:45.765   19:01:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
00:16:46.023  Malloc1
00:16:46.023   19:01:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2
00:16:46.281  Malloc2
00:16:46.539   19:01:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:16:46.797   19:01:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
00:16:47.055   19:01:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:16:47.055  [2024-12-13 19:01:18.865830] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
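[editor note] With the target listening, the ns_masking script provisions it entirely over JSON-RPC: a TCP transport, two 64 MiB malloc bdevs with 512-byte blocks, one subsystem, the first namespace, and a listener on 10.0.0.3:4420. The sequence below is reduced to the rpc.py calls traced above, flags copied verbatim (-a allows any host to connect, which is why visibility is later restricted per namespace rather than per subsystem; -s sets the serial the initiator waits for):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc1
    $rpc bdev_malloc_create 64 512 -b Malloc2
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420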
00:16:47.313   19:01:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect
00:16:47.313   19:01:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 40b2fdcf-51cd-4742-a445-a60ee7022b25 -a 10.0.0.3 -s 4420 -i 4
00:16:47.313   19:01:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME
00:16:47.313   19:01:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0
00:16:47.313   19:01:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:16:47.313   19:01:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:16:47.313   19:01:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2
00:16:49.213   19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:16:49.213    19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:16:49.213    19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:16:49.213   19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:16:49.213   19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:16:49.213   19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0
00:16:49.213    19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json
00:16:49.213    19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
00:16:49.471   19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0
00:16:49.471   19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]]
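[editor note] The connect helper traced above wraps three steps: an `nvme connect` over TCP with a fixed host NQN and host ID, a waitforserial poll that greps `lsblk -o NAME,SERIAL` until the expected number of devices with serial SPDKISFASTANDAWESOME appears, and an `nvme list-subsys | jq` lookup to recover the controller name (nvme0) used by the later list-ns calls. A compact sketch of that flow, with the NQNs, host ID, and jq filter taken from the trace:

    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        -I 40b2fdcf-51cd-4742-a445-a60ee7022b25 -a 10.0.0.3 -s 4420 -i 4
    # poll until the namespace block device shows up (the harness retries ~16 times, 2 s apart)
    until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 1 ]; do
        sleep 2
    done
    ctrl_id=$(nvme list-subsys -o json |
        jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name')
    [ -n "$ctrl_id" ]        # e.g. nvme0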
00:16:49.471   19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1
00:16:49.471   19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:16:49.471   19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:16:49.471  [   0]:0x1
00:16:49.471    19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:16:49.471    19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:16:49.471   19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c2de22b8baf54574a215cfafb9e3e74f
00:16:49.471   19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c2de22b8baf54574a215cfafb9e3e74f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
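[editor note] ns_is_visible (ns_masking.sh lines 43-45 in the trace) decides visibility purely from the host side: the namespace ID should show up in `nvme list-ns`, and `nvme id-ns -o json | jq -r .nguid` must return a non-zero NGUID; a masked namespace identifies with an all-zero NGUID, which is exactly what the NOT-wrapped checks later in this log rely on. A hedged reconstruction of the helper as it appears to run here (the real helper uses the controller discovered by the connect step; /dev/nvme0 is hard-coded below to match this run):

    ns_is_visible() {
        local nsid=$1
        nvme list-ns /dev/nvme0 | grep "$nsid"     # prints e.g. "[   0]:0x1" when present
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
        # a masked namespace reports an all-zero NGUID, so this comparison fails for it
        [[ $nguid != "00000000000000000000000000000000" ]]
    }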
00:16:49.471   19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2
00:16:49.729   19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1
00:16:49.729   19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:16:49.729   19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:16:49.729  [   0]:0x1
00:16:49.729    19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:16:49.729    19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:16:49.729   19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c2de22b8baf54574a215cfafb9e3e74f
00:16:49.729   19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c2de22b8baf54574a215cfafb9e3e74f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:16:49.729   19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2
00:16:49.729   19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:16:49.729   19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:16:49.729  [   1]:0x2
00:16:49.729    19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:16:49.729    19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:16:49.987   19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=66dab05492664645a77bfdb08521cd71
00:16:49.987   19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 66dab05492664645a77bfdb08521cd71 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:16:49.987   19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect
00:16:49.987   19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:16:49.987  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:49.987   19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:50.245   19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
00:16:50.503   19:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1
00:16:50.504   19:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 40b2fdcf-51cd-4742-a445-a60ee7022b25 -a 10.0.0.3 -s 4420 -i 4
00:16:50.504   19:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1
00:16:50.504   19:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0
00:16:50.504   19:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:16:50.504   19:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]]
00:16:50.504   19:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1
00:16:50.504   19:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2
00:16:53.039   19:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:16:53.039    19:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:16:53.039    19:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:16:53.039   19:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:16:53.039   19:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:16:53.039   19:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0
00:16:53.039    19:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json
00:16:53.039    19:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
00:16:53.040   19:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0
00:16:53.040   19:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]]
00:16:53.040   19:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1
00:16:53.040   19:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:16:53.040   19:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1
00:16:53.040   19:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible
00:16:53.040   19:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:53.040    19:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible
00:16:53.040   19:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:53.040   19:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1
00:16:53.040   19:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:16:53.040   19:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:16:53.040    19:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:16:53.040    19:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:16:53.040   19:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:16:53.040   19:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:16:53.040   19:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:16:53.040   19:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:16:53.040   19:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:16:53.040   19:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:16:53.040   19:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2
00:16:53.040   19:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:16:53.040   19:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:16:53.040  [   0]:0x2
00:16:53.040    19:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:16:53.040    19:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:16:53.040   19:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=66dab05492664645a77bfdb08521cd71
00:16:53.040   19:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 66dab05492664645a77bfdb08521cd71 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:16:53.040   19:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:16:53.040   19:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1
00:16:53.040   19:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:16:53.040   19:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:16:53.040  [   0]:0x1
00:16:53.040    19:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:16:53.040    19:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:16:53.040   19:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c2de22b8baf54574a215cfafb9e3e74f
00:16:53.040   19:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c2de22b8baf54574a215cfafb9e3e74f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:16:53.040   19:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2
00:16:53.040   19:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:16:53.040   19:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:16:53.040  [   1]:0x2
00:16:53.040    19:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:16:53.040    19:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:16:53.040   19:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=66dab05492664645a77bfdb08521cd71
00:16:53.040   19:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 66dab05492664645a77bfdb08521cd71 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
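Note: the visibility checks being traced above all go through the same helper (ns_is_visible in target/ns_masking.sh, lines 43-45). A minimal sketch reconstructed from this trace, assuming the controller is /dev/nvme0 as in this run: the NSID must appear in the controller's namespace list and Identify Namespace must report a non-zero NGUID, otherwise the namespace is attached but masked for this host.
  ns_is_visible() {
      local nsid=$1
      # the NSID should show up in the namespace list, e.g. "[   0]:0x1"
      nvme list-ns /dev/nvme0 | grep "$nsid"
      # a masked namespace reports an all-zero NGUID to this host
      local nguid
      nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
      [[ $nguid != "00000000000000000000000000000000" ]]
  }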
00:16:53.040   19:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:16:53.412   19:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1
00:16:53.412   19:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:16:53.412   19:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1
00:16:53.412   19:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible
00:16:53.412   19:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:53.412    19:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible
00:16:53.412   19:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:53.412   19:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1
00:16:53.412   19:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:16:53.412   19:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:16:53.412    19:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:16:53.412    19:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:16:53.412   19:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:16:53.412   19:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:16:53.412   19:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:16:53.412   19:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:16:53.412   19:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:16:53.412   19:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:16:53.412   19:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2
00:16:53.412   19:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:16:53.412   19:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:16:53.412  [   0]:0x2
00:16:53.412    19:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:16:53.412    19:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:16:53.412   19:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=66dab05492664645a77bfdb08521cd71
00:16:53.412   19:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 66dab05492664645a77bfdb08521cd71 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:16:53.412   19:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect
00:16:53.412   19:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:16:53.690  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:53.691   19:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:16:53.948   19:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2
00:16:53.948   19:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 40b2fdcf-51cd-4742-a445-a60ee7022b25 -a 10.0.0.3 -s 4420 -i 4
00:16:53.948   19:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2
00:16:53.948   19:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0
00:16:53.948   19:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:16:53.948   19:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]]
00:16:53.948   19:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2
00:16:53.948   19:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2
00:16:55.847   19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:16:55.847    19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:16:55.847    19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:16:55.847   19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2
00:16:55.847   19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:16:55.847   19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0
00:16:55.847    19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json
00:16:55.847    19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
00:16:56.105   19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0
00:16:56.105   19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]]
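Note: the "connect 2" step above follows the usual connect-and-wait pattern. A sketch using the same addresses, NQNs and serial as this run (the retry loop loosely mirrors waitforserial in autotest_common.sh):
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      -a 10.0.0.3 -s 4420 -i 4
  for ((i = 0; i <= 15; i++)); do
      sleep 2
      # wait until lsblk reports both namespaces carrying the target's serial
      (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) == 2 )) && break
  done
  # resolve which controller name the kernel assigned to this subsystem
  ctrl_id=$(nvme list-subsys -o json |
      jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name')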
00:16:56.105   19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1
00:16:56.105   19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:16:56.105   19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:16:56.105  [   0]:0x1
00:16:56.105    19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:16:56.105    19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:16:56.105   19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c2de22b8baf54574a215cfafb9e3e74f
00:16:56.105   19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c2de22b8baf54574a215cfafb9e3e74f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:16:56.105   19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2
00:16:56.105   19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:16:56.105   19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:16:56.105  [   1]:0x2
00:16:56.105    19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:16:56.105    19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:16:56.106   19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=66dab05492664645a77bfdb08521cd71
00:16:56.106   19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 66dab05492664645a77bfdb08521cd71 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:16:56.106   19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:16:56.363   19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1
00:16:56.363   19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:16:56.363   19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1
00:16:56.363   19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible
00:16:56.363   19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:56.363    19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible
00:16:56.363   19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:56.363   19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1
00:16:56.363   19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:16:56.363   19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:16:56.363    19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:16:56.363    19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:16:56.363   19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:16:56.363   19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:16:56.364   19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:16:56.364   19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:16:56.364   19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:16:56.364   19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:16:56.364   19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2
00:16:56.364   19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:16:56.364   19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:16:56.364  [   0]:0x2
00:16:56.621    19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:16:56.621    19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:16:56.621   19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=66dab05492664645a77bfdb08521cd71
00:16:56.621   19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 66dab05492664645a77bfdb08521cd71 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:16:56.621   19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:16:56.621   19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:16:56.622   19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:16:56.622   19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:16:56.622   19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:56.622    19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:16:56.622   19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:56.622    19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:16:56.622   19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:56.622   19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:16:56.622   19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]]
00:16:56.622   19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:16:56.880  [2024-12-13 19:01:28.515110] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2
00:16:56.880  2024/12/13 19:01:28 error on JSON-RPC call, method: nvmf_ns_remove_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 nsid:2], err: error received for nvmf_ns_remove_host method, err: Code=-32602 Msg=Invalid parameters
00:16:56.880  request:
00:16:56.880  {
00:16:56.880    "method": "nvmf_ns_remove_host",
00:16:56.880    "params": {
00:16:56.880      "nqn": "nqn.2016-06.io.spdk:cnode1",
00:16:56.880      "nsid": 2,
00:16:56.880      "host": "nqn.2016-06.io.spdk:host1"
00:16:56.880    }
00:16:56.880  }
00:16:56.880  Got JSON-RPC error response
00:16:56.880  GoRPCClient: error on JSON-RPC call
00:16:56.880   19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:16:56.880   19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:16:56.880   19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:16:56.880   19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
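Note: the JSON-RPC error above is expected. Namespace 2 was added without masking, so (per this trace) there is no per-host visibility entry for the target to remove and the call fails with -32602 Invalid parameters; the NOT wrapper asserts the non-zero exit. Shown for reference:
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host \
      nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 || echo "rejected as expected"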
00:16:56.880   19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1
00:16:56.880   19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:16:56.880   19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1
00:16:56.880   19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible
00:16:56.880   19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:56.880    19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible
00:16:56.880   19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:56.880   19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1
00:16:56.880   19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:16:56.880   19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:16:56.880    19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:16:56.880    19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:16:56.880   19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:16:56.880   19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:16:56.880   19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:16:56.880   19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:16:56.880   19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:16:56.880   19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:16:56.880   19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2
00:16:56.880   19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:16:56.880   19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:16:56.880  [   0]:0x2
00:16:56.880    19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:16:56.880    19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:16:56.880   19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=66dab05492664645a77bfdb08521cd71
00:16:56.880   19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 66dab05492664645a77bfdb08521cd71 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:16:56.880   19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect
00:16:56.880   19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:16:57.138  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:57.138   19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=93933
00:16:57.138   19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2
00:16:57.138   19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT
00:16:57.138   19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 93933 /var/tmp/host.sock
00:16:57.138   19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 93933 ']'
00:16:57.138   19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock
00:16:57.138   19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100
00:16:57.138   19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...'
00:16:57.138  Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...
00:16:57.138   19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable
00:16:57.138   19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:16:57.138  [2024-12-13 19:01:28.782941] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:16:57.138  [2024-12-13 19:01:28.783043] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93933 ]
00:16:57.138  [2024-12-13 19:01:28.937861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:57.396  [2024-12-13 19:01:28.974466] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:16:57.653   19:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:16:57.653   19:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0
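Note: from this point the test runs a second SPDK application as the "host" side, with its own RPC socket. A sketch using the paths and core mask (0x2, i.e. core 1) from this run:
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 &
  hostpid=$!
  # host-side RPCs are then issued against that socket, e.g.:
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs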
00:16:57.653   19:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:57.911   19:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:16:58.169    19:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid aa55c511-5b46-4cdc-b5c4-ab86cc9435dd
00:16:58.169    19:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d -
00:16:58.169   19:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g AA55C5115B464CDCB5C4AB86CC9435DD -i
00:16:58.169    19:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 6205df57-7264-4c97-97b3-84d152ffe2db
00:16:58.169    19:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d -
00:16:58.169   19:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 6205DF5772644C9797B384D152FFE2DB -i
00:16:58.427   19:01:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:16:58.685   19:01:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2
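Note: the uuid2nguid calls above turn a UUID into the 32-hex-digit NGUID expected by nvmf_subsystem_add_ns. The real helper lives in test/nvmf/common.sh; this is only one way to reproduce the value seen in the trace (upper-case the hex digits and drop the dashes):
  uuid2nguid() {
      local uuid=${1^^}      # aa55c511-... -> AA55C511-...
      echo "${uuid//-/}"     # AA55C5115B464CDCB5C4AB86CC9435DD
  }
  # usage as in this run (the -i flag is passed exactly as the test does):
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 \
      Malloc1 -n 1 -g "$(uuid2nguid aa55c511-5b46-4cdc-b5c4-ab86cc9435dd)" -i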
00:16:58.943   19:01:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
00:16:58.943   19:01:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
00:16:59.509  nvme0n1
00:16:59.509   19:01:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1
00:16:59.509   19:01:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1
00:16:59.509  nvme1n2
00:16:59.767    19:01:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs
00:16:59.767    19:01:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name'
00:16:59.767    19:01:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs
00:16:59.767    19:01:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort
00:16:59.767    19:01:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs
00:16:59.767   19:01:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]]
00:16:59.767    19:01:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1
00:16:59.767    19:01:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid'
00:16:59.767    19:01:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1
00:17:00.334   19:01:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ aa55c511-5b46-4cdc-b5c4-ab86cc9435dd == \a\a\5\5\c\5\1\1\-\5\b\4\6\-\4\c\d\c\-\b\5\c\4\-\a\b\8\6\c\c\9\4\3\5\d\d ]]
00:17:00.334    19:01:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2
00:17:00.334    19:01:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2
00:17:00.334    19:01:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid'
00:17:00.593   19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 6205df57-7264-4c97-97b3-84d152ffe2db == \6\2\0\5\d\f\5\7\-\7\2\6\4\-\4\c\9\7\-\9\7\b\3\-\8\4\d\1\5\2\f\f\e\2\d\b ]]
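Note: the host-side verification above attaches one bdev_nvme controller per host NQN through the host RPC socket and reads the namespace UUIDs back. A condensed sketch with the values used in this run:
  hostrpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }
  hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
  # the resulting bdev should carry the UUID assigned on the target side
  hostrpc bdev_get_bdevs -b nvme0n1 | jq -r '.[].uuid'   # expect aa55c511-5b46-4cdc-b5c4-ab86cc9435dd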
00:17:00.593   19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:17:00.852   19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:17:00.852    19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid aa55c511-5b46-4cdc-b5c4-ab86cc9435dd
00:17:00.852    19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d -
00:17:00.852   19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g AA55C5115B464CDCB5C4AB86CC9435DD
00:17:00.852   19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:17:00.852   19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g AA55C5115B464CDCB5C4AB86CC9435DD
00:17:00.852   19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:17:00.852   19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:00.852    19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:17:00.852   19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:00.852    19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:17:00.852   19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:00.852   19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:17:00.852   19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]]
00:17:00.852   19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g AA55C5115B464CDCB5C4AB86CC9435DD
00:17:01.420  [2024-12-13 19:01:32.944974] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid
00:17:01.420  [2024-12-13 19:01:32.945180] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19
00:17:01.420  [2024-12-13 19:01:32.945201] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:01.420  2024/12/13 19:01:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:invalid hide_metadata:%!s(bool=false) nguid:AA55C5115B464CDCB5C4AB86CC9435DD no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:17:01.420  request:
00:17:01.420  {
00:17:01.420    "method": "nvmf_subsystem_add_ns",
00:17:01.420    "params": {
00:17:01.420      "nqn": "nqn.2016-06.io.spdk:cnode1",
00:17:01.420      "namespace": {
00:17:01.420        "bdev_name": "invalid",
00:17:01.420        "nsid": 1,
00:17:01.420        "nguid": "AA55C5115B464CDCB5C4AB86CC9435DD",
00:17:01.420        "no_auto_visible": false,
00:17:01.420        "hide_metadata": false
00:17:01.420      }
00:17:01.420    }
00:17:01.420  }
00:17:01.420  Got JSON-RPC error response
00:17:01.420  GoRPCClient: error on JSON-RPC call
00:17:01.420   19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:17:01.420   19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:01.420   19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:01.420   19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
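Note: another expected failure. "invalid" names no existing bdev, so the target cannot open it (error=-19) and rejects the RPC, which the NOT wrapper asserts. For reference:
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns \
      nqn.2016-06.io.spdk:cnode1 invalid -n 1 \
      -g AA55C5115B464CDCB5C4AB86CC9435DD || echo "rejected as expected"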
00:17:01.420    19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid aa55c511-5b46-4cdc-b5c4-ab86cc9435dd
00:17:01.420    19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d -
00:17:01.420   19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g AA55C5115B464CDCB5C4AB86CC9435DD -i
00:17:01.679   19:01:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s
00:17:03.581    19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length
00:17:03.581    19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs
00:17:03.581    19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs
00:17:03.838   19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 ))
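Note: the "(( 0 == 0 ))" above is the closing assertion of the masking test. Both namespaces were removed and, per this trace, namespace 1 was re-added without granting host1 visibility again, so the still-connected host-side controllers should enumerate no bdevs at all:
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs | jq length   # expect 0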
00:17:03.838   19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 93933
00:17:03.838   19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 93933 ']'
00:17:03.838   19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 93933
00:17:03.838    19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname
00:17:03.838   19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:03.838    19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93933
00:17:03.838   19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:17:03.838  killing process with pid 93933
00:17:03.838   19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:17:03.838   19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93933'
00:17:03.838   19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 93933
00:17:03.838   19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 93933
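Note: the teardown above goes through the killprocess helper in autotest_common.sh. A simplified sketch of what the trace shows (the real helper also checks whether the process runs under sudo):
  killprocess() {
      local pid=$1
      kill -0 "$pid" || return 0          # already gone
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" 2>/dev/null || true
  }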
00:17:04.405   19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:17:04.405   19:01:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:17:04.405   19:01:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini
00:17:04.405   19:01:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup
00:17:04.405   19:01:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync
00:17:04.664   19:01:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:17:04.664   19:01:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e
00:17:04.664   19:01:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20}
00:17:04.664   19:01:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:17:04.664  rmmod nvme_tcp
00:17:04.664  rmmod nvme_fabrics
00:17:04.664  rmmod nvme_keyring
00:17:04.664   19:01:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:17:04.664   19:01:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e
00:17:04.664   19:01:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0
00:17:04.664   19:01:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 93560 ']'
00:17:04.664   19:01:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 93560
00:17:04.664   19:01:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 93560 ']'
00:17:04.664   19:01:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 93560
00:17:04.664    19:01:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname
00:17:04.664   19:01:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:04.664    19:01:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93560
00:17:04.664   19:01:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:17:04.664  killing process with pid 93560
00:17:04.664   19:01:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:17:04.664   19:01:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93560'
00:17:04.664   19:01:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 93560
00:17:04.664   19:01:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 93560
00:17:04.923   19:01:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:17:04.923   19:01:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:17:04.923   19:01:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:17:04.923   19:01:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr
00:17:04.923   19:01:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save
00:17:04.923   19:01:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore
00:17:04.923   19:01:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:17:04.923   19:01:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:17:04.923   19:01:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:17:04.923   19:01:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:17:04.923   19:01:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:17:04.923   19:01:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:17:04.923   19:01:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:17:04.923   19:01:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:17:04.923   19:01:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:17:04.923   19:01:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:17:04.923   19:01:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:17:04.923   19:01:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:17:04.923   19:01:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:17:04.923   19:01:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:17:04.923   19:01:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:17:04.923   19:01:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:17:05.182   19:01:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@246 -- # remove_spdk_ns
00:17:05.182   19:01:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:17:05.182   19:01:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:17:05.182    19:01:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:17:05.182   19:01:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@300 -- # return 0
00:17:05.182  
00:17:05.182  real	0m20.719s
00:17:05.182  user	0m34.566s
00:17:05.182  sys	0m3.148s
00:17:05.182   19:01:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:05.182   19:01:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:17:05.182  ************************************
00:17:05.182  END TEST nvmf_ns_masking
00:17:05.182  ************************************
00:17:05.182   19:01:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 0 -eq 1 ]]
00:17:05.182   19:01:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]]
00:17:05.182   19:01:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp
00:17:05.182   19:01:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:17:05.182   19:01:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:05.182   19:01:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:17:05.182  ************************************
00:17:05.182  START TEST nvmf_vfio_user
00:17:05.182  ************************************
00:17:05.182   19:01:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp
00:17:05.182  * Looking for test storage...
00:17:05.182  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:17:05.182    19:01:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:17:05.182     19:01:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lcov --version
00:17:05.182     19:01:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:17:05.445    19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:17:05.445    19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:17:05.445    19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l
00:17:05.445    19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l
00:17:05.445    19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-:
00:17:05.445    19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1
00:17:05.445    19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-:
00:17:05.445    19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2
00:17:05.445    19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<'
00:17:05.445    19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2
00:17:05.445    19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1
00:17:05.445    19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:17:05.445    19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in
00:17:05.445    19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1
00:17:05.445    19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 ))
00:17:05.445    19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:17:05.445     19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1
00:17:05.445     19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1
00:17:05.445     19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:17:05.445     19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1
00:17:05.445    19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1
00:17:05.445     19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2
00:17:05.445     19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2
00:17:05.445     19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:17:05.445     19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2
00:17:05.445    19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2
00:17:05.445    19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:17:05.445    19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:17:05.445    19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0
00:17:05.445    19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:17:05.445    19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:17:05.445  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:05.445  		--rc genhtml_branch_coverage=1
00:17:05.445  		--rc genhtml_function_coverage=1
00:17:05.445  		--rc genhtml_legend=1
00:17:05.445  		--rc geninfo_all_blocks=1
00:17:05.445  		--rc geninfo_unexecuted_blocks=1
00:17:05.445  		
00:17:05.445  		'
00:17:05.445    19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:17:05.445  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:05.445  		--rc genhtml_branch_coverage=1
00:17:05.445  		--rc genhtml_function_coverage=1
00:17:05.445  		--rc genhtml_legend=1
00:17:05.445  		--rc geninfo_all_blocks=1
00:17:05.445  		--rc geninfo_unexecuted_blocks=1
00:17:05.445  		
00:17:05.445  		'
00:17:05.445    19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:17:05.445  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:05.445  		--rc genhtml_branch_coverage=1
00:17:05.445  		--rc genhtml_function_coverage=1
00:17:05.445  		--rc genhtml_legend=1
00:17:05.445  		--rc geninfo_all_blocks=1
00:17:05.445  		--rc geninfo_unexecuted_blocks=1
00:17:05.445  		
00:17:05.445  		'
00:17:05.445    19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:17:05.445  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:05.445  		--rc genhtml_branch_coverage=1
00:17:05.445  		--rc genhtml_function_coverage=1
00:17:05.445  		--rc genhtml_legend=1
00:17:05.445  		--rc geninfo_all_blocks=1
00:17:05.445  		--rc geninfo_unexecuted_blocks=1
00:17:05.445  		
00:17:05.445  		'
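Note: the block above is the harness probing the installed lcov version (cmp_versions in scripts/common.sh) before deciding which coverage flags to export. A condensed sketch of the "<" path only, assuming component-wise numeric comparison as the trace suggests; the real helper supports more operators:
  lt() {
      local IFS=.-:
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$2"
      for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
      done
      return 1   # equal versions are not "less than"
  }
  lt 1.15 2 && echo "lcov is older than 2"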
00:17:05.445   19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:17:05.445     19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s
00:17:05.445    19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:17:05.445    19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:17:05.445    19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:17:05.445    19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:17:05.445    19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:17:05.445    19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:17:05.445    19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:17:05.445    19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:17:05.445    19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:17:05.445     19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:17:05.445    19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:17:05.445    19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:17:05.445    19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:17:05.446    19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:17:05.446    19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:17:05.446    19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:17:05.446    19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:17:05.446     19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob
00:17:05.446     19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:17:05.446     19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:17:05.446     19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:17:05.446      19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:05.446      19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:05.446      19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:05.446      19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH
00:17:05.446      19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:05.446    19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0
00:17:05.446    19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:17:05.446    19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:17:05.446    19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:17:05.446    19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:17:05.446    19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:17:05.446    19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:17:05.446  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:17:05.446    19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:17:05.446    19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:17:05.446    19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0
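Note: the "[: : integer expression expected" line above is harmless; nvmf/common.sh line 33 runs test(1) with an empty variable in a numeric comparison. FLAG_NAME below is a hypothetical stand-in for whatever variable that line actually tests; the point is only that defaulting the empty value avoids the warning:
  if [ "${FLAG_NAME:-0}" -eq 1 ]; then
      : # enable the optional behaviour guarded by the flag
  fi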
00:17:05.446   19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64
00:17:05.446   19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512
00:17:05.446   19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2
00:17:05.446   19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:17:05.446   19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER
00:17:05.446   19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER
00:17:05.446   19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user
00:17:05.446   19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' ''
00:17:05.446   19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=
00:17:05.446   19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args=
00:17:05.446   19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=94285
00:17:05.446  Process pid: 94285
00:17:05.446   19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 94285'
00:17:05.446   19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT
00:17:05.446   19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 94285
00:17:05.446   19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]'
00:17:05.446   19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 94285 ']'
00:17:05.446   19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:05.446   19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100
00:17:05.446  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:05.446   19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:05.446   19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable
00:17:05.446   19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x
00:17:05.446  [2024-12-13 19:01:37.119705] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:17:05.446  [2024-12-13 19:01:37.119801] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:17:05.705  [2024-12-13 19:01:37.268431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:17:05.705  [2024-12-13 19:01:37.302882] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:17:05.705  [2024-12-13 19:01:37.302948] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:17:05.705  [2024-12-13 19:01:37.302958] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:17:05.705  [2024-12-13 19:01:37.302965] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:17:05.705  [2024-12-13 19:01:37.302971] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:17:05.705  [2024-12-13 19:01:37.304020] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:17:05.705  [2024-12-13 19:01:37.304143] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:17:05.705  [2024-12-13 19:01:37.305081] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:17:05.705  [2024-12-13 19:01:37.305123] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:17:05.705   19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:17:05.705   19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0
00:17:05.705   19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1
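For orientation, the target startup traced above amounts to launching nvmf_tgt and then waiting for its RPC socket before any configuration is issued; a minimal sketch using only the binary path, flags and socket path visible in this log (the real harness uses the waitforlisten helper, which polls with a retry limit):

  # Launch the NVMe-oF target: shm id 0, all tracepoint groups enabled, cores 0-3.
  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
  nvmfpid=$!
  # Block until the app is listening on its UNIX domain RPC socket.
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done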
00:17:06.640   19:01:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER
00:17:06.898   19:01:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user
00:17:06.898    19:01:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2
00:17:07.156   19:01:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES)
00:17:07.156   19:01:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1
00:17:07.156   19:01:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
00:17:07.414  Malloc1
00:17:07.414   19:01:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
00:17:07.672   19:01:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
00:17:07.930   19:01:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0
00:17:08.188   19:01:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES)
00:17:08.188   19:01:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2
00:17:08.188   19:01:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2
00:17:08.446  Malloc2
00:17:08.446   19:01:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2
00:17:08.704   19:01:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2
00:17:08.963   19:01:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0
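The subsystem configuration that the xtrace lines above step through twice (once per device) condenses to the RPC sequence below; this is a sketch of the setup, not the verbatim script, but every command, path and name is taken from the log:

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc_py nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user
  for i in 1 2; do
      # Each namespace is backed by a 64 MiB malloc bdev with 512-byte blocks.
      mkdir -p "/var/run/vfio-user/domain/vfio-user$i/$i"
      $rpc_py bdev_malloc_create 64 512 -b "Malloc$i"
      $rpc_py nvmf_create_subsystem "nqn.2019-07.io.spdk:cnode$i" -a -s "SPDK$i"
      $rpc_py nvmf_subsystem_add_ns "nqn.2019-07.io.spdk:cnode$i" "Malloc$i"
      $rpc_py nvmf_subsystem_add_listener "nqn.2019-07.io.spdk:cnode$i" -t VFIOUSER -a "/var/run/vfio-user/domain/vfio-user$i/$i" -s 0
  done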
00:17:08.963   19:01:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user
00:17:08.963    19:01:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2
00:17:09.223   19:01:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES)
00:17:09.223   19:01:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1
00:17:09.223   19:01:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1
00:17:09.223   19:01:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci
00:17:09.223  [2024-12-13 19:01:40.808712] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:17:09.223  [2024-12-13 19:01:40.808749] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94408 ]
00:17:09.223  [2024-12-13 19:01:40.956329] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1
00:17:09.223  [2024-12-13 19:01:40.961723] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32
00:17:09.223  [2024-12-13 19:01:40.961772] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f39e32f7000
00:17:09.223  [2024-12-13 19:01:40.962721] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:17:09.223  [2024-12-13 19:01:40.963713] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:17:09.223  [2024-12-13 19:01:40.964715] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:17:09.223  [2024-12-13 19:01:40.965723] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0
00:17:09.223  [2024-12-13 19:01:40.966720] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:17:09.223  [2024-12-13 19:01:40.967724] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:17:09.223  [2024-12-13 19:01:40.968720] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:17:09.223  [2024-12-13 19:01:40.969726] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:17:09.223  [2024-12-13 19:01:40.970730] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32
00:17:09.223  [2024-12-13 19:01:40.970771] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f39e285a000
00:17:09.223  [2024-12-13 19:01:40.972150] vfio_user_pci.c:  65:vfio_add_mr: *DEBUG*: Add memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:17:09.223  [2024-12-13 19:01:40.991807] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully
00:17:09.223  [2024-12-13 19:01:40.991859] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout)
00:17:09.223  [2024-12-13 19:01:40.994815] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff
00:17:09.223  [2024-12-13 19:01:40.994881] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192
00:17:09.223  [2024-12-13 19:01:40.994970] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout)
00:17:09.223  [2024-12-13 19:01:40.994992] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout)
00:17:09.223  [2024-12-13 19:01:40.994998] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout)
00:17:09.223  [2024-12-13 19:01:40.995815] nvme_vfio_user.c:  83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300
00:17:09.223  [2024-12-13 19:01:40.995853] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout)
00:17:09.223  [2024-12-13 19:01:40.995863] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout)
00:17:09.223  [2024-12-13 19:01:40.996818] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff
00:17:09.223  [2024-12-13 19:01:40.996854] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout)
00:17:09.223  [2024-12-13 19:01:40.996864] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms)
00:17:09.223  [2024-12-13 19:01:40.997821] nvme_vfio_user.c:  83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0
00:17:09.223  [2024-12-13 19:01:40.997845] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:17:09.223  [2024-12-13 19:01:40.998826] nvme_vfio_user.c:  83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0
00:17:09.223  [2024-12-13 19:01:40.998863] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0
00:17:09.223  [2024-12-13 19:01:40.998870] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms)
00:17:09.223  [2024-12-13 19:01:40.998879] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:17:09.223  [2024-12-13 19:01:40.998989] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1
00:17:09.223  [2024-12-13 19:01:40.998995] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
00:17:09.223  [2024-12-13 19:01:40.999000] nvme_vfio_user.c:  61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000
00:17:09.223  [2024-12-13 19:01:40.999834] nvme_vfio_user.c:  61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000
00:17:09.223  [2024-12-13 19:01:41.000830] nvme_vfio_user.c:  49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff
00:17:09.223  [2024-12-13 19:01:41.001838] nvme_vfio_user.c:  49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001
00:17:09.223  [2024-12-13 19:01:41.002828] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:17:09.223  [2024-12-13 19:01:41.002924] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:17:09.223  [2024-12-13 19:01:41.003843] nvme_vfio_user.c:  83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1
00:17:09.223  [2024-12-13 19:01:41.003875] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:17:09.223  [2024-12-13 19:01:41.003881] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms)
00:17:09.223  [2024-12-13 19:01:41.003901] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout)
00:17:09.223  [2024-12-13 19:01:41.003912] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms)
00:17:09.223  [2024-12-13 19:01:41.003926] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096
00:17:09.223  [2024-12-13 19:01:41.003932] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000
00:17:09.223  [2024-12-13 19:01:41.003936] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:17:09.223  [2024-12-13 19:01:41.003949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0
00:17:09.224  [2024-12-13 19:01:41.003992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0
00:17:09.224  [2024-12-13 19:01:41.004002] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072
00:17:09.224  [2024-12-13 19:01:41.004007] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072
00:17:09.224  [2024-12-13 19:01:41.004011] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001
00:17:09.224  [2024-12-13 19:01:41.004016] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000
00:17:09.224  [2024-12-13 19:01:41.004021] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1
00:17:09.224  [2024-12-13 19:01:41.004025] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1
00:17:09.224  [2024-12-13 19:01:41.004030] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms)
00:17:09.224  [2024-12-13 19:01:41.004057] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms)
00:17:09.224  [2024-12-13 19:01:41.004070] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0
00:17:09.224  [2024-12-13 19:01:41.004083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0
00:17:09.224  [2024-12-13 19:01:41.004094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:17:09.224  [2024-12-13 19:01:41.004103] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:17:09.224  [2024-12-13 19:01:41.004112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:17:09.224  [2024-12-13 19:01:41.004120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:17:09.224  [2024-12-13 19:01:41.004125] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms)
00:17:09.224  [2024-12-13 19:01:41.004136] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:17:09.224  [2024-12-13 19:01:41.004145] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0
00:17:09.224  [2024-12-13 19:01:41.004156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0
00:17:09.224  [2024-12-13 19:01:41.004162] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms
00:17:09.224  [2024-12-13 19:01:41.004167] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms)
00:17:09.224  [2024-12-13 19:01:41.004175] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms)
00:17:09.224  [2024-12-13 19:01:41.004181] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms)
00:17:09.224  [2024-12-13 19:01:41.004190] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0
00:17:09.224  [2024-12-13 19:01:41.004202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0
00:17:09.224  [2024-12-13 19:01:41.004270] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms)
00:17:09.224  [2024-12-13 19:01:41.004285] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms)
00:17:09.224  [2024-12-13 19:01:41.004294] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096
00:17:09.224  [2024-12-13 19:01:41.004299] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000
00:17:09.224  [2024-12-13 19:01:41.004302] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:17:09.224  [2024-12-13 19:01:41.004309] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0
00:17:09.224  [2024-12-13 19:01:41.004324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0
00:17:09.224  [2024-12-13 19:01:41.004335] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added
00:17:09.224  [2024-12-13 19:01:41.004349] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms)
00:17:09.224  [2024-12-13 19:01:41.004358] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms)
00:17:09.224  [2024-12-13 19:01:41.004366] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096
00:17:09.224  [2024-12-13 19:01:41.004370] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000
00:17:09.224  [2024-12-13 19:01:41.004374] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:17:09.224  [2024-12-13 19:01:41.004380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0
00:17:09.224  [2024-12-13 19:01:41.004401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0
00:17:09.224  [2024-12-13 19:01:41.004415] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms)
00:17:09.224  [2024-12-13 19:01:41.004425] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms)
00:17:09.224  [2024-12-13 19:01:41.004432] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096
00:17:09.224  [2024-12-13 19:01:41.004437] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000
00:17:09.224  [2024-12-13 19:01:41.004440] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:17:09.224  [2024-12-13 19:01:41.004446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0
00:17:09.224  [2024-12-13 19:01:41.004459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0
00:17:09.224  [2024-12-13 19:01:41.004468] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms)
00:17:09.224  [2024-12-13 19:01:41.004476] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms)
00:17:09.224  [2024-12-13 19:01:41.004485] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms)
00:17:09.224  [2024-12-13 19:01:41.004491] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms)
00:17:09.224  [2024-12-13 19:01:41.004496] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms)
00:17:09.224  [2024-12-13 19:01:41.004502] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms)
00:17:09.224  [2024-12-13 19:01:41.004507] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID
00:17:09.224  [2024-12-13 19:01:41.004512] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms)
00:17:09.224  [2024-12-13 19:01:41.004517] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout)
00:17:09.224  [2024-12-13 19:01:41.004544] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0
00:17:09.224  [2024-12-13 19:01:41.004560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0
00:17:09.224  [2024-12-13 19:01:41.004574] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0
00:17:09.224  [2024-12-13 19:01:41.004585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0
00:17:09.224  [2024-12-13 19:01:41.004598] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0
00:17:09.224  [2024-12-13 19:01:41.004616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0
00:17:09.224  [2024-12-13 19:01:41.004628] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0
00:17:09.224  [2024-12-13 19:01:41.004639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0
00:17:09.224  [2024-12-13 19:01:41.004657] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192
00:17:09.224  [2024-12-13 19:01:41.004663] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000
00:17:09.224  [2024-12-13 19:01:41.004667] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000
00:17:09.224  [2024-12-13 19:01:41.004670] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000
00:17:09.224  [2024-12-13 19:01:41.004674] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2
00:17:09.224  [2024-12-13 19:01:41.004680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000
00:17:09.224  [2024-12-13 19:01:41.004687] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512
00:17:09.224  [2024-12-13 19:01:41.004692] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000
00:17:09.224  [2024-12-13 19:01:41.004695] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:17:09.224  [2024-12-13 19:01:41.004701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0
00:17:09.225  [2024-12-13 19:01:41.004708] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512
00:17:09.225  [2024-12-13 19:01:41.004713] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000
00:17:09.225  [2024-12-13 19:01:41.004716] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:17:09.225  [2024-12-13 19:01:41.004722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0
00:17:09.225  [2024-12-13 19:01:41.004729] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096
00:17:09.225  [2024-12-13 19:01:41.004734] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000
00:17:09.225  [2024-12-13 19:01:41.004737] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:17:09.225  [2024-12-13 19:01:41.004743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0
00:17:09.225  =====================================================
00:17:09.225  NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:17:09.225  =====================================================
00:17:09.225  Controller Capabilities/Features
00:17:09.225  ================================
00:17:09.225  Vendor ID:                             4e58
00:17:09.225  Subsystem Vendor ID:                   4e58
00:17:09.225  Serial Number:                         SPDK1
00:17:09.225  Model Number:                          SPDK bdev Controller
00:17:09.225  Firmware Version:                      25.01
00:17:09.225  Recommended Arb Burst:                 6
00:17:09.225  IEEE OUI Identifier:                   8d 6b 50
00:17:09.225  Multi-path I/O
00:17:09.225    May have multiple subsystem ports:   Yes
00:17:09.225    May have multiple controllers:       Yes
00:17:09.225    Associated with SR-IOV VF:           No
00:17:09.225  Max Data Transfer Size:                131072
00:17:09.225  Max Number of Namespaces:              32
00:17:09.225  Max Number of I/O Queues:              127
00:17:09.225  NVMe Specification Version (VS):       1.3
00:17:09.225  NVMe Specification Version (Identify): 1.3
00:17:09.225  Maximum Queue Entries:                 256
00:17:09.225  Contiguous Queues Required:            Yes
00:17:09.225  Arbitration Mechanisms Supported
00:17:09.225    Weighted Round Robin:                Not Supported
00:17:09.225    Vendor Specific:                     Not Supported
00:17:09.225  Reset Timeout:                         15000 ms
00:17:09.225  Doorbell Stride:                       4 bytes
00:17:09.225  NVM Subsystem Reset:                   Not Supported
00:17:09.225  Command Sets Supported
00:17:09.225    NVM Command Set:                     Supported
00:17:09.225  Boot Partition:                        Not Supported
00:17:09.225  Memory Page Size Minimum:              4096 bytes
00:17:09.225  Memory Page Size Maximum:              4096 bytes
00:17:09.225  Persistent Memory Region:              Not Supported
00:17:09.225  Optional Asynchronous Events Supported
00:17:09.225    Namespace Attribute Notices:         Supported
00:17:09.225    Firmware Activation Notices:         Not Supported
00:17:09.225    ANA Change Notices:                  Not Supported
00:17:09.225    PLE Aggregate Log Change Notices:    Not Supported
00:17:09.225    LBA Status Info Alert Notices:       Not Supported
00:17:09.225    EGE Aggregate Log Change Notices:    Not Supported
00:17:09.225    Normal NVM Subsystem Shutdown event: Not Supported
00:17:09.225    Zone Descriptor Change Notices:      Not Supported
00:17:09.225    Discovery Log Change Notices:        Not Supported
00:17:09.225  Controller Attributes
00:17:09.225    128-bit Host Identifier:             Supported
00:17:09.225    Non-Operational Permissive Mode:     Not Supported
00:17:09.225    NVM Sets:                            Not Supported
00:17:09.225    Read Recovery Levels:                Not Supported
00:17:09.225    Endurance Groups:                    Not Supported
00:17:09.225    Predictable Latency Mode:            Not Supported
00:17:09.225    Traffic Based Keep Alive:            Not Supported
00:17:09.225    Namespace Granularity:               Not Supported
00:17:09.225    SQ Associations:                     Not Supported
00:17:09.225    UUID List:                           Not Supported
00:17:09.225    Multi-Domain Subsystem:              Not Supported
00:17:09.225    Fixed Capacity Management:           Not Supported
00:17:09.225    Variable Capacity Management:        Not Supported
00:17:09.225    Delete Endurance Group:              Not Supported
00:17:09.225    Delete NVM Set:                      Not Supported
00:17:09.225    Extended LBA Formats Supported:      Not Supported
00:17:09.225    Flexible Data Placement Supported:   Not Supported
00:17:09.225  
00:17:09.225  Controller Memory Buffer Support
00:17:09.225  ================================
00:17:09.225  Supported:                             No
00:17:09.225  
00:17:09.225  Persistent Memory Region Support
00:17:09.225  ================================
00:17:09.225  Supported:                             No
00:17:09.225  
00:17:09.225  Admin Command Set Attributes
00:17:09.225  ============================
00:17:09.225  Security Send/Receive:                 Not Supported
00:17:09.225  Format NVM:                            Not Supported
00:17:09.225  Firmware Activate/Download:            Not Supported
00:17:09.225  Namespace Management:                  Not Supported
00:17:09.225  Device Self-Test:                      Not Supported
00:17:09.225  Directives:                            Not Supported
00:17:09.225  NVMe-MI:                               Not Supported
00:17:09.225  Virtualization Management:             Not Supported
00:17:09.225  Doorbell Buffer Config:                Not Supported
00:17:09.225  Get LBA Status Capability:             Not Supported
00:17:09.225  Command & Feature Lockdown Capability: Not Supported
00:17:09.225  Abort Command Limit:                   4
00:17:09.225  Async Event Request Limit:             4
00:17:09.225  Number of Firmware Slots:              N/A
00:17:09.225  Firmware Slot 1 Read-Only:             N/A
00:17:09.225  [2024-12-13 19:01:41.004750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0
00:17:09.225  [2024-12-13 19:01:41.004770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0
00:17:09.225  [2024-12-13 19:01:41.004793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0
00:17:09.225  [2024-12-13 19:01:41.004801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0
00:17:09.225  Firmware Activation Without Reset:     N/A
00:17:09.225  Multiple Update Detection Support:     N/A
00:17:09.225  Firmware Update Granularity:           No Information Provided
00:17:09.225  Per-Namespace SMART Log:               No
00:17:09.225  Asymmetric Namespace Access Log Page:  Not Supported
00:17:09.225  Subsystem NQN:                         nqn.2019-07.io.spdk:cnode1
00:17:09.225  Command Effects Log Page:              Supported
00:17:09.225  Get Log Page Extended Data:            Supported
00:17:09.225  Telemetry Log Pages:                   Not Supported
00:17:09.225  Persistent Event Log Pages:            Not Supported
00:17:09.225  Supported Log Pages Log Page:          May Support
00:17:09.225  Commands Supported & Effects Log Page: Not Supported
00:17:09.225  Feature Identifiers & Effects Log Page:May Support
00:17:09.225  NVMe-MI Commands & Effects Log Page:   May Support
00:17:09.225  Data Area 4 for Telemetry Log:         Not Supported
00:17:09.225  Error Log Page Entries Supported:      128
00:17:09.225  Keep Alive:                            Supported
00:17:09.225  Keep Alive Granularity:                10000 ms
00:17:09.225  
00:17:09.225  NVM Command Set Attributes
00:17:09.225  ==========================
00:17:09.225  Submission Queue Entry Size
00:17:09.225    Max:                       64
00:17:09.225    Min:                       64
00:17:09.225  Completion Queue Entry Size
00:17:09.225    Max:                       16
00:17:09.225    Min:                       16
00:17:09.225  Number of Namespaces:        32
00:17:09.225  Compare Command:             Supported
00:17:09.225  Write Uncorrectable Command: Not Supported
00:17:09.225  Dataset Management Command:  Supported
00:17:09.225  Write Zeroes Command:        Supported
00:17:09.225  Set Features Save Field:     Not Supported
00:17:09.225  Reservations:                Not Supported
00:17:09.225  Timestamp:                   Not Supported
00:17:09.225  Copy:                        Supported
00:17:09.225  Volatile Write Cache:        Present
00:17:09.225  Atomic Write Unit (Normal):  1
00:17:09.225  Atomic Write Unit (PFail):   1
00:17:09.225  Atomic Compare & Write Unit: 1
00:17:09.225  Fused Compare & Write:       Supported
00:17:09.225  Scatter-Gather List
00:17:09.226    SGL Command Set:           Supported (Dword aligned)
00:17:09.226    SGL Keyed:                 Not Supported
00:17:09.226    SGL Bit Bucket Descriptor: Not Supported
00:17:09.226    SGL Metadata Pointer:      Not Supported
00:17:09.226    Oversized SGL:             Not Supported
00:17:09.226    SGL Metadata Address:      Not Supported
00:17:09.226    SGL Offset:                Not Supported
00:17:09.226    Transport SGL Data Block:  Not Supported
00:17:09.226  Replay Protected Memory Block:  Not Supported
00:17:09.226  
00:17:09.226  Firmware Slot Information
00:17:09.226  =========================
00:17:09.226  Active slot:                 1
00:17:09.226  Slot 1 Firmware Revision:    25.01
00:17:09.226  
00:17:09.226  
00:17:09.226  Commands Supported and Effects
00:17:09.226  ==============================
00:17:09.226  Admin Commands
00:17:09.226  --------------
00:17:09.226                    Get Log Page (02h): Supported 
00:17:09.226                        Identify (06h): Supported 
00:17:09.226                           Abort (08h): Supported 
00:17:09.226                    Set Features (09h): Supported 
00:17:09.226                    Get Features (0Ah): Supported 
00:17:09.226      Asynchronous Event Request (0Ch): Supported 
00:17:09.226                      Keep Alive (18h): Supported 
00:17:09.226  I/O Commands
00:17:09.226  ------------
00:17:09.226                           Flush (00h): Supported LBA-Change 
00:17:09.226                           Write (01h): Supported LBA-Change 
00:17:09.226                            Read (02h): Supported 
00:17:09.226                         Compare (05h): Supported 
00:17:09.226                    Write Zeroes (08h): Supported LBA-Change 
00:17:09.226              Dataset Management (09h): Supported LBA-Change 
00:17:09.226                            Copy (19h): Supported LBA-Change 
00:17:09.226  
00:17:09.226  Error Log
00:17:09.226  =========
00:17:09.226  
00:17:09.226  Arbitration
00:17:09.226  ===========
00:17:09.226  Arbitration Burst:           1
00:17:09.226  
00:17:09.226  Power Management
00:17:09.226  ================
00:17:09.226  Number of Power States:          1
00:17:09.226  Current Power State:             Power State #0
00:17:09.226  Power State #0:
00:17:09.226    Max Power:                      0.00 W
00:17:09.226    Non-Operational State:         Operational
00:17:09.226    Entry Latency:                 Not Reported
00:17:09.226    Exit Latency:                  Not Reported
00:17:09.226    Relative Read Throughput:      0
00:17:09.226    Relative Read Latency:         0
00:17:09.226    Relative Write Throughput:     0
00:17:09.226    Relative Write Latency:        0
00:17:09.226    Idle Power:                     Not Reported
00:17:09.226    Active Power:                   Not Reported
00:17:09.226  Non-Operational Permissive Mode: Not Supported
00:17:09.226  
00:17:09.226  Health Information
00:17:09.226  ==================
00:17:09.226  Critical Warnings:
00:17:09.226    Available Spare Space:     OK
00:17:09.226    Temperature:               OK
00:17:09.226    Device Reliability:        OK
00:17:09.226    Read Only:                 No
00:17:09.226    Volatile Memory Backup:    OK
00:17:09.226  Current Temperature:         0 Kelvin (-273 Celsius)
00:17:09.226  Temperature Threshold:       0 Kelvin (-273 Celsius)
00:17:09.226  Available Spare:             0%
00:17:09.226  [2024-12-13 19:01:41.004925] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0
00:17:09.226  [2024-12-13 19:01:41.004936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0
00:17:09.226  [2024-12-13 19:01:41.004970] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD
00:17:09.226  [2024-12-13 19:01:41.004981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:09.226  [2024-12-13 19:01:41.004989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:09.226  [2024-12-13 19:01:41.004995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:09.226  [2024-12-13 19:01:41.005002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:09.226  [2024-12-13 19:01:41.007285] nvme_vfio_user.c:  83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001
00:17:09.226  [2024-12-13 19:01:41.007327] nvme_vfio_user.c:  49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001
00:17:09.226  [2024-12-13 19:01:41.007851] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:17:09.226  [2024-12-13 19:01:41.007927] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us
00:17:09.226  [2024-12-13 19:01:41.007934] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms
00:17:09.226  [2024-12-13 19:01:41.008860] nvme_vfio_user.c:  83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9
00:17:09.226  [2024-12-13 19:01:41.008902] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds
00:17:09.226  [2024-12-13 19:01:41.008959] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl
00:17:09.226  [2024-12-13 19:01:41.012248] vfio_user_pci.c:  96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:17:09.484  Available Spare Threshold:   0%
00:17:09.484  Life Percentage Used:        0%
00:17:09.484  Data Units Read:             0
00:17:09.484  Data Units Written:          0
00:17:09.484  Host Read Commands:          0
00:17:09.484  Host Write Commands:         0
00:17:09.484  Controller Busy Time:        0 minutes
00:17:09.484  Power Cycles:                0
00:17:09.484  Power On Hours:              0 hours
00:17:09.484  Unsafe Shutdowns:            0
00:17:09.484  Unrecoverable Media Errors:  0
00:17:09.484  Lifetime Error Log Entries:  0
00:17:09.484  Warning Temperature Time:    0 minutes
00:17:09.484  Critical Temperature Time:   0 minutes
00:17:09.484  
00:17:09.484  Number of Queues
00:17:09.484  ================
00:17:09.484  Number of I/O Submission Queues:      127
00:17:09.484  Number of I/O Completion Queues:      127
00:17:09.484  
00:17:09.484  Active Namespaces
00:17:09.484  =================
00:17:09.484  Namespace ID:1
00:17:09.484  Error Recovery Timeout:                Unlimited
00:17:09.484  Command Set Identifier:                NVM (00h)
00:17:09.484  Deallocate:                            Supported
00:17:09.484  Deallocated/Unwritten Error:           Not Supported
00:17:09.484  Deallocated Read Value:                Unknown
00:17:09.484  Deallocate in Write Zeroes:            Not Supported
00:17:09.484  Deallocated Guard Field:               0xFFFF
00:17:09.484  Flush:                                 Supported
00:17:09.484  Reservation:                           Supported
00:17:09.484  Namespace Sharing Capabilities:        Multiple Controllers
00:17:09.484  Size (in LBAs):                        131072 (0GiB)
00:17:09.484  Capacity (in LBAs):                    131072 (0GiB)
00:17:09.484  Utilization (in LBAs):                 131072 (0GiB)
00:17:09.484  NGUID:                                 3DE11ECF76E84CD18F72BB0C5C1F2E40
00:17:09.484  UUID:                                  3de11ecf-76e8-4cd1-8f72-bb0c5c1f2e40
00:17:09.484  Thin Provisioning:                     Not Supported
00:17:09.484  Per-NS Atomic Units:                   Yes
00:17:09.484    Atomic Boundary Size (Normal):       0
00:17:09.484    Atomic Boundary Size (PFail):        0
00:17:09.484    Atomic Boundary Offset:              0
00:17:09.484  Maximum Single Source Range Length:    65535
00:17:09.484  Maximum Copy Length:                   65535
00:17:09.484  Maximum Source Range Count:            1
00:17:09.484  NGUID/EUI64 Never Reused:              No
00:17:09.484  Namespace Write Protected:             No
00:17:09.484  Number of LBA Formats:                 1
00:17:09.484  Current LBA Format:                    LBA Format #00
00:17:09.484  LBA Format #00: Data Size:   512  Metadata Size:     0
00:17:09.484  
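A quick check on the namespace geometry reported above: 131072 LBAs at 512 bytes per LBA is exactly the 64 MiB Malloc1 bdev created earlier (the identify tool simply rounds that down to 0 GiB in its size columns):

  # 131072 LBAs * 512 B/LBA = 67108864 B = 64 MiB, matching 'bdev_malloc_create 64 512'.
  echo "$((131072 * 512)) bytes, $((131072 * 512 / 1024 / 1024)) MiB"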
00:17:09.484   19:01:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
00:17:09.742  [2024-12-13 19:01:41.341151] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:17:15.009  Initializing NVMe Controllers
00:17:15.009  Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:17:15.009  Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1
00:17:15.009  Initialization complete. Launching workers.
00:17:15.009  ========================================================
00:17:15.009                                                                                                           Latency(us)
00:17:15.009  Device Information                                                   :       IOPS      MiB/s    Average        min        max
00:17:15.009  VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core  1:   38082.47     148.76    3360.94    1065.21   10366.22
00:17:15.009  ========================================================
00:17:15.009  Total                                                                :   38082.47     148.76    3360.94    1065.21   10366.22
00:17:15.009  
00:17:15.009  [2024-12-13 19:01:46.349729] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
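The MiB/s column in the table above follows directly from the reported IOPS and the 4 KiB I/O size passed on the command line (-o 4096); the same arithmetic applies to the write run that follows:

  # 38082.47 IO/s * 4096 B/IO / 2^20 B/MiB ~= 148.76 MiB/s, as printed in the read-run table.
  awk 'BEGIN { printf "%.2f MiB/s\n", 38082.47 * 4096 / 1048576 }'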
00:17:15.009   19:01:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2
00:17:15.009  [2024-12-13 19:01:46.682703] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:17:20.273  Initializing NVMe Controllers
00:17:20.273  Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:17:20.273  Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1
00:17:20.273  Initialization complete. Launching workers.
00:17:20.273  ========================================================
00:17:20.273                                                                                                           Latency(us)
00:17:20.273  Device Information                                                   :       IOPS      MiB/s    Average        min        max
00:17:20.273  VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core  1:   16076.38      62.80    7972.61    4940.19   15718.73
00:17:20.273  ========================================================
00:17:20.273  Total                                                                :   16076.38      62.80    7972.61    4940.19   15718.73
00:17:20.273  
00:17:20.273  [2024-12-13 19:01:51.708136] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:17:20.273   19:01:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /home/vagrant/spdk_repo/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE
00:17:20.273  [2024-12-13 19:01:51.992758] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:17:25.584  [2024-12-13 19:01:57.064593] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:17:25.584  Initializing NVMe Controllers
00:17:25.584  Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:17:25.584  Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:17:25.584  Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1
00:17:25.584  Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2
00:17:25.584  Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3
00:17:25.584  Initialization complete. Launching workers.
00:17:25.584  Starting thread on core 2
00:17:25.584  Starting thread on core 3
00:17:25.584  Starting thread on core 1
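The three worker threads the reconnect example starts line up with its -c 0xE core mask (bits 1-3 set); decoding the mask, purely as an illustration:

  # 0xE = 0b1110 -> cores 1, 2 and 3, hence the three 'Starting thread on core ...' lines above.
  mask=0xE
  for core in 0 1 2 3; do
      if (( (mask >> core) & 1 )); then echo "core $core selected"; fi
  done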
00:17:25.584   19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g
00:17:25.842  [2024-12-13 19:01:57.418015] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:17:29.124  [2024-12-13 19:02:00.468976] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:17:29.124  Initializing NVMe Controllers
00:17:29.124  Attaching to /var/run/vfio-user/domain/vfio-user1/1
00:17:29.124  Attached to /var/run/vfio-user/domain/vfio-user1/1
00:17:29.124  Associating SPDK bdev Controller (SPDK1               ) with lcore 0
00:17:29.124  Associating SPDK bdev Controller (SPDK1               ) with lcore 1
00:17:29.124  Associating SPDK bdev Controller (SPDK1               ) with lcore 2
00:17:29.124  Associating SPDK bdev Controller (SPDK1               ) with lcore 3
00:17:29.124  /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration:
00:17:29.124  /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1
00:17:29.124  Initialization complete. Launching workers.
00:17:29.124  Starting thread on core 1 with urgent priority queue
00:17:29.124  Starting thread on core 2 with urgent priority queue
00:17:29.124  Starting thread on core 3 with urgent priority queue
00:17:29.124  Starting thread on core 0 with urgent priority queue
00:17:29.124  SPDK bdev Controller (SPDK1               ) core 0:  7601.67 IO/s    13.16 secs/100000 ios
00:17:29.124  SPDK bdev Controller (SPDK1               ) core 1:  8493.33 IO/s    11.77 secs/100000 ios
00:17:29.125  SPDK bdev Controller (SPDK1               ) core 2:  7405.33 IO/s    13.50 secs/100000 ios
00:17:29.125  SPDK bdev Controller (SPDK1               ) core 3:  8236.67 IO/s    12.14 secs/100000 ios
00:17:29.125  ========================================================
00:17:29.125  
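In the arbitration summary above, the secs/100000 ios column is simply 100000 I/Os divided by the per-core IO/s rate (the run configuration shows -n 100000); for the core 0 row, for example:

  # 100000 / 7601.67 IO/s ~= 13.16 s, matching the 'core 0' line above.
  awk 'BEGIN { printf "%.2f secs/100000 ios\n", 100000 / 7601.67 }'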
00:17:29.125   19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
00:17:29.125  [2024-12-13 19:02:00.819094] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:17:29.125  Initializing NVMe Controllers
00:17:29.125  Attaching to /var/run/vfio-user/domain/vfio-user1/1
00:17:29.125  Attached to /var/run/vfio-user/domain/vfio-user1/1
00:17:29.125    Namespace ID: 1 size: 0GB
00:17:29.125  Initialization complete.
00:17:29.125  INFO: using host memory buffer for IO
00:17:29.125  Hello world!
00:17:29.125  [2024-12-13 19:02:00.850838] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:17:29.125   19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
00:17:29.383  [2024-12-13 19:02:01.190385] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:17:30.757  Initializing NVMe Controllers
00:17:30.757  Attaching to /var/run/vfio-user/domain/vfio-user1/1
00:17:30.757  Attached to /var/run/vfio-user/domain/vfio-user1/1
00:17:30.757  Initialization complete. Launching workers.
00:17:30.757  submit (in ns)   avg, min, max =   6497.9,   3251.8, 4055918.2
00:17:30.757  complete (in ns) avg, min, max =  24360.7,   1898.2, 7028669.1
00:17:30.757  
00:17:30.757  Submit histogram
00:17:30.757  ================
00:17:30.757         Range in us     Cumulative     Count
00:17:30.757      3.244 -     3.258:    0.0204%  (        3)
00:17:30.757      3.258 -     3.273:    0.0408%  (        3)
00:17:30.757      3.273 -     3.287:    0.2109%  (       25)
00:17:30.757      3.287 -     3.302:    0.9048%  (      102)
00:17:30.757      3.302 -     3.316:    2.4626%  (      229)
00:17:30.757      3.316 -     3.331:    6.1769%  (      546)
00:17:30.757      3.331 -     3.345:   11.0136%  (      711)
00:17:30.757      3.345 -     3.360:   17.0136%  (      882)
00:17:30.757      3.360 -     3.375:   25.3605%  (     1227)
00:17:30.757      3.375 -     3.389:   34.2789%  (     1311)
00:17:30.757      3.389 -     3.404:   44.5306%  (     1507)
00:17:30.757      3.404 -     3.418:   53.2721%  (     1285)
00:17:30.757      3.418 -     3.433:   60.2789%  (     1030)
00:17:30.757      3.433 -     3.447:   64.1293%  (      566)
00:17:30.757      3.447 -     3.462:   67.8163%  (      542)
00:17:30.757      3.462 -     3.476:   70.7823%  (      436)
00:17:30.757      3.476 -     3.491:   73.6395%  (      420)
00:17:30.757      3.491 -     3.505:   75.9320%  (      337)
00:17:30.757      3.505 -     3.520:   77.9524%  (      297)
00:17:30.757      3.520 -     3.535:   79.2925%  (      197)
00:17:30.758      3.535 -     3.549:   80.2789%  (      145)
00:17:30.758      3.549 -     3.564:   81.4218%  (      168)
00:17:30.758      3.564 -     3.578:   82.0544%  (       93)
00:17:30.758      3.578 -     3.593:   82.7891%  (      108)
00:17:30.758      3.593 -     3.607:   83.5102%  (      106)
00:17:30.758      3.607 -     3.622:   84.4218%  (      134)
00:17:30.758      3.622 -     3.636:   84.9728%  (       81)
00:17:30.758      3.636 -     3.651:   85.7755%  (      118)
00:17:30.758      3.651 -     3.665:   86.4830%  (      104)
00:17:30.758      3.665 -     3.680:   87.4626%  (      144)
00:17:30.758      3.680 -     3.695:   88.7279%  (      186)
00:17:30.758      3.695 -     3.709:   89.8707%  (      168)
00:17:30.758      3.709 -     3.724:   91.3810%  (      222)
00:17:30.758      3.724 -     3.753:   93.1905%  (      266)
00:17:30.758      3.753 -     3.782:   94.4014%  (      178)
00:17:30.758      3.782 -     3.811:   95.5034%  (      162)
00:17:30.758      3.811 -     3.840:   96.2993%  (      117)
00:17:30.758      3.840 -     3.869:   96.9728%  (       99)
00:17:30.758      3.869 -     3.898:   97.4218%  (       66)
00:17:30.758      3.898 -     3.927:   97.6531%  (       34)
00:17:30.758      3.927 -     3.956:   97.8435%  (       28)
00:17:30.758      3.956 -     3.985:   98.0408%  (       29)
00:17:30.758      3.985 -     4.015:   98.1973%  (       23)
00:17:30.758      4.015 -     4.044:   98.2925%  (       14)
00:17:30.758      4.044 -     4.073:   98.3605%  (       10)
00:17:30.758      4.073 -     4.102:   98.4762%  (       17)
00:17:30.758      4.102 -     4.131:   98.5238%  (        7)
00:17:30.758      4.131 -     4.160:   98.6190%  (       14)
00:17:30.758      4.160 -     4.189:   98.7007%  (       12)
00:17:30.758      4.189 -     4.218:   98.7415%  (        6)
00:17:30.758      4.218 -     4.247:   98.7551%  (        2)
00:17:30.758      4.247 -     4.276:   98.7823%  (        4)
00:17:30.758      4.276 -     4.305:   98.8163%  (        5)
00:17:30.758      4.305 -     4.335:   98.8367%  (        3)
00:17:30.758      4.335 -     4.364:   98.8503%  (        2)
00:17:30.758      4.364 -     4.393:   98.8571%  (        1)
00:17:30.758      4.393 -     4.422:   98.8776%  (        3)
00:17:30.758      4.422 -     4.451:   98.8912%  (        2)
00:17:30.758      4.451 -     4.480:   98.9048%  (        2)
00:17:30.758      4.480 -     4.509:   98.9184%  (        2)
00:17:30.758      4.509 -     4.538:   98.9252%  (        1)
00:17:30.758      4.538 -     4.567:   98.9320%  (        1)
00:17:30.758      4.567 -     4.596:   98.9388%  (        1)
00:17:30.758      4.625 -     4.655:   98.9456%  (        1)
00:17:30.758      4.655 -     4.684:   98.9592%  (        2)
00:17:30.758      4.771 -     4.800:   98.9660%  (        1)
00:17:30.758      4.887 -     4.916:   98.9728%  (        1)
00:17:30.758      4.975 -     5.004:   98.9864%  (        2)
00:17:30.758      5.091 -     5.120:   98.9932%  (        1)
00:17:30.758      5.120 -     5.149:   99.0000%  (        1)
00:17:30.758      5.382 -     5.411:   99.0068%  (        1)
00:17:30.758      5.818 -     5.847:   99.0136%  (        1)
00:17:30.758      7.185 -     7.215:   99.0204%  (        1)
00:17:30.758      7.564 -     7.622:   99.0272%  (        1)
00:17:30.758      7.622 -     7.680:   99.0340%  (        1)
00:17:30.758      7.913 -     7.971:   99.0476%  (        2)
00:17:30.758      7.971 -     8.029:   99.0544%  (        1)
00:17:30.758      8.029 -     8.087:   99.0612%  (        1)
00:17:30.758      8.087 -     8.145:   99.0680%  (        1)
00:17:30.758      8.204 -     8.262:   99.0816%  (        2)
00:17:30.758      8.262 -     8.320:   99.1020%  (        3)
00:17:30.758      8.320 -     8.378:   99.1156%  (        2)
00:17:30.758      8.378 -     8.436:   99.1224%  (        1)
00:17:30.758      8.436 -     8.495:   99.1293%  (        1)
00:17:30.758      8.553 -     8.611:   99.1361%  (        1)
00:17:30.758      8.611 -     8.669:   99.1429%  (        1)
00:17:30.758      8.669 -     8.727:   99.1497%  (        1)
00:17:30.758      8.727 -     8.785:   99.1565%  (        1)
00:17:30.758      8.785 -     8.844:   99.1701%  (        2)
00:17:30.758      8.844 -     8.902:   99.1837%  (        2)
00:17:30.758      8.902 -     8.960:   99.1905%  (        1)
00:17:30.758      8.960 -     9.018:   99.2041%  (        2)
00:17:30.758      9.018 -     9.076:   99.2313%  (        4)
00:17:30.758      9.135 -     9.193:   99.2517%  (        3)
00:17:30.758      9.193 -     9.251:   99.2789%  (        4)
00:17:30.758      9.251 -     9.309:   99.2993%  (        3)
00:17:30.758      9.309 -     9.367:   99.3061%  (        1)
00:17:30.758      9.367 -     9.425:   99.3129%  (        1)
00:17:30.758      9.425 -     9.484:   99.3197%  (        1)
00:17:30.758      9.484 -     9.542:   99.3333%  (        2)
00:17:30.758      9.600 -     9.658:   99.3401%  (        1)
00:17:30.758      9.658 -     9.716:   99.3469%  (        1)
00:17:30.758      9.775 -     9.833:   99.3537%  (        1)
00:17:30.758     10.182 -    10.240:   99.3605%  (        1)
00:17:30.758     10.356 -    10.415:   99.3673%  (        1)
00:17:30.758     10.647 -    10.705:   99.3741%  (        1)
00:17:30.758     10.705 -    10.764:   99.3810%  (        1)
00:17:30.758     12.276 -    12.335:   99.3878%  (        1)
00:17:30.758     12.858 -    12.916:   99.3946%  (        1)
00:17:30.758     13.091 -    13.149:   99.4014%  (        1)
00:17:30.758     13.789 -    13.847:   99.4082%  (        1)
00:17:30.758     14.429 -    14.487:   99.4150%  (        1)
00:17:30.758     14.895 -    15.011:   99.4218%  (        1)
00:17:30.758     17.687 -    17.804:   99.4354%  (        2)
00:17:30.758     17.804 -    17.920:   99.4762%  (        6)
00:17:30.758     17.920 -    18.036:   99.5306%  (        8)
00:17:30.758     18.036 -    18.153:   99.5442%  (        2)
00:17:30.758     18.153 -    18.269:   99.5918%  (        7)
00:17:30.758     18.269 -    18.385:   99.6122%  (        3)
00:17:30.758     18.385 -    18.502:   99.6599%  (        7)
00:17:30.758     18.502 -    18.618:   99.6735%  (        2)
00:17:30.758     18.618 -    18.735:   99.6939%  (        3)
00:17:30.758     18.735 -    18.851:   99.7075%  (        2)
00:17:30.758     18.851 -    18.967:   99.7143%  (        1)
00:17:30.758     18.967 -    19.084:   99.7279%  (        2)
00:17:30.758     19.084 -    19.200:   99.7551%  (        4)
00:17:30.758     19.200 -    19.316:   99.7959%  (        6)
00:17:30.758     19.316 -    19.433:   99.8163%  (        3)
00:17:30.758     19.433 -    19.549:   99.8299%  (        2)
00:17:30.758     19.549 -    19.665:   99.8367%  (        1)
00:17:30.758     19.665 -    19.782:   99.8571%  (        3)
00:17:30.758     19.782 -    19.898:   99.8776%  (        3)
00:17:30.758     19.898 -    20.015:   99.8912%  (        2)
00:17:30.758     20.247 -    20.364:   99.9116%  (        3)
00:17:30.758     21.644 -    21.760:   99.9184%  (        1)
00:17:30.758     36.538 -    36.771:   99.9252%  (        1)
00:17:30.758   3038.487 -  3053.382:   99.9320%  (        1)
00:17:30.758   3932.160 -  3961.949:   99.9388%  (        1)
00:17:30.758   3991.738 -  4021.527:   99.9932%  (        8)
00:17:30.758   4051.316 -  4081.105:  100.0000%  (        1)
00:17:30.758  
00:17:30.758  Complete histogram
00:17:30.758  ==================
00:17:30.758         Range in us     Cumulative     Count
00:17:30.758      1.891 -     1.905:    0.5782%  (       85)
00:17:30.758      1.905 -     1.920:   22.2517%  (     3186)
00:17:30.758      1.920 -     1.935:   57.6463%  (     5203)
00:17:30.758      1.935 -     1.949:   60.3605%  (      399)
00:17:30.758      1.949 -     1.964:   61.1361%  (      114)
00:17:30.758      1.964 -     1.978:   68.0612%  (     1018)
00:17:30.758      1.978 -     1.993:   83.6531%  (     2292)
00:17:30.758      1.993 -     2.007:   84.4694%  (      120)
00:17:30.758      2.007 -     2.022:   84.8435%  (       55)
00:17:30.758      2.022 -     2.036:   86.0340%  (      175)
00:17:30.758      2.036 -     2.051:   91.2993%  (      774)
00:17:30.758      2.051 -     2.065:   92.2381%  (      138)
00:17:30.758      2.065 -     2.080:   92.4150%  (       26)
00:17:30.758      2.080 -     2.095:   92.6259%  (       31)
00:17:30.758      2.095 -     2.109:   93.7415%  (      164)
00:17:30.758      2.109 -     2.124:   95.5102%  (      260)
00:17:30.758      2.124 -     2.138:   95.6327%  (       18)
00:17:30.758      2.138 -     2.153:   95.6939%  (        9)
00:17:30.758      2.153 -     2.167:   95.7959%  (       15)
00:17:30.758      2.167 -     2.182:   96.2177%  (       62)
00:17:30.758      2.182 -     2.196:   97.0680%  (      125)
00:17:30.758      2.196 -     2.211:   97.1633%  (       14)
00:17:30.758      2.211 -     2.225:   97.2041%  (        6)
00:17:30.758      2.225 -     2.240:   97.2789%  (       11)
00:17:30.758      2.240 -     2.255:   97.4898%  (       31)
00:17:30.758      2.255 -     2.269:   98.3401%  (      125)
00:17:30.758      2.269 -     2.284:   98.4422%  (       15)
00:17:30.758      2.284 -     2.298:   98.4558%  (        2)
00:17:30.758      2.298 -     2.313:   98.4762%  (        3)
00:17:30.758      2.313 -     2.327:   98.5578%  (       12)
00:17:30.758      2.327 -     2.342:   98.5850%  (        4)
00:17:30.758      2.342 -     2.356:   98.5918%  (        1)
00:17:30.758      2.371 -     2.385:   98.5986%  (        1)
00:17:30.758      2.385 -     2.400:   98.6054%  (        1)
00:17:30.758      2.531 -     2.545:   98.6122%  (        1)
00:17:30.758      2.545 -     2.560:   98.6259%  (        2)
00:17:30.758      2.676 -     2.691:   98.6327%  (        1)
00:17:30.758      2.807 -     2.822:   98.6395%  (        1)
00:17:30.758      2.895 -     2.909:   98.6463%  (        1)
00:17:30.758      2.953 -     2.967:   98.6531%  (        1)
00:17:30.758      3.200 -     3.215:   98.6599%  (        1)
00:17:30.758      3.273 -     3.287:   98.6667%  (        1)
00:17:30.758      3.287 -     3.302:   98.6735%  (        1)
00:17:30.758      3.316 -     3.331:   98.6803%  (        1)
00:17:30.758      3.331 -     3.345:   98.6939%  (        2)
00:17:30.758      3.345 -     3.360:   98.7075%  (        2)
00:17:30.758      3.389 -     3.404:   98.7143%  (        1)
00:17:30.758      3.404 -     3.418:   98.7279%  (        2)
00:17:30.758      3.476 -     3.491:   98.7347%  (        1)
00:17:30.758      3.505 -     3.520:   98.7415%  (        1)
00:17:30.758      3.520 -     3.535:   98.7551%  (        2)
00:17:30.758      3.535 -     3.549:   98.7619%  (        1)
00:17:30.758      3.564 -     3.578:   98.7687%  (        1)
00:17:30.758      3.593 -     3.607:   98.7755%  (        1)
00:17:30.758      3.636 -     3.651:   98.7823%  (        1)
00:17:30.758      3.651 -     3.665:   98.7891%  (        1)
00:17:30.758      3.782 -     3.811:   98.8027%  (        2)
00:17:30.758      3.811 -     3.840:   98.8095%  (        1)
00:17:30.758      3.840 -     3.869:   98.8231%  (        2)
00:17:30.758      3.898 -     3.927:   98.8299%  (        1)
00:17:30.758      4.015 -     4.044:   98.8367%  (        1)
00:17:30.758      4.160 -     4.189:   98.8435%  (        1)
00:17:30.758      4.189 -     4.218:   98.8503%  (        1)
00:17:30.758      4.218 -     4.247:   98.8571%  (        1)
00:17:30.758      4.276 -     4.305:   98.8639%  (        1)
00:17:30.758      4.567 -     4.596:   98.8707%  (        1)
00:17:30.758      4.713 -     4.742:   98.8776%  (        1)
00:17:30.758      5.149 -     5.178:   98.8844%  (        1)
00:17:30.758      6.342 -     6.371:   98.8912%  (        1)
00:17:30.758      6.429 -     6.458:   98.8980%  (        1)
00:17:30.759      6.487 -     6.516:   98.9048%  (        1)
00:17:30.759      6.516 -     6.545:   98.9116%  (        1)
00:17:30.759      6.691 -     6.720:   98.9184%  (        1)
00:17:30.759      6.720 -     6.749:   98.9252%  (        1)
00:17:30.759      6.778 -     6.807:   98.9320%  (        1)
00:17:30.759      6.807 -     6.836:   98.9388%  (        1)
00:17:30.759      6.865 -     6.895:   98.9456%  (        1)
00:17:30.759      7.127 -     7.156:   98.9524%  (        1)
00:17:30.759      7.505 -     7.564:   98.9592%  (        1)
00:17:30.759      7.564 -     7.622:   98.9660%  (        1)
00:17:30.759      7.622 -     7.680:   98.9796%  (        2)
00:17:30.759      7.680 -     7.738:   99.0000%  (        3)
00:17:30.759      7.738 -     7.796:   99.0136%  (        2)
00:17:30.759      7.913 -     7.971:   99.0204%  (        1)
00:17:30.759      7.971 -     8.029:   99.0272%  (        1)
00:17:30.759      8.029 -     8.087:   99.0340%  (        1)
00:17:30.759      8.145 -     8.204:   99.0476%  (        2)
00:17:30.759      8.262 -     8.320:   99.0544%  (        1)
00:17:30.759      8.378 -     8.436:   99.0612%  (        1)
00:17:30.759      8.902 -     8.960:   99.0680%  (        1)
00:17:30.759      9.193 -     9.251:   99.0748%  (        1)
00:17:30.759      9.542 -     9.600:   99.0816%  (        1)
00:17:30.759     10.007 -    10.065:   99.0884%  (        1)
00:17:30.759     11.287 -    11.345:   99.0952%  (        1)
00:17:30.759     12.160 -    12.218:   99.1020%  (        1)
00:17:30.759     13.324 -    13.382:   99.1088%  (        1)
00:17:30.759     13.731 -    13.789:   99.1156%  (        1)
00:17:30.759     16.291 -    16.407:   99.1224%  (        1)
00:17:30.759     16.407 -    16.524:   99.1293%  (        1)
00:17:30.759     16.524 -    16.640:   99.1701%  (        6)
00:17:30.759     16.640 -    16.756:   99.1837%  (        2)
00:17:30.759     16.756 -    16.873:   99.2041%  (        3)
00:17:30.759     16.873 -    16.989:   99.2177%  (        2)
00:17:30.759     16.989 -    17.105:   99.2245%  (        1)
00:17:30.759     17.105 -    17.222:   99.2313%  (        1)
00:17:30.759     17.222 -    17.338:   99.2449%  (        2)
00:17:30.759     17.338 -    17.455:   99.2517%  (        1)
00:17:30.759     17.455 -    17.571:   99.2789%  (        4)
00:17:30.759     17.571 -    17.687:   99.3469%  (       10)
00:17:30.759     17.804 -    17.920:   99.3605%  (        2)
00:17:30.759     17.920 -    18.036:   99.3741%  (        2)
00:17:30.759     18.036 -    18.153:   99.3946%  (        3)
00:17:30.759  [2024-12-13 19:02:02.207061] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:17:30.759     18.385 -    18.502:   99.4014%  (        1)
00:17:30.759     18.735 -    18.851:   99.4082%  (        1)
00:17:30.759     19.084 -    19.200:   99.4150%  (        1)
00:17:30.759     29.091 -    29.207:   99.4218%  (        1)
00:17:30.759     51.898 -    52.131:   99.4286%  (        1)
00:17:30.759   3023.593 -  3038.487:   99.4490%  (        3)
00:17:30.759   3038.487 -  3053.382:   99.4966%  (        7)
00:17:30.759   3068.276 -  3083.171:   99.5034%  (        1)
00:17:30.759   3098.065 -  3112.960:   99.5238%  (        3)
00:17:30.759   3932.160 -  3961.949:   99.5442%  (        3)
00:17:30.759   3961.949 -  3991.738:   99.6395%  (       14)
00:17:30.759   3991.738 -  4021.527:   99.8571%  (       32)
00:17:30.759   4021.527 -  4051.316:   99.9728%  (       17)
00:17:30.759   4051.316 -  4081.105:   99.9796%  (        1)
00:17:30.759   4081.105 -  4110.895:   99.9864%  (        1)
00:17:30.759   6047.185 -  6076.975:   99.9932%  (        1)
00:17:30.759   7000.436 -  7030.225:  100.0000%  (        1)
00:17:30.759  
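The two histograms above bucket per-command submit and completion latency in microseconds: the left column is the bucket range, the middle column the cumulative percentage of commands at or below the bucket's upper bound, and the right column the raw count in that bucket. As an illustrative sketch (not part of the overhead tool), an approximate percentile can be read back from such (upper bound, count) pairs:

# Sketch only: approximate a latency percentile from (upper_bound_us, count)
# pairs as printed in the histograms above.
def approx_percentile(buckets, pct):
    """buckets: list of (upper_bound_us, count); pct: e.g. 99.0."""
    total = sum(count for _, count in buckets)
    threshold = total * pct / 100.0
    running = 0
    for upper_us, count in buckets:
        running += count
        if running >= threshold:
            return upper_us          # bucket that crosses the requested percentile
    return buckets[-1][0]

# A handful of buckets from the "Submit histogram" above (subset only):
submit = [(3.360, 882), (3.375, 1227), (3.389, 1311),
          (3.404, 1507), (3.418, 1285), (3.433, 1030)]
print(approx_percentile(submit, 50.0))   # median of this subset: 3.404 us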
00:17:30.759   19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1
00:17:30.759   19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1
00:17:30.759   19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1
00:17:30.759   19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3
00:17:30.759   19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems
00:17:30.759  [
00:17:30.759    {
00:17:30.759      "allow_any_host": true,
00:17:30.759      "hosts": [],
00:17:30.759      "listen_addresses": [],
00:17:30.759      "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:17:30.759      "subtype": "Discovery"
00:17:30.759    },
00:17:30.759    {
00:17:30.759      "allow_any_host": true,
00:17:30.759      "hosts": [],
00:17:30.759      "listen_addresses": [
00:17:30.759        {
00:17:30.759          "adrfam": "IPv4",
00:17:30.759          "traddr": "/var/run/vfio-user/domain/vfio-user1/1",
00:17:30.759          "trsvcid": "0",
00:17:30.759          "trtype": "VFIOUSER"
00:17:30.759        }
00:17:30.759      ],
00:17:30.759      "max_cntlid": 65519,
00:17:30.759      "max_namespaces": 32,
00:17:30.759      "min_cntlid": 1,
00:17:30.759      "model_number": "SPDK bdev Controller",
00:17:30.759      "namespaces": [
00:17:30.759        {
00:17:30.759          "bdev_name": "Malloc1",
00:17:30.759          "name": "Malloc1",
00:17:30.759          "nguid": "3DE11ECF76E84CD18F72BB0C5C1F2E40",
00:17:30.759          "nsid": 1,
00:17:30.759          "uuid": "3de11ecf-76e8-4cd1-8f72-bb0c5c1f2e40"
00:17:30.759        }
00:17:30.759      ],
00:17:30.759      "nqn": "nqn.2019-07.io.spdk:cnode1",
00:17:30.759      "serial_number": "SPDK1",
00:17:30.759      "subtype": "NVMe"
00:17:30.759    },
00:17:30.759    {
00:17:30.759      "allow_any_host": true,
00:17:30.759      "hosts": [],
00:17:30.759      "listen_addresses": [
00:17:30.759        {
00:17:30.759          "adrfam": "IPv4",
00:17:30.759          "traddr": "/var/run/vfio-user/domain/vfio-user2/2",
00:17:30.759          "trsvcid": "0",
00:17:30.759          "trtype": "VFIOUSER"
00:17:30.759        }
00:17:30.759      ],
00:17:30.759      "max_cntlid": 65519,
00:17:30.759      "max_namespaces": 32,
00:17:30.759      "min_cntlid": 1,
00:17:30.759      "model_number": "SPDK bdev Controller",
00:17:30.759      "namespaces": [
00:17:30.759        {
00:17:30.759          "bdev_name": "Malloc2",
00:17:30.759          "name": "Malloc2",
00:17:30.759          "nguid": "668BAEB19D0F4170ABF7F8B52840DA12",
00:17:30.759          "nsid": 1,
00:17:30.759          "uuid": "668baeb1-9d0f-4170-abf7-f8b52840da12"
00:17:30.759        }
00:17:30.759      ],
00:17:30.759      "nqn": "nqn.2019-07.io.spdk:cnode2",
00:17:30.759      "serial_number": "SPDK2",
00:17:30.759      "subtype": "NVMe"
00:17:30.759    }
00:17:30.759  ]
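The JSON above is the nvmf_get_subsystems listing: the discovery subsystem plus one VFIOUSER-listening NVMe subsystem per device, each with its namespaces (bdev name, nsid, NGUID/UUID). A minimal sketch, assuming the listing was redirected to a file named subsystems.json, that summarizes it:

# Sketch only: summarize the nvmf_get_subsystems output shown above, assuming
# it was saved with: scripts/rpc.py nvmf_get_subsystems > subsystems.json
import json

with open("subsystems.json") as f:
    subsystems = json.load(f)

for subsys in subsystems:
    listeners = [f'{l["trtype"]}:{l["traddr"]}' for l in subsys["listen_addresses"]]
    namespaces = [(ns["nsid"], ns["bdev_name"]) for ns in subsys.get("namespaces", [])]
    print(subsys["nqn"], subsys["subtype"], listeners, namespaces)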
00:17:30.759   19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file
00:17:30.759   19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=94659
00:17:30.759   19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r '		trtype:VFIOUSER 		traddr:/var/run/vfio-user/domain/vfio-user1/1 		subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file
00:17:30.759   19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file
00:17:30.759   19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0
00:17:30.759   19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:17:30.759   19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']'
00:17:30.759   19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=1
00:17:30.759   19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1
00:17:31.018   19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:17:31.018   19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']'
00:17:31.018   19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=2
00:17:31.018   19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1
00:17:31.018   19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:17:31.018   19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']'
00:17:31.018   19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=3
00:17:31.018   19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1
00:17:31.018  [2024-12-13 19:02:02.774193] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:17:31.276   19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:17:31.276   19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:17:31.276   19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0
00:17:31.276   19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file
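The trace above is the waitforfile helper polling for /tmp/aer_touch_file, which the aer test app is expected to create once it is ready (its -t option above); the loop checks roughly every 0.1 s for up to 200 attempts, and the file is removed once seen. A sketch of the same polling pattern:

# Sketch only: the waitforfile polling pattern traced above.
import os
import time

def wait_for_file(path, attempts=200, interval=0.1):
    for _ in range(attempts):
        if os.path.exists(path):
            return True
        time.sleep(interval)
    return os.path.exists(path)

if wait_for_file("/tmp/aer_touch_file"):
    os.remove("/tmp/aer_touch_file")    # mirrors the rm -f above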
00:17:31.276   19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3
00:17:31.533  Malloc3
00:17:31.533   19:02:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2
00:17:31.792  [2024-12-13 19:02:03.435082] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:17:31.792   19:02:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems
00:17:31.792  Asynchronous Event Request test
00:17:31.792  Attaching to /var/run/vfio-user/domain/vfio-user1/1
00:17:31.792  Attached to /var/run/vfio-user/domain/vfio-user1/1
00:17:31.792  Registering asynchronous event callbacks...
00:17:31.792  Starting namespace attribute notice tests for all controllers...
00:17:31.792  /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00
00:17:31.792  aer_cb - Changed Namespace
00:17:31.792  Cleaning up...
00:17:32.050  [
00:17:32.050    {
00:17:32.050      "allow_any_host": true,
00:17:32.050      "hosts": [],
00:17:32.050      "listen_addresses": [],
00:17:32.050      "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:17:32.050      "subtype": "Discovery"
00:17:32.050    },
00:17:32.051    {
00:17:32.051      "allow_any_host": true,
00:17:32.051      "hosts": [],
00:17:32.051      "listen_addresses": [
00:17:32.051        {
00:17:32.051          "adrfam": "IPv4",
00:17:32.051          "traddr": "/var/run/vfio-user/domain/vfio-user1/1",
00:17:32.051          "trsvcid": "0",
00:17:32.051          "trtype": "VFIOUSER"
00:17:32.051        }
00:17:32.051      ],
00:17:32.051      "max_cntlid": 65519,
00:17:32.051      "max_namespaces": 32,
00:17:32.051      "min_cntlid": 1,
00:17:32.051      "model_number": "SPDK bdev Controller",
00:17:32.051      "namespaces": [
00:17:32.051        {
00:17:32.051          "bdev_name": "Malloc1",
00:17:32.051          "name": "Malloc1",
00:17:32.051          "nguid": "3DE11ECF76E84CD18F72BB0C5C1F2E40",
00:17:32.051          "nsid": 1,
00:17:32.051          "uuid": "3de11ecf-76e8-4cd1-8f72-bb0c5c1f2e40"
00:17:32.051        },
00:17:32.051        {
00:17:32.051          "bdev_name": "Malloc3",
00:17:32.051          "name": "Malloc3",
00:17:32.051          "nguid": "41989928490841C58258B9852ADE069D",
00:17:32.051          "nsid": 2,
00:17:32.051          "uuid": "41989928-4908-41c5-8258-b9852ade069d"
00:17:32.051        }
00:17:32.051      ],
00:17:32.051      "nqn": "nqn.2019-07.io.spdk:cnode1",
00:17:32.051      "serial_number": "SPDK1",
00:17:32.051      "subtype": "NVMe"
00:17:32.051    },
00:17:32.051    {
00:17:32.051      "allow_any_host": true,
00:17:32.051      "hosts": [],
00:17:32.051      "listen_addresses": [
00:17:32.051        {
00:17:32.051          "adrfam": "IPv4",
00:17:32.051          "traddr": "/var/run/vfio-user/domain/vfio-user2/2",
00:17:32.051          "trsvcid": "0",
00:17:32.051          "trtype": "VFIOUSER"
00:17:32.051        }
00:17:32.051      ],
00:17:32.051      "max_cntlid": 65519,
00:17:32.051      "max_namespaces": 32,
00:17:32.051      "min_cntlid": 1,
00:17:32.051      "model_number": "SPDK bdev Controller",
00:17:32.051      "namespaces": [
00:17:32.051        {
00:17:32.051          "bdev_name": "Malloc2",
00:17:32.051          "name": "Malloc2",
00:17:32.051          "nguid": "668BAEB19D0F4170ABF7F8B52840DA12",
00:17:32.051          "nsid": 1,
00:17:32.051          "uuid": "668baeb1-9d0f-4170-abf7-f8b52840da12"
00:17:32.051        }
00:17:32.051      ],
00:17:32.051      "nqn": "nqn.2019-07.io.spdk:cnode2",
00:17:32.051      "serial_number": "SPDK2",
00:17:32.051      "subtype": "NVMe"
00:17:32.051    }
00:17:32.051  ]
00:17:32.051   19:02:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 94659
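Taken together, this block is the AER check: the aer app attaches to nqn.2019-07.io.spdk:cnode1 and waits for a namespace-attribute notice, the target side creates a Malloc3 bdev and attaches it as namespace 2, the "aer_cb - Changed Namespace" line confirms the event fired, and the second nvmf_get_subsystems listing shows Malloc3 as nsid 2 on cnode1. A sketch of the two target-side RPC calls issued here, with the same arguments as in the trace:

# Sketch only: the two target-side RPC calls traced above, driven from Python.
# The rpc.py path matches this run's environment.
import subprocess

RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"

# Create a 64 MB malloc bdev with a 512-byte block size, named Malloc3 ...
subprocess.run([RPC, "bdev_malloc_create", "64", "512", "--name", "Malloc3"],
               check=True)
# ... and attach it to cnode1 as namespace ID 2, which triggers the AER.
subprocess.run([RPC, "nvmf_subsystem_add_ns", "nqn.2019-07.io.spdk:cnode1",
                "Malloc3", "-n", "2"], check=True)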
00:17:32.051   19:02:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES)
00:17:32.051   19:02:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2
00:17:32.051   19:02:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2
00:17:32.051   19:02:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci
00:17:32.051  [2024-12-13 19:02:03.743330] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:17:32.051  [2024-12-13 19:02:03.743388] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94698 ]
00:17:32.311  [2024-12-13 19:02:03.895353] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2
00:17:32.311  [2024-12-13 19:02:03.903456] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32
00:17:32.311  [2024-12-13 19:02:03.903505] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f9a77751000
00:17:32.311  [2024-12-13 19:02:03.904455] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:17:32.311  [2024-12-13 19:02:03.905462] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:17:32.311  [2024-12-13 19:02:03.906470] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:17:32.311  [2024-12-13 19:02:03.907478] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0
00:17:32.311  [2024-12-13 19:02:03.908481] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:17:32.311  [2024-12-13 19:02:03.909482] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:17:32.311  [2024-12-13 19:02:03.910487] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:17:32.311  [2024-12-13 19:02:03.911489] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:17:32.311  [2024-12-13 19:02:03.912499] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32
00:17:32.311  [2024-12-13 19:02:03.912538] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f9a76931000
00:17:32.311  [2024-12-13 19:02:03.913780] vfio_user_pci.c:  65:vfio_add_mr: *DEBUG*: Add memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:17:32.311  [2024-12-13 19:02:03.927512] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully
00:17:32.311  [2024-12-13 19:02:03.927570] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout)
00:17:32.311  [2024-12-13 19:02:03.932690] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff
00:17:32.311  [2024-12-13 19:02:03.932757] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192
00:17:32.311  [2024-12-13 19:02:03.932843] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout)
00:17:32.311  [2024-12-13 19:02:03.932863] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout)
00:17:32.311  [2024-12-13 19:02:03.932869] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout)
00:17:32.312  [2024-12-13 19:02:03.933700] nvme_vfio_user.c:  83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300
00:17:32.312  [2024-12-13 19:02:03.933727] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout)
00:17:32.312  [2024-12-13 19:02:03.933737] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout)
00:17:32.312  [2024-12-13 19:02:03.934698] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff
00:17:32.312  [2024-12-13 19:02:03.934736] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout)
00:17:32.312  [2024-12-13 19:02:03.934747] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms)
00:17:32.312  [2024-12-13 19:02:03.935700] nvme_vfio_user.c:  83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0
00:17:32.312  [2024-12-13 19:02:03.935739] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:17:32.312  [2024-12-13 19:02:03.936702] nvme_vfio_user.c:  83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0
00:17:32.312  [2024-12-13 19:02:03.936723] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0
00:17:32.312  [2024-12-13 19:02:03.936746] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms)
00:17:32.312  [2024-12-13 19:02:03.936754] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:17:32.312  [2024-12-13 19:02:03.936865] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1
00:17:32.312  [2024-12-13 19:02:03.936871] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
00:17:32.312  [2024-12-13 19:02:03.936876] nvme_vfio_user.c:  61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000
00:17:32.312  [2024-12-13 19:02:03.937710] nvme_vfio_user.c:  61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000
00:17:32.312  [2024-12-13 19:02:03.938723] nvme_vfio_user.c:  49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff
00:17:32.312  [2024-12-13 19:02:03.939714] nvme_vfio_user.c:  49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001
00:17:32.312  [2024-12-13 19:02:03.940708] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller
00:17:32.312  [2024-12-13 19:02:03.940826] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:17:32.312  [2024-12-13 19:02:03.941721] nvme_vfio_user.c:  83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1
00:17:32.312  [2024-12-13 19:02:03.941764] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:17:32.312  [2024-12-13 19:02:03.941771] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms)
00:17:32.312  [2024-12-13 19:02:03.941793] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout)
00:17:32.312  [2024-12-13 19:02:03.941805] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms)
00:17:32.312  [2024-12-13 19:02:03.941833] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096
00:17:32.312  [2024-12-13 19:02:03.941853] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000
00:17:32.312  [2024-12-13 19:02:03.941857] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:17:32.312  [2024-12-13 19:02:03.941869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0
00:17:32.312  [2024-12-13 19:02:03.948236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0
00:17:32.312  [2024-12-13 19:02:03.948261] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072
00:17:32.312  [2024-12-13 19:02:03.948282] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072
00:17:32.312  [2024-12-13 19:02:03.948287] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001
00:17:32.312  [2024-12-13 19:02:03.948292] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000
00:17:32.312  [2024-12-13 19:02:03.948296] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1
00:17:32.312  [2024-12-13 19:02:03.948301] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1
00:17:32.312  [2024-12-13 19:02:03.948306] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms)
00:17:32.312  [2024-12-13 19:02:03.948320] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms)
00:17:32.312  [2024-12-13 19:02:03.948334] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0
00:17:32.312  [2024-12-13 19:02:03.954293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0
00:17:32.312  [2024-12-13 19:02:03.954335] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:17:32.312  [2024-12-13 19:02:03.954345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:17:32.312  [2024-12-13 19:02:03.954353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:17:32.312  [2024-12-13 19:02:03.954361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:17:32.312  [2024-12-13 19:02:03.954366] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms)
00:17:32.312  [2024-12-13 19:02:03.954380] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:17:32.312  [2024-12-13 19:02:03.954390] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0
00:17:32.312  [2024-12-13 19:02:03.962264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0
00:17:32.312  [2024-12-13 19:02:03.962284] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms
00:17:32.312  [2024-12-13 19:02:03.962306] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms)
00:17:32.312  [2024-12-13 19:02:03.962314] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms)
00:17:32.312  [2024-12-13 19:02:03.962320] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms)
00:17:32.312  [2024-12-13 19:02:03.962330] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0
00:17:32.312  [2024-12-13 19:02:03.970260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0
00:17:32.312  [2024-12-13 19:02:03.970343] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms)
00:17:32.312  [2024-12-13 19:02:03.970360] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms)
00:17:32.312  [2024-12-13 19:02:03.970369] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096
00:17:32.312  [2024-12-13 19:02:03.970374] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000
00:17:32.312  [2024-12-13 19:02:03.970378] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:17:32.312  [2024-12-13 19:02:03.970384] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0
00:17:32.312  [2024-12-13 19:02:03.977234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0
00:17:32.312  [2024-12-13 19:02:03.977272] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added
00:17:32.312  [2024-12-13 19:02:03.977288] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms)
00:17:32.312  [2024-12-13 19:02:03.977298] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms)
00:17:32.312  [2024-12-13 19:02:03.977306] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096
00:17:32.312  [2024-12-13 19:02:03.977310] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000
00:17:32.312  [2024-12-13 19:02:03.977314] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:17:32.312  [2024-12-13 19:02:03.977320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0
00:17:32.312  [2024-12-13 19:02:03.985233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0
00:17:32.312  [2024-12-13 19:02:03.985278] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms)
00:17:32.312  [2024-12-13 19:02:03.985290] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms)
00:17:32.312  [2024-12-13 19:02:03.985299] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096
00:17:32.312  [2024-12-13 19:02:03.985304] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000
00:17:32.312  [2024-12-13 19:02:03.985307] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:17:32.312  [2024-12-13 19:02:03.985314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0
00:17:32.312  [2024-12-13 19:02:03.992262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0
00:17:32.312  [2024-12-13 19:02:03.992284] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms)
00:17:32.312  [2024-12-13 19:02:03.992309] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms)
00:17:32.312  [2024-12-13 19:02:03.992320] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms)
00:17:32.312  [2024-12-13 19:02:03.992326] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms)
00:17:32.313  [2024-12-13 19:02:03.992332] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms)
00:17:32.313  [2024-12-13 19:02:03.992337] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms)
00:17:32.313  [2024-12-13 19:02:03.992342] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID
00:17:32.313  [2024-12-13 19:02:03.992346] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms)
00:17:32.313  [2024-12-13 19:02:03.992351] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout)
00:17:32.313  [2024-12-13 19:02:03.992369] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0
00:17:32.313  [2024-12-13 19:02:04.000230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0
00:17:32.313  [2024-12-13 19:02:04.000275] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0
00:17:32.313  [2024-12-13 19:02:04.008234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0
00:17:32.313  [2024-12-13 19:02:04.008280] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0
00:17:32.313  [2024-12-13 19:02:04.018278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0
00:17:32.313  [2024-12-13 19:02:04.018321] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0
00:17:32.313  [2024-12-13 19:02:04.026262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0
00:17:32.313  [2024-12-13 19:02:04.026311] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192
00:17:32.313  [2024-12-13 19:02:04.026318] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000
00:17:32.313  [2024-12-13 19:02:04.026321] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000
00:17:32.313  [2024-12-13 19:02:04.026325] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000
00:17:32.313  [2024-12-13 19:02:04.026328] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2
00:17:32.313  [2024-12-13 19:02:04.026335] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000
00:17:32.313  [2024-12-13 19:02:04.026342] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512
00:17:32.313  [2024-12-13 19:02:04.026346] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000
00:17:32.313  [2024-12-13 19:02:04.026350] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:17:32.313  [2024-12-13 19:02:04.026355] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0
00:17:32.313  [2024-12-13 19:02:04.026362] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512
00:17:32.313  [2024-12-13 19:02:04.026366] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000
00:17:32.313  [2024-12-13 19:02:04.026369] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:17:32.313  [2024-12-13 19:02:04.026375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0
00:17:32.313  [2024-12-13 19:02:04.026382] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096
00:17:32.313  [2024-12-13 19:02:04.026386] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000
00:17:32.313  [2024-12-13 19:02:04.026389] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:17:32.313  [2024-12-13 19:02:04.026395] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0
00:17:32.313  [2024-12-13 19:02:04.034265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0
00:17:32.313  [2024-12-13 19:02:04.034312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0
00:17:32.313  [2024-12-13 19:02:04.034326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0
00:17:32.313  [2024-12-13 19:02:04.034333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0
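The debug trace above is the vfio-user controller bring-up performed by spdk_nvme_identify: map the BARs, read VS and CAP, confirm CC.EN = 0 and CSTS.RDY = 0, program the admin queue registers (ASQ at 0x28, ACQ at 0x30, AQA at 0x24), set CC.EN = 1 (offset 0x14) and wait for CSTS.RDY = 1 (offset 0x1c), then walk the Identify, AER-configuration, keep-alive, queue-count and log-page commands. A rough sketch of that enable handshake, with read_reg/write_reg as hypothetical stand-ins for the vfio-user register accessors:

# Rough sketch of the enable handshake logged above; read_reg/write_reg are
# hypothetical accessors, offsets and values are the ones in the trace.
def enable_controller(read_reg, write_reg, asq_addr, acq_addr):
    if read_reg(0x14) & 0x1:                 # CC.EN already set: disable first
        write_reg(0x14, read_reg(0x14) & ~0x1)
        while read_reg(0x1c) & 0x1:          # wait for CSTS.RDY = 0
            pass
    write_reg(0x28, asq_addr)                # admin submission queue base
    write_reg(0x30, acq_addr)                # admin completion queue base
    write_reg(0x24, 0x00FF00FF)              # AQA: admin queue sizes (as logged)
    write_reg(0x14, 0x00460001)              # CC: EN=1 plus IOSQES/IOCQES (as logged)
    while not (read_reg(0x1c) & 0x1):        # wait for CSTS.RDY = 1
        pass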
00:17:32.313  =====================================================
00:17:32.313  NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2
00:17:32.313  =====================================================
00:17:32.313  Controller Capabilities/Features
00:17:32.313  ================================
00:17:32.313  Vendor ID:                             4e58
00:17:32.313  Subsystem Vendor ID:                   4e58
00:17:32.313  Serial Number:                         SPDK2
00:17:32.313  Model Number:                          SPDK bdev Controller
00:17:32.313  Firmware Version:                      25.01
00:17:32.313  Recommended Arb Burst:                 6
00:17:32.313  IEEE OUI Identifier:                   8d 6b 50
00:17:32.313  Multi-path I/O
00:17:32.313    May have multiple subsystem ports:   Yes
00:17:32.313    May have multiple controllers:       Yes
00:17:32.313    Associated with SR-IOV VF:           No
00:17:32.313  Max Data Transfer Size:                131072
00:17:32.313  Max Number of Namespaces:              32
00:17:32.313  Max Number of I/O Queues:              127
00:17:32.313  NVMe Specification Version (VS):       1.3
00:17:32.313  NVMe Specification Version (Identify): 1.3
00:17:32.313  Maximum Queue Entries:                 256
00:17:32.313  Contiguous Queues Required:            Yes
00:17:32.313  Arbitration Mechanisms Supported
00:17:32.313    Weighted Round Robin:                Not Supported
00:17:32.313    Vendor Specific:                     Not Supported
00:17:32.313  Reset Timeout:                         15000 ms
00:17:32.313  Doorbell Stride:                       4 bytes
00:17:32.313  NVM Subsystem Reset:                   Not Supported
00:17:32.313  Command Sets Supported
00:17:32.313    NVM Command Set:                     Supported
00:17:32.313  Boot Partition:                        Not Supported
00:17:32.313  Memory Page Size Minimum:              4096 bytes
00:17:32.313  Memory Page Size Maximum:              4096 bytes
00:17:32.313  Persistent Memory Region:              Not Supported
00:17:32.313  Optional Asynchronous Events Supported
00:17:32.313    Namespace Attribute Notices:         Supported
00:17:32.313    Firmware Activation Notices:         Not Supported
00:17:32.313    ANA Change Notices:                  Not Supported
00:17:32.313    PLE Aggregate Log Change Notices:    Not Supported
00:17:32.313    LBA Status Info Alert Notices:       Not Supported
00:17:32.313    EGE Aggregate Log Change Notices:    Not Supported
00:17:32.313    Normal NVM Subsystem Shutdown event: Not Supported
00:17:32.313    Zone Descriptor Change Notices:      Not Supported
00:17:32.313    Discovery Log Change Notices:        Not Supported
00:17:32.313  Controller Attributes
00:17:32.313    128-bit Host Identifier:             Supported
00:17:32.313    Non-Operational Permissive Mode:     Not Supported
00:17:32.313    NVM Sets:                            Not Supported
00:17:32.313    Read Recovery Levels:                Not Supported
00:17:32.313    Endurance Groups:                    Not Supported
00:17:32.313    Predictable Latency Mode:            Not Supported
00:17:32.313    Traffic Based Keep ALive:            Not Supported
00:17:32.313    Namespace Granularity:               Not Supported
00:17:32.313    SQ Associations:                     Not Supported
00:17:32.313    UUID List:                           Not Supported
00:17:32.313    Multi-Domain Subsystem:              Not Supported
00:17:32.313    Fixed Capacity Management:           Not Supported
00:17:32.313    Variable Capacity Management:        Not Supported
00:17:32.313    Delete Endurance Group:              Not Supported
00:17:32.313    Delete NVM Set:                      Not Supported
00:17:32.313    Extended LBA Formats Supported:      Not Supported
00:17:32.313    Flexible Data Placement Supported:   Not Supported
00:17:32.313  
00:17:32.313  Controller Memory Buffer Support
00:17:32.313  ================================
00:17:32.313  Supported:                             No
00:17:32.313  
00:17:32.313  Persistent Memory Region Support
00:17:32.313  ================================
00:17:32.313  Supported:                             No
00:17:32.313  
00:17:32.313  Admin Command Set Attributes
00:17:32.313  ============================
00:17:32.313  Security Send/Receive:                 Not Supported
00:17:32.313  Format NVM:                            Not Supported
00:17:32.313  Firmware Activate/Download:            Not Supported
00:17:32.313  Namespace Management:                  Not Supported
00:17:32.313  Device Self-Test:                      Not Supported
00:17:32.313  Directives:                            Not Supported
00:17:32.313  NVMe-MI:                               Not Supported
00:17:32.313  Virtualization Management:             Not Supported
00:17:32.313  Doorbell Buffer Config:                Not Supported
00:17:32.313  Get LBA Status Capability:             Not Supported
00:17:32.313  Command & Feature Lockdown Capability: Not Supported
00:17:32.313  Abort Command Limit:                   4
00:17:32.313  Async Event Request Limit:             4
00:17:32.313  Number of Firmware Slots:              N/A
00:17:32.313  Firmware Slot 1 Read-Only:             N/A
00:17:32.313  Firmware Activation Without Reset:     N/A
00:17:32.313  Multiple Update Detection Support:     N/A
00:17:32.313  Firmware Update Granularity:           No Information Provided
00:17:32.313  Per-Namespace SMART Log:               No
00:17:32.313  Asymmetric Namespace Access Log Page:  Not Supported
00:17:32.313  Subsystem NQN:                         nqn.2019-07.io.spdk:cnode2
00:17:32.313  Command Effects Log Page:              Supported
00:17:32.313  Get Log Page Extended Data:            Supported
00:17:32.313  Telemetry Log Pages:                   Not Supported
00:17:32.313  Persistent Event Log Pages:            Not Supported
00:17:32.313  Supported Log Pages Log Page:          May Support
00:17:32.313  Commands Supported & Effects Log Page: Not Supported
00:17:32.313  Feature Identifiers & Effects Log Page:May Support
00:17:32.313  NVMe-MI Commands & Effects Log Page:   May Support
00:17:32.313  Data Area 4 for Telemetry Log:         Not Supported
00:17:32.313  Error Log Page Entries Supported:      128
00:17:32.313  Keep Alive:                            Supported
00:17:32.313  Keep Alive Granularity:                10000 ms
00:17:32.313  
00:17:32.313  NVM Command Set Attributes
00:17:32.313  ==========================
00:17:32.313  Submission Queue Entry Size
00:17:32.313    Max:                       64
00:17:32.313    Min:                       64
00:17:32.313  Completion Queue Entry Size
00:17:32.313    Max:                       16
00:17:32.313    Min:                       16
00:17:32.313  Number of Namespaces:        32
00:17:32.313  Compare Command:             Supported
00:17:32.314  Write Uncorrectable Command: Not Supported
00:17:32.314  Dataset Management Command:  Supported
00:17:32.314  Write Zeroes Command:        Supported
00:17:32.314  Set Features Save Field:     Not Supported
00:17:32.314  Reservations:                Not Supported
00:17:32.314  Timestamp:                   Not Supported
00:17:32.314  Copy:                        Supported
00:17:32.314  Volatile Write Cache:        Present
00:17:32.314  Atomic Write Unit (Normal):  1
00:17:32.314  Atomic Write Unit (PFail):   1
00:17:32.314  Atomic Compare & Write Unit: 1
00:17:32.314  Fused Compare & Write:       Supported
00:17:32.314  Scatter-Gather List
00:17:32.314    SGL Command Set:           Supported (Dword aligned)
00:17:32.314    SGL Keyed:                 Not Supported
00:17:32.314    SGL Bit Bucket Descriptor: Not Supported
00:17:32.314    SGL Metadata Pointer:      Not Supported
00:17:32.314    Oversized SGL:             Not Supported
00:17:32.314    SGL Metadata Address:      Not Supported
00:17:32.314    SGL Offset:                Not Supported
00:17:32.314    Transport SGL Data Block:  Not Supported
00:17:32.314  Replay Protected Memory Block:  Not Supported
00:17:32.314  
00:17:32.314  Firmware Slot Information
00:17:32.314  =========================
00:17:32.314  Active slot:                 1
00:17:32.314  Slot 1 Firmware Revision:    25.01
00:17:32.314  
00:17:32.314  
00:17:32.314  Commands Supported and Effects
00:17:32.314  ==============================
00:17:32.314  Admin Commands
00:17:32.314  --------------
00:17:32.314                    Get Log Page (02h): Supported 
00:17:32.314                        Identify (06h): Supported 
00:17:32.314                           Abort (08h): Supported 
00:17:32.314                    Set Features (09h): Supported 
00:17:32.314                    Get Features (0Ah): Supported 
00:17:32.314      Asynchronous Event Request (0Ch): Supported 
00:17:32.314                      Keep Alive (18h): Supported 
00:17:32.314  I/O Commands
00:17:32.314  ------------
00:17:32.314                           Flush (00h): Supported LBA-Change 
00:17:32.314                           Write (01h): Supported LBA-Change 
00:17:32.314                            Read (02h): Supported 
00:17:32.314                         Compare (05h): Supported 
00:17:32.314                    Write Zeroes (08h): Supported LBA-Change 
00:17:32.314              Dataset Management (09h): Supported LBA-Change 
00:17:32.314                            Copy (19h): Supported LBA-Change 
00:17:32.314  
00:17:32.314  Error Log
00:17:32.314  =========
00:17:32.314  
00:17:32.314  Arbitration
00:17:32.314  ===========
00:17:32.314  Arbitration Burst:           1
00:17:32.314  
00:17:32.314  Power Management
00:17:32.314  ================
00:17:32.314  Number of Power States:          1
00:17:32.314  Current Power State:             Power State #0
00:17:32.314  Power State #0:
00:17:32.314    Max Power:                      0.00 W
00:17:32.314    Non-Operational State:         Operational
00:17:32.314    Entry Latency:                 Not Reported
00:17:32.314    Exit Latency:                  Not Reported
00:17:32.314    Relative Read Throughput:      0
00:17:32.314    Relative Read Latency:         0
00:17:32.314    Relative Write Throughput:     0
00:17:32.314    Relative Write Latency:        0
00:17:32.314    Idle Power:                     Not Reported
00:17:32.314    Active Power:                   Not Reported
00:17:32.314  Non-Operational Permissive Mode: Not Supported
00:17:32.314  
00:17:32.314  Health Information
00:17:32.314  ==================
00:17:32.314  Critical Warnings:
00:17:32.314    Available Spare Space:     OK
00:17:32.314    Temperature:               OK
00:17:32.314    Device Reliability:        OK
00:17:32.314    Read Only:                 No
00:17:32.314    Volatile Memory Backup:    OK
00:17:32.314  Current Temperature:         0 Kelvin (-273 Celsius)
00:17:32.314  Temperature Threshold:       0 Kelvin (-273 Celsius)
00:17:32.314  Available Spare:             0%
00:17:32.314  Available Spare Threshold:   0%
00:17:32.314  [2024-12-13 19:02:04.034432] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0
00:17:32.314  [2024-12-13 19:02:04.042261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0
00:17:32.314  [2024-12-13 19:02:04.042327] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD
00:17:32.314  [2024-12-13 19:02:04.042340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:32.314  [2024-12-13 19:02:04.042347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:32.314  [2024-12-13 19:02:04.042354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:32.314  [2024-12-13 19:02:04.042360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:32.314  [2024-12-13 19:02:04.042437] nvme_vfio_user.c:  83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001
00:17:32.314  [2024-12-13 19:02:04.042452] nvme_vfio_user.c:  49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001
00:17:32.314  [2024-12-13 19:02:04.043430] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
00:17:32.314  [2024-12-13 19:02:04.043541] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us
00:17:32.314  [2024-12-13 19:02:04.043552] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms
00:17:32.314  [2024-12-13 19:02:04.044451] nvme_vfio_user.c:  83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9
00:17:32.314  [2024-12-13 19:02:04.044495] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds
00:17:32.314  [2024-12-13 19:02:04.044552] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl
00:17:32.314  [2024-12-13 19:02:04.045833] vfio_user_pci.c:  96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:17:32.314  Life Percentage Used:        0%
00:17:32.314  Data Units Read:             0
00:17:32.314  Data Units Written:          0
00:17:32.314  Host Read Commands:          0
00:17:32.314  Host Write Commands:         0
00:17:32.314  Controller Busy Time:        0 minutes
00:17:32.314  Power Cycles:                0
00:17:32.314  Power On Hours:              0 hours
00:17:32.314  Unsafe Shutdowns:            0
00:17:32.314  Unrecoverable Media Errors:  0
00:17:32.314  Lifetime Error Log Entries:  0
00:17:32.314  Warning Temperature Time:    0 minutes
00:17:32.314  Critical Temperature Time:   0 minutes
00:17:32.314  
00:17:32.314  Number of Queues
00:17:32.314  ================
00:17:32.314  Number of I/O Submission Queues:      127
00:17:32.314  Number of I/O Completion Queues:      127
00:17:32.314  
00:17:32.314  Active Namespaces
00:17:32.314  =================
00:17:32.314  Namespace ID:1
00:17:32.314  Error Recovery Timeout:                Unlimited
00:17:32.314  Command Set Identifier:                NVM (00h)
00:17:32.314  Deallocate:                            Supported
00:17:32.314  Deallocated/Unwritten Error:           Not Supported
00:17:32.314  Deallocated Read Value:                Unknown
00:17:32.314  Deallocate in Write Zeroes:            Not Supported
00:17:32.314  Deallocated Guard Field:               0xFFFF
00:17:32.314  Flush:                                 Supported
00:17:32.314  Reservation:                           Supported
00:17:32.314  Namespace Sharing Capabilities:        Multiple Controllers
00:17:32.314  Size (in LBAs):                        131072 (0GiB)
00:17:32.314  Capacity (in LBAs):                    131072 (0GiB)
00:17:32.314  Utilization (in LBAs):                 131072 (0GiB)
00:17:32.314  NGUID:                                 668BAEB19D0F4170ABF7F8B52840DA12
00:17:32.314  UUID:                                  668baeb1-9d0f-4170-abf7-f8b52840da12
00:17:32.314  Thin Provisioning:                     Not Supported
00:17:32.314  Per-NS Atomic Units:                   Yes
00:17:32.314    Atomic Boundary Size (Normal):       0
00:17:32.314    Atomic Boundary Size (PFail):        0
00:17:32.314    Atomic Boundary Offset:              0
00:17:32.314  Maximum Single Source Range Length:    65535
00:17:32.314  Maximum Copy Length:                   65535
00:17:32.314  Maximum Source Range Count:            1
00:17:32.314  NGUID/EUI64 Never Reused:              No
00:17:32.314  Namespace Write Protected:             No
00:17:32.314  Number of LBA Formats:                 1
00:17:32.314  Current LBA Format:                    LBA Format #00
00:17:32.314  LBA Format #00: Data Size:   512  Metadata Size:     0
00:17:32.314  
00:17:32.314   19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
00:17:32.573  [2024-12-13 19:02:04.363702] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller
00:17:37.839  Initializing NVMe Controllers
00:17:37.839  Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2
00:17:37.839  Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1
00:17:37.839  Initialization complete. Launching workers.
00:17:37.839  ========================================================
00:17:37.839                                                                                                           Latency(us)
00:17:37.839  Device Information                                                   :       IOPS      MiB/s    Average        min        max
00:17:37.839  VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core  1:   38485.23     150.33    3325.79    1070.40   10480.82
00:17:37.839  ========================================================
00:17:37.839  Total                                                                :   38485.23     150.33    3325.79    1070.40   10480.82
00:17:37.839  
00:17:37.839  [2024-12-13 19:02:09.450683] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
00:17:37.839   19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2
00:17:38.097  [2024-12-13 19:02:09.795213] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller
00:17:43.370  Initializing NVMe Controllers
00:17:43.370  Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2
00:17:43.370  Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1
00:17:43.370  Initialization complete. Launching workers.
00:17:43.370  ========================================================
00:17:43.370                                                                                                           Latency(us)
00:17:43.370  Device Information                                                   :       IOPS      MiB/s    Average        min        max
00:17:43.370  VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core  1:   37912.40     148.10    3376.32    1061.00   10714.88
00:17:43.370  ========================================================
00:17:43.370  Total                                                                :   37912.40     148.10    3376.32    1061.00   10714.88
00:17:43.370  
00:17:43.370  [2024-12-13 19:02:14.805031] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
00:17:43.370   19:02:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /home/vagrant/spdk_repo/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE
00:17:43.370  [2024-12-13 19:02:15.085602] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller
00:17:48.642  [2024-12-13 19:02:20.215348] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
00:17:48.642  Initializing NVMe Controllers
00:17:48.642  Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2
00:17:48.642  Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2
00:17:48.642  Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1
00:17:48.642  Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2
00:17:48.642  Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3
00:17:48.642  Initialization complete. Launching workers.
00:17:48.642  Starting thread on core 2
00:17:48.642  Starting thread on core 3
00:17:48.642  Starting thread on core 1
00:17:48.642   19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g
00:17:48.901  [2024-12-13 19:02:20.562447] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller
00:17:52.189  [2024-12-13 19:02:23.616466] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
00:17:52.189  Initializing NVMe Controllers
00:17:52.189  Attaching to /var/run/vfio-user/domain/vfio-user2/2
00:17:52.189  Attached to /var/run/vfio-user/domain/vfio-user2/2
00:17:52.189  Associating SPDK bdev Controller (SPDK2               ) with lcore 0
00:17:52.189  Associating SPDK bdev Controller (SPDK2               ) with lcore 1
00:17:52.189  Associating SPDK bdev Controller (SPDK2               ) with lcore 2
00:17:52.189  Associating SPDK bdev Controller (SPDK2               ) with lcore 3
00:17:52.189  /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration:
00:17:52.189  /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1
00:17:52.189  Initialization complete. Launching workers.
00:17:52.189  Starting thread on core 1 with urgent priority queue
00:17:52.189  Starting thread on core 2 with urgent priority queue
00:17:52.189  Starting thread on core 3 with urgent priority queue
00:17:52.189  Starting thread on core 0 with urgent priority queue
00:17:52.189  SPDK bdev Controller (SPDK2               ) core 0:  6802.00 IO/s    14.70 secs/100000 ios
00:17:52.189  SPDK bdev Controller (SPDK2               ) core 1:  7278.33 IO/s    13.74 secs/100000 ios
00:17:52.189  SPDK bdev Controller (SPDK2               ) core 2:  7375.00 IO/s    13.56 secs/100000 ios
00:17:52.189  SPDK bdev Controller (SPDK2               ) core 3:  6933.67 IO/s    14.42 secs/100000 ios
00:17:52.189  ========================================================
00:17:52.189  
00:17:52.189   19:02:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
00:17:52.189  [2024-12-13 19:02:23.941178] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller
00:17:52.189  Initializing NVMe Controllers
00:17:52.189  Attaching to /var/run/vfio-user/domain/vfio-user2/2
00:17:52.189  Attached to /var/run/vfio-user/domain/vfio-user2/2
00:17:52.189    Namespace ID: 1 size: 0GB
00:17:52.189  Initialization complete.
00:17:52.189  INFO: using host memory buffer for IO
00:17:52.189  Hello world!
00:17:52.189  [2024-12-13 19:02:23.952243] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
00:17:52.189   19:02:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
00:17:52.757  [2024-12-13 19:02:24.284632] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller
00:17:53.692  Initializing NVMe Controllers
00:17:53.692  Attaching to /var/run/vfio-user/domain/vfio-user2/2
00:17:53.692  Attached to /var/run/vfio-user/domain/vfio-user2/2
00:17:53.692  Initialization complete. Launching workers.
00:17:53.692  submit (in ns)   avg, min, max =   7331.7,   3256.4, 7061739.1
00:17:53.692  complete (in ns) avg, min, max =  21661.3,   1904.5, 7046419.1
00:17:53.692  
00:17:53.692  Submit histogram
00:17:53.692  ================
00:17:53.692         Range in us     Cumulative     Count
00:17:53.692      3.244 -     3.258:    0.0067%  (        1)
00:17:53.692      3.258 -     3.273:    0.0134%  (        1)
00:17:53.692      3.273 -     3.287:    0.3218%  (       46)
00:17:53.692      3.287 -     3.302:    1.8034%  (      221)
00:17:53.692      3.302 -     3.316:    3.8214%  (      301)
00:17:53.692      3.316 -     3.331:    7.8506%  (      601)
00:17:53.692      3.331 -     3.345:   13.6833%  (      870)
00:17:53.692      3.345 -     3.360:   19.5830%  (      880)
00:17:53.692      3.360 -     3.375:   29.9142%  (     1541)
00:17:53.692      3.375 -     3.389:   38.2609%  (     1245)
00:17:53.692      3.389 -     3.404:   46.0512%  (     1162)
00:17:53.692      3.404 -     3.418:   56.1880%  (     1512)
00:17:53.692      3.418 -     3.433:   62.9659%  (     1011)
00:17:53.692      3.433 -     3.447:   66.0700%  (      463)
00:17:53.692      3.447 -     3.462:   70.4076%  (      647)
00:17:53.692      3.462 -     3.476:   72.9619%  (      381)
00:17:53.692      3.476 -     3.491:   75.1877%  (      332)
00:17:53.692      3.491 -     3.505:   77.9633%  (      414)
00:17:53.692      3.505 -     3.520:   79.6192%  (      247)
00:17:53.692      3.520 -     3.535:   80.6047%  (      147)
00:17:53.692      3.535 -     3.549:   81.5031%  (      134)
00:17:53.692      3.549 -     3.564:   82.4283%  (      138)
00:17:53.692      3.564 -     3.578:   82.9445%  (       77)
00:17:53.692      3.578 -     3.593:   83.4942%  (       82)
00:17:53.692      3.593 -     3.607:   83.9971%  (       75)
00:17:53.692      3.607 -     3.622:   84.4529%  (       68)
00:17:53.692      3.622 -     3.636:   84.8753%  (       63)
00:17:53.692      3.636 -     3.651:   85.6195%  (      111)
00:17:53.692      3.651 -     3.665:   86.4910%  (      130)
00:17:53.692      3.665 -     3.680:   87.4631%  (      145)
00:17:53.692      3.680 -     3.695:   88.9716%  (      225)
00:17:53.692      3.695 -     3.709:   90.1917%  (      182)
00:17:53.692      3.709 -     3.724:   91.4924%  (      194)
00:17:53.692      3.724 -     3.753:   93.0813%  (      237)
00:17:53.692      3.753 -     3.782:   94.3618%  (      191)
00:17:53.692      3.782 -     3.811:   95.5551%  (      178)
00:17:53.692      3.811 -     3.840:   96.3395%  (      117)
00:17:53.692      3.840 -     3.869:   97.0703%  (      109)
00:17:53.692      3.869 -     3.898:   97.4993%  (       64)
00:17:53.692      3.898 -     3.927:   97.7206%  (       33)
00:17:53.692      3.927 -     3.956:   97.8010%  (       12)
00:17:53.692      3.956 -     3.985:   97.9016%  (       15)
00:17:53.692      3.985 -     4.015:   97.9753%  (       11)
00:17:53.692      4.015 -     4.044:   98.0357%  (        9)
00:17:53.692      4.044 -     4.073:   98.0960%  (        9)
00:17:53.692      4.073 -     4.102:   98.2033%  (       16)
00:17:53.692      4.102 -     4.131:   98.2904%  (       13)
00:17:53.692      4.131 -     4.160:   98.4446%  (       23)
00:17:53.692      4.160 -     4.189:   98.5586%  (       17)
00:17:53.692      4.189 -     4.218:   98.6390%  (       12)
00:17:53.692      4.218 -     4.247:   98.7195%  (       12)
00:17:53.692      4.247 -     4.276:   98.7664%  (        7)
00:17:53.692      4.276 -     4.305:   98.7798%  (        2)
00:17:53.692      4.305 -     4.335:   98.8067%  (        4)
00:17:53.692      4.335 -     4.364:   98.8335%  (        4)
00:17:53.692      4.393 -     4.422:   98.8469%  (        2)
00:17:53.692      4.422 -     4.451:   98.8603%  (        2)
00:17:53.692      4.451 -     4.480:   98.8871%  (        4)
00:17:53.692      4.480 -     4.509:   98.9139%  (        4)
00:17:53.692      4.509 -     4.538:   98.9340%  (        3)
00:17:53.692      4.538 -     4.567:   98.9474%  (        2)
00:17:53.692      4.567 -     4.596:   98.9608%  (        2)
00:17:53.692      4.596 -     4.625:   98.9743%  (        2)
00:17:53.692      4.625 -     4.655:   98.9944%  (        3)
00:17:53.692      4.713 -     4.742:   99.0011%  (        1)
00:17:53.692      4.771 -     4.800:   99.0078%  (        1)
00:17:53.692      4.800 -     4.829:   99.0145%  (        1)
00:17:53.692      5.004 -     5.033:   99.0212%  (        1)
00:17:53.692      5.091 -     5.120:   99.0279%  (        1)
00:17:53.692      5.411 -     5.440:   99.0346%  (        1)
00:17:53.692      7.796 -     7.855:   99.0413%  (        1)
00:17:53.692      7.855 -     7.913:   99.0547%  (        2)
00:17:53.692      7.913 -     7.971:   99.0614%  (        1)
00:17:53.692      8.029 -     8.087:   99.0681%  (        1)
00:17:53.692      8.087 -     8.145:   99.0748%  (        1)
00:17:53.692      8.145 -     8.204:   99.0815%  (        1)
00:17:53.692      8.204 -     8.262:   99.1016%  (        3)
00:17:53.692      8.262 -     8.320:   99.1150%  (        2)
00:17:53.692      8.320 -     8.378:   99.1285%  (        2)
00:17:53.692      8.378 -     8.436:   99.1419%  (        2)
00:17:53.692      8.436 -     8.495:   99.1553%  (        2)
00:17:53.692      8.611 -     8.669:   99.1620%  (        1)
00:17:53.692      8.669 -     8.727:   99.1821%  (        3)
00:17:53.692      8.844 -     8.902:   99.1888%  (        1)
00:17:53.692      8.902 -     8.960:   99.1955%  (        1)
00:17:53.692      8.960 -     9.018:   99.2156%  (        3)
00:17:53.692      9.076 -     9.135:   99.2223%  (        1)
00:17:53.692      9.135 -     9.193:   99.2357%  (        2)
00:17:53.692      9.193 -     9.251:   99.2625%  (        4)
00:17:53.692      9.425 -     9.484:   99.2692%  (        1)
00:17:53.692      9.484 -     9.542:   99.2759%  (        1)
00:17:53.692      9.775 -     9.833:   99.2826%  (        1)
00:17:53.692      9.949 -    10.007:   99.3028%  (        3)
00:17:53.692     10.124 -    10.182:   99.3095%  (        1)
00:17:53.692     10.705 -    10.764:   99.3162%  (        1)
00:17:53.692     10.880 -    10.938:   99.3229%  (        1)
00:17:53.692     11.113 -    11.171:   99.3296%  (        1)
00:17:53.692     11.462 -    11.520:   99.3363%  (        1)
00:17:53.692     13.556 -    13.615:   99.3430%  (        1)
00:17:53.692     15.011 -    15.127:   99.3497%  (        1)
00:17:53.692     17.687 -    17.804:   99.3698%  (        3)
00:17:53.692     17.804 -    17.920:   99.4033%  (        5)
00:17:53.692     17.920 -    18.036:   99.4570%  (        8)
00:17:53.692     18.036 -    18.153:   99.4771%  (        3)
00:17:53.692     18.153 -    18.269:   99.5039%  (        4)
00:17:53.692     18.269 -    18.385:   99.5508%  (        7)
00:17:53.692     18.385 -    18.502:   99.5642%  (        2)
00:17:53.692     18.502 -    18.618:   99.5776%  (        2)
00:17:53.692     18.618 -    18.735:   99.5843%  (        1)
00:17:53.692     18.735 -    18.851:   99.6045%  (        3)
00:17:53.692     18.851 -    18.967:   99.6246%  (        3)
00:17:53.692     18.967 -    19.084:   99.6648%  (        6)
00:17:53.692     19.084 -    19.200:   99.6916%  (        4)
00:17:53.692     19.200 -    19.316:   99.7184%  (        4)
00:17:53.692     19.316 -    19.433:   99.7586%  (        6)
00:17:53.692     19.433 -    19.549:   99.7855%  (        4)
00:17:53.692     19.549 -    19.665:   99.8190%  (        5)
00:17:53.692     19.665 -    19.782:   99.8257%  (        1)
00:17:53.692     19.782 -    19.898:   99.8391%  (        2)
00:17:53.692     19.898 -    20.015:   99.8726%  (        5)
00:17:53.692     20.131 -    20.247:   99.8793%  (        1)
00:17:53.692     20.247 -    20.364:   99.8860%  (        1)
00:17:53.692     20.480 -    20.596:   99.8927%  (        1)
00:17:53.692     27.811 -    27.927:   99.8994%  (        1)
00:17:53.692     32.349 -    32.582:   99.9061%  (        1)
00:17:53.692   3038.487 -  3053.382:   99.9195%  (        2)
00:17:53.692   3068.276 -  3083.171:   99.9263%  (        1)
00:17:53.692   3961.949 -  3991.738:   99.9464%  (        3)
00:17:53.692   3991.738 -  4021.527:   99.9866%  (        6)
00:17:53.692   4021.527 -  4051.316:   99.9933%  (        1)
00:17:53.692   7060.015 -  7089.804:  100.0000%  (        1)
00:17:53.692  
00:17:53.692  Complete histogram
00:17:53.692  ==================
00:17:53.692         Range in us     Cumulative     Count
00:17:53.692      1.891 -     1.905:    0.0201%  (        3)
00:17:53.692      1.905 -     1.920:    2.8091%  (      416)
00:17:53.692      1.920 -     1.935:   49.7251%  (     6998)
00:17:53.693      1.935 -     1.949:   60.6463%  (     1629)
00:17:53.693      1.949 -     1.964:   61.0351%  (       58)
00:17:53.693      1.964 -     1.978:   62.9995%  (      293)
00:17:53.693      1.978 -     1.993:   82.4819%  (     2906)
00:17:53.693      1.993 -     2.007:   86.0083%  (      526)
00:17:53.693      2.007 -     2.022:   86.3301%  (       48)
00:17:53.693      2.022 -     2.036:   87.0609%  (      109)
00:17:53.693      2.036 -     2.051:   90.6409%  (      534)
00:17:53.693      2.051 -     2.065:   92.7393%  (      313)
00:17:53.693      2.065 -     2.080:   92.9874%  (       37)
00:17:53.693      2.080 -     2.095:   93.1550%  (       25)
00:17:53.693      2.095 -     2.109:   93.4366%  (       42)
00:17:53.693      2.109 -     2.124:   95.2065%  (      264)
00:17:53.693      2.124 -     2.138:   95.5216%  (       47)
00:17:53.693      2.138 -     2.153:   95.5953%  (       11)
00:17:53.693      2.153 -     2.167:   95.6959%  (       15)
00:17:53.693      2.167 -     2.182:   95.8300%  (       20)
00:17:53.693      2.182 -     2.196:   96.6479%  (      122)
00:17:53.693      2.196 -     2.211:   96.8222%  (       26)
00:17:53.693      2.211 -     2.225:   96.8423%  (        3)
00:17:53.693      2.225 -     2.240:   96.8691%  (        4)
00:17:53.693      2.240 -     2.255:   96.9697%  (       15)
00:17:53.693      2.255 -     2.269:   98.0021%  (      154)
00:17:53.693      2.269 -     2.284:   98.4983%  (       74)
00:17:53.693      2.284 -     2.298:   98.5251%  (        4)
00:17:53.693      2.298 -     2.313:   98.5385%  (        2)
00:17:53.693      2.313 -     2.327:   98.6189%  (       12)
00:17:53.693      2.327 -     2.342:   98.6726%  (        8)
00:17:53.693      2.342 -     2.356:   98.7195%  (        7)
00:17:53.693      2.356 -     2.371:   98.7329%  (        2)
00:17:53.693      2.371 -     2.385:   98.7396%  (        1)
00:17:53.693  [2024-12-13 19:02:25.373815] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
00:17:53.693      2.385 -     2.400:   98.7463%  (        1)
00:17:53.693      2.415 -     2.429:   98.7530%  (        1)
00:17:53.693      2.429 -     2.444:   98.7597%  (        1)
00:17:53.693      2.487 -     2.502:   98.7664%  (        1)
00:17:53.693      2.531 -     2.545:   98.7731%  (        1)
00:17:53.693      2.604 -     2.618:   98.7798%  (        1)
00:17:53.693      3.200 -     3.215:   98.7865%  (        1)
00:17:53.693      3.375 -     3.389:   98.7932%  (        1)
00:17:53.693      3.404 -     3.418:   98.7999%  (        1)
00:17:53.693      3.447 -     3.462:   98.8335%  (        5)
00:17:53.693      3.462 -     3.476:   98.8402%  (        1)
00:17:53.693      3.476 -     3.491:   98.8737%  (        5)
00:17:53.693      3.491 -     3.505:   98.8804%  (        1)
00:17:53.693      3.505 -     3.520:   98.8938%  (        2)
00:17:53.693      3.520 -     3.535:   98.9005%  (        1)
00:17:53.693      3.535 -     3.549:   98.9072%  (        1)
00:17:53.693      3.578 -     3.593:   98.9139%  (        1)
00:17:53.693      3.593 -     3.607:   98.9273%  (        2)
00:17:53.693      3.607 -     3.622:   98.9474%  (        3)
00:17:53.693      3.636 -     3.651:   98.9608%  (        2)
00:17:53.693      3.651 -     3.665:   98.9676%  (        1)
00:17:53.693      3.665 -     3.680:   98.9743%  (        1)
00:17:53.693      3.709 -     3.724:   98.9810%  (        1)
00:17:53.693      3.724 -     3.753:   98.9944%  (        2)
00:17:53.693      3.753 -     3.782:   99.0078%  (        2)
00:17:53.693      3.811 -     3.840:   99.0145%  (        1)
00:17:53.693      3.840 -     3.869:   99.0212%  (        1)
00:17:53.693      4.335 -     4.364:   99.0279%  (        1)
00:17:53.693      4.364 -     4.393:   99.0346%  (        1)
00:17:53.693      4.393 -     4.422:   99.0413%  (        1)
00:17:53.693      6.342 -     6.371:   99.0480%  (        1)
00:17:53.693      6.371 -     6.400:   99.0614%  (        2)
00:17:53.693      6.429 -     6.458:   99.0681%  (        1)
00:17:53.693      6.604 -     6.633:   99.0815%  (        2)
00:17:53.693      6.691 -     6.720:   99.1016%  (        3)
00:17:53.693      6.720 -     6.749:   99.1083%  (        1)
00:17:53.693      6.836 -     6.865:   99.1150%  (        1)
00:17:53.693      7.011 -     7.040:   99.1217%  (        1)
00:17:53.693      7.098 -     7.127:   99.1352%  (        2)
00:17:53.693      7.244 -     7.273:   99.1486%  (        2)
00:17:53.693      7.302 -     7.331:   99.1553%  (        1)
00:17:53.693      7.360 -     7.389:   99.1620%  (        1)
00:17:53.693      7.622 -     7.680:   99.1754%  (        2)
00:17:53.693      7.971 -     8.029:   99.1955%  (        3)
00:17:53.693      8.262 -     8.320:   99.2022%  (        1)
00:17:53.693      8.844 -     8.902:   99.2089%  (        1)
00:17:53.693      9.193 -     9.251:   99.2156%  (        1)
00:17:53.693     16.058 -    16.175:   99.2223%  (        1)
00:17:53.693     16.175 -    16.291:   99.2290%  (        1)
00:17:53.693     16.407 -    16.524:   99.2826%  (        8)
00:17:53.693     16.524 -    16.640:   99.3229%  (        6)
00:17:53.693     16.640 -    16.756:   99.3296%  (        1)
00:17:53.693     16.756 -    16.873:   99.3363%  (        1)
00:17:53.693     16.873 -    16.989:   99.3698%  (        5)
00:17:53.693     17.105 -    17.222:   99.3899%  (        3)
00:17:53.693     17.222 -    17.338:   99.4033%  (        2)
00:17:53.693     17.338 -    17.455:   99.4167%  (        2)
00:17:53.693     17.455 -    17.571:   99.4234%  (        1)
00:17:53.693     17.571 -    17.687:   99.4301%  (        1)
00:17:53.693     17.687 -    17.804:   99.4570%  (        4)
00:17:53.693     17.804 -    17.920:   99.4771%  (        3)
00:17:53.693     17.920 -    18.036:   99.4838%  (        1)
00:17:53.693     18.036 -    18.153:   99.4972%  (        2)
00:17:53.693     18.735 -    18.851:   99.5039%  (        1)
00:17:53.693   2040.553 -  2055.447:   99.5106%  (        1)
00:17:53.693   3023.593 -  3038.487:   99.5173%  (        1)
00:17:53.693   3038.487 -  3053.382:   99.5642%  (        7)
00:17:53.693   3068.276 -  3083.171:   99.5709%  (        1)
00:17:53.693   3932.160 -  3961.949:   99.5776%  (        1)
00:17:53.693   3961.949 -  3991.738:   99.6447%  (       10)
00:17:53.693   3991.738 -  4021.527:   99.8860%  (       36)
00:17:53.693   4021.527 -  4051.316:   99.9732%  (       13)
00:17:53.693   4051.316 -  4081.105:   99.9799%  (        1)
00:17:53.693   5957.818 -  5987.607:   99.9866%  (        1)
00:17:53.693   6047.185 -  6076.975:   99.9933%  (        1)
00:17:53.693   7030.225 -  7060.015:  100.0000%  (        1)
00:17:53.693  
00:17:53.693   19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2
00:17:53.693   19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2
00:17:53.693   19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2
00:17:53.693   19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4
00:17:53.693   19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems
00:17:53.951  [
00:17:53.951    {
00:17:53.951      "allow_any_host": true,
00:17:53.951      "hosts": [],
00:17:53.951      "listen_addresses": [],
00:17:53.951      "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:17:53.951      "subtype": "Discovery"
00:17:53.951    },
00:17:53.951    {
00:17:53.951      "allow_any_host": true,
00:17:53.951      "hosts": [],
00:17:53.951      "listen_addresses": [
00:17:53.951        {
00:17:53.951          "adrfam": "IPv4",
00:17:53.951          "traddr": "/var/run/vfio-user/domain/vfio-user1/1",
00:17:53.951          "trsvcid": "0",
00:17:53.951          "trtype": "VFIOUSER"
00:17:53.951        }
00:17:53.951      ],
00:17:53.951      "max_cntlid": 65519,
00:17:53.951      "max_namespaces": 32,
00:17:53.951      "min_cntlid": 1,
00:17:53.951      "model_number": "SPDK bdev Controller",
00:17:53.951      "namespaces": [
00:17:53.951        {
00:17:53.951          "bdev_name": "Malloc1",
00:17:53.951          "name": "Malloc1",
00:17:53.951          "nguid": "3DE11ECF76E84CD18F72BB0C5C1F2E40",
00:17:53.951          "nsid": 1,
00:17:53.951          "uuid": "3de11ecf-76e8-4cd1-8f72-bb0c5c1f2e40"
00:17:53.951        },
00:17:53.951        {
00:17:53.951          "bdev_name": "Malloc3",
00:17:53.952          "name": "Malloc3",
00:17:53.952          "nguid": "41989928490841C58258B9852ADE069D",
00:17:53.952          "nsid": 2,
00:17:53.952          "uuid": "41989928-4908-41c5-8258-b9852ade069d"
00:17:53.952        }
00:17:53.952      ],
00:17:53.952      "nqn": "nqn.2019-07.io.spdk:cnode1",
00:17:53.952      "serial_number": "SPDK1",
00:17:53.952      "subtype": "NVMe"
00:17:53.952    },
00:17:53.952    {
00:17:53.952      "allow_any_host": true,
00:17:53.952      "hosts": [],
00:17:53.952      "listen_addresses": [
00:17:53.952        {
00:17:53.952          "adrfam": "IPv4",
00:17:53.952          "traddr": "/var/run/vfio-user/domain/vfio-user2/2",
00:17:53.952          "trsvcid": "0",
00:17:53.952          "trtype": "VFIOUSER"
00:17:53.952        }
00:17:53.952      ],
00:17:53.952      "max_cntlid": 65519,
00:17:53.952      "max_namespaces": 32,
00:17:53.952      "min_cntlid": 1,
00:17:53.952      "model_number": "SPDK bdev Controller",
00:17:53.952      "namespaces": [
00:17:53.952        {
00:17:53.952          "bdev_name": "Malloc2",
00:17:53.952          "name": "Malloc2",
00:17:53.952          "nguid": "668BAEB19D0F4170ABF7F8B52840DA12",
00:17:53.952          "nsid": 1,
00:17:53.952          "uuid": "668baeb1-9d0f-4170-abf7-f8b52840da12"
00:17:53.952        }
00:17:53.952      ],
00:17:53.952      "nqn": "nqn.2019-07.io.spdk:cnode2",
00:17:53.952      "serial_number": "SPDK2",
00:17:53.952      "subtype": "NVMe"
00:17:53.952    }
00:17:53.952  ]
00:17:53.952   19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file
00:17:53.952   19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=94956
00:17:53.952   19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file
00:17:53.952   19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0
00:17:53.952   19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:17:53.952   19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r '		trtype:VFIOUSER 		traddr:/var/run/vfio-user/domain/vfio-user2/2 		subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file
00:17:53.952   19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']'
00:17:53.952   19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=1
00:17:53.952   19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1
00:17:54.210   19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:17:54.210   19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']'
00:17:54.210   19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=2
00:17:54.210   19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1
00:17:54.210   19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:17:54.210   19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']'
00:17:54.210   19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=3
00:17:54.210   19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1
00:17:54.210  [2024-12-13 19:02:25.933706] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller
00:17:54.210   19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:17:54.210   19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:17:54.210   19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0
00:17:54.210   19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file
00:17:54.210   19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4
00:17:54.776  Malloc4
00:17:54.776   19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2
00:17:55.034  [2024-12-13 19:02:26.616379] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
00:17:55.034   19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems
00:17:55.034  Asynchronous Event Request test
00:17:55.034  Attaching to /var/run/vfio-user/domain/vfio-user2/2
00:17:55.034  Attached to /var/run/vfio-user/domain/vfio-user2/2
00:17:55.034  Registering asynchronous event callbacks...
00:17:55.034  Starting namespace attribute notice tests for all controllers...
00:17:55.034  /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00
00:17:55.034  aer_cb - Changed Namespace
00:17:55.034  Cleaning up...
00:17:55.292  [
00:17:55.292    {
00:17:55.292      "allow_any_host": true,
00:17:55.292      "hosts": [],
00:17:55.292      "listen_addresses": [],
00:17:55.292      "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:17:55.292      "subtype": "Discovery"
00:17:55.292    },
00:17:55.292    {
00:17:55.292      "allow_any_host": true,
00:17:55.292      "hosts": [],
00:17:55.292      "listen_addresses": [
00:17:55.292        {
00:17:55.292          "adrfam": "IPv4",
00:17:55.292          "traddr": "/var/run/vfio-user/domain/vfio-user1/1",
00:17:55.292          "trsvcid": "0",
00:17:55.292          "trtype": "VFIOUSER"
00:17:55.292        }
00:17:55.292      ],
00:17:55.292      "max_cntlid": 65519,
00:17:55.292      "max_namespaces": 32,
00:17:55.292      "min_cntlid": 1,
00:17:55.292      "model_number": "SPDK bdev Controller",
00:17:55.292      "namespaces": [
00:17:55.292        {
00:17:55.292          "bdev_name": "Malloc1",
00:17:55.292          "name": "Malloc1",
00:17:55.292          "nguid": "3DE11ECF76E84CD18F72BB0C5C1F2E40",
00:17:55.292          "nsid": 1,
00:17:55.292          "uuid": "3de11ecf-76e8-4cd1-8f72-bb0c5c1f2e40"
00:17:55.292        },
00:17:55.292        {
00:17:55.292          "bdev_name": "Malloc3",
00:17:55.292          "name": "Malloc3",
00:17:55.292          "nguid": "41989928490841C58258B9852ADE069D",
00:17:55.292          "nsid": 2,
00:17:55.292          "uuid": "41989928-4908-41c5-8258-b9852ade069d"
00:17:55.293        }
00:17:55.293      ],
00:17:55.293      "nqn": "nqn.2019-07.io.spdk:cnode1",
00:17:55.293      "serial_number": "SPDK1",
00:17:55.293      "subtype": "NVMe"
00:17:55.293    },
00:17:55.293    {
00:17:55.293      "allow_any_host": true,
00:17:55.293      "hosts": [],
00:17:55.293      "listen_addresses": [
00:17:55.293        {
00:17:55.293          "adrfam": "IPv4",
00:17:55.293          "traddr": "/var/run/vfio-user/domain/vfio-user2/2",
00:17:55.293          "trsvcid": "0",
00:17:55.293          "trtype": "VFIOUSER"
00:17:55.293        }
00:17:55.293      ],
00:17:55.293      "max_cntlid": 65519,
00:17:55.293      "max_namespaces": 32,
00:17:55.293      "min_cntlid": 1,
00:17:55.293      "model_number": "SPDK bdev Controller",
00:17:55.293      "namespaces": [
00:17:55.293        {
00:17:55.293          "bdev_name": "Malloc2",
00:17:55.293          "name": "Malloc2",
00:17:55.293          "nguid": "668BAEB19D0F4170ABF7F8B52840DA12",
00:17:55.293          "nsid": 1,
00:17:55.293          "uuid": "668baeb1-9d0f-4170-abf7-f8b52840da12"
00:17:55.293        },
00:17:55.293        {
00:17:55.293          "bdev_name": "Malloc4",
00:17:55.293          "name": "Malloc4",
00:17:55.293          "nguid": "994ADFADB0EB44848924C83F585B0F11",
00:17:55.293          "nsid": 2,
00:17:55.293          "uuid": "994adfad-b0eb-4484-8924-c83f585b0f11"
00:17:55.293        }
00:17:55.293      ],
00:17:55.293      "nqn": "nqn.2019-07.io.spdk:cnode2",
00:17:55.293      "serial_number": "SPDK2",
00:17:55.293      "subtype": "NVMe"
00:17:55.293    }
00:17:55.293  ]
00:17:55.293   19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 94956
00:17:55.293   19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user
00:17:55.293   19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 94285
00:17:55.293   19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 94285 ']'
00:17:55.293   19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 94285
00:17:55.293    19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname
00:17:55.293   19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:55.293    19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94285
00:17:55.293   19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:17:55.293   19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:17:55.293  killing process with pid 94285
00:17:55.293   19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94285'
00:17:55.293   19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 94285
00:17:55.293   19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 94285
00:17:55.551   19:02:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user
00:17:55.551   19:02:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT
00:17:55.551   19:02:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I'
00:17:55.551   19:02:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode
00:17:55.551   19:02:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I'
00:17:55.551   19:02:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=95000
00:17:55.551  Process pid: 95000
00:17:55.551   19:02:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode
00:17:55.551   19:02:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 95000'
00:17:55.551   19:02:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT
00:17:55.551   19:02:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 95000
00:17:55.551   19:02:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 95000 ']'
00:17:55.551   19:02:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:55.551   19:02:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100
00:17:55.551  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:55.551   19:02:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:55.551   19:02:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable
00:17:55.551   19:02:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x
00:17:55.551  [2024-12-13 19:02:27.301687] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:17:55.551  [2024-12-13 19:02:27.302705] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:17:55.551  [2024-12-13 19:02:27.302800] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:17:55.810  [2024-12-13 19:02:27.443353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:17:55.810  [2024-12-13 19:02:27.485060] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:17:55.810  [2024-12-13 19:02:27.485102] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:17:55.810  [2024-12-13 19:02:27.485112] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:17:55.810  [2024-12-13 19:02:27.485119] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:17:55.810  [2024-12-13 19:02:27.485125] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:17:55.810  [2024-12-13 19:02:27.486322] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:17:55.810  [2024-12-13 19:02:27.486392] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:17:55.810  [2024-12-13 19:02:27.486570] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:17:55.810  [2024-12-13 19:02:27.486447] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:17:55.810  [2024-12-13 19:02:27.570059] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:17:55.810  [2024-12-13 19:02:27.570522] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
00:17:55.810  [2024-12-13 19:02:27.571025] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:17:55.810  [2024-12-13 19:02:27.571386] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode.
00:17:55.810  [2024-12-13 19:02:27.571736] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:17:56.804   19:02:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:17:56.804   19:02:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0
00:17:56.804   19:02:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1
00:17:57.740   19:02:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I
00:17:57.740   19:02:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user
00:17:57.740    19:02:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2
00:17:57.740   19:02:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES)
00:17:57.740   19:02:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1
00:17:57.740   19:02:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
00:17:58.308  Malloc1
00:17:58.308   19:02:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
00:17:58.308   19:02:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
00:17:58.875   19:02:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0
00:17:58.875   19:02:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES)
00:17:58.875   19:02:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2
00:17:58.875   19:02:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2
00:17:59.133  Malloc2
00:17:59.392   19:02:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2
00:17:59.392   19:02:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2
00:17:59.959   19:02:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0
00:17:59.959   19:02:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user
00:17:59.959   19:02:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 95000
00:17:59.959   19:02:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 95000 ']'
00:17:59.959   19:02:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 95000
00:17:59.959    19:02:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname
00:17:59.959   19:02:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:59.959    19:02:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95000
00:17:59.959   19:02:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:17:59.959   19:02:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:17:59.959  killing process with pid 95000
00:17:59.959   19:02:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95000'
00:17:59.959   19:02:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 95000
00:17:59.959   19:02:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 95000
00:18:00.218   19:02:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user
00:18:00.218   19:02:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT
00:18:00.218  
00:18:00.218  real	0m55.162s
00:18:00.218  user	3m30.532s
00:18:00.218  sys	0m3.445s
00:18:00.218   19:02:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable
00:18:00.218   19:02:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x
00:18:00.218  ************************************
00:18:00.218  END TEST nvmf_vfio_user
00:18:00.218  ************************************
00:18:00.218   19:02:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /home/vagrant/spdk_repo/spdk/test/nvme/compliance/compliance.sh --transport=tcp
00:18:00.218   19:02:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:18:00.218   19:02:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:18:00.218   19:02:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:18:00.478  ************************************
00:18:00.478  START TEST nvmf_vfio_user_nvme_compliance
00:18:00.478  ************************************
00:18:00.478   19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/compliance/compliance.sh --transport=tcp
00:18:00.478  * Looking for test storage...
00:18:00.478  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/compliance
00:18:00.478    19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:18:00.478     19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lcov --version
00:18:00.478     19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:18:00.478    19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:18:00.478    19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:18:00.478    19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l
00:18:00.478    19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l
00:18:00.478    19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-:
00:18:00.478    19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1
00:18:00.478    19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-:
00:18:00.478    19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2
00:18:00.478    19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<'
00:18:00.478    19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2
00:18:00.478    19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1
00:18:00.478    19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:18:00.478    19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in
00:18:00.478    19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1
00:18:00.478    19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 ))
00:18:00.478    19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:18:00.478     19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1
00:18:00.478     19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1
00:18:00.478     19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:18:00.478     19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1
00:18:00.478    19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1
00:18:00.478     19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2
00:18:00.478     19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2
00:18:00.478     19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:18:00.478     19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2
00:18:00.478    19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2
00:18:00.478    19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:18:00.478    19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:18:00.478    19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0
00:18:00.478    19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:18:00.478    19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:18:00.478  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:00.478  		--rc genhtml_branch_coverage=1
00:18:00.478  		--rc genhtml_function_coverage=1
00:18:00.478  		--rc genhtml_legend=1
00:18:00.478  		--rc geninfo_all_blocks=1
00:18:00.478  		--rc geninfo_unexecuted_blocks=1
00:18:00.478  		
00:18:00.478  		'
00:18:00.478    19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:18:00.478  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:00.478  		--rc genhtml_branch_coverage=1
00:18:00.478  		--rc genhtml_function_coverage=1
00:18:00.478  		--rc genhtml_legend=1
00:18:00.478  		--rc geninfo_all_blocks=1
00:18:00.478  		--rc geninfo_unexecuted_blocks=1
00:18:00.478  		
00:18:00.478  		'
00:18:00.478    19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:18:00.478  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:00.478  		--rc genhtml_branch_coverage=1
00:18:00.478  		--rc genhtml_function_coverage=1
00:18:00.478  		--rc genhtml_legend=1
00:18:00.478  		--rc geninfo_all_blocks=1
00:18:00.478  		--rc geninfo_unexecuted_blocks=1
00:18:00.478  		
00:18:00.478  		'
00:18:00.478    19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:18:00.478  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:00.478  		--rc genhtml_branch_coverage=1
00:18:00.478  		--rc genhtml_function_coverage=1
00:18:00.478  		--rc genhtml_legend=1
00:18:00.478  		--rc geninfo_all_blocks=1
00:18:00.478  		--rc geninfo_unexecuted_blocks=1
00:18:00.478  		
00:18:00.478  		'
00:18:00.478   19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:18:00.478     19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s
00:18:00.478    19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:18:00.478    19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:18:00.478    19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:18:00.478    19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:18:00.478    19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:18:00.478    19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:18:00.478    19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:18:00.478    19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:18:00.478    19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:18:00.478     19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:18:00.478    19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:18:00.478    19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:18:00.478    19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:18:00.478    19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:18:00.478    19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:18:00.478    19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:18:00.478    19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:18:00.478     19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob
00:18:00.478     19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:18:00.478     19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:18:00.478     19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:18:00.479      19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:00.479      19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:00.479      19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:00.479      19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH
00:18:00.479      19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:00.479    19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0
00:18:00.479    19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:18:00.479    19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:18:00.479    19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:18:00.479    19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:18:00.479    19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:18:00.479    19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:18:00.479  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:18:00.479    19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:18:00.479    19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:18:00.479    19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0
00:18:00.479   19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64
00:18:00.479   19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:18:00.479   19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER
00:18:00.479   19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER
00:18:00.479   19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user
00:18:00.479   19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=95211
00:18:00.479  Process pid: 95211
00:18:00.479   19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 95211'
00:18:00.479   19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT
00:18:00.479   19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 95211
00:18:00.479   19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 95211 ']'
00:18:00.479   19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7
00:18:00.479   19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:00.479   19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100
00:18:00.479  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:00.479   19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:00.479   19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:00.479   19:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x
00:18:00.738  [2024-12-13 19:02:32.332574] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:18:00.738  [2024-12-13 19:02:32.332707] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:18:00.738  [2024-12-13 19:02:32.481313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:18:00.738  [2024-12-13 19:02:32.516164] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:18:00.738  [2024-12-13 19:02:32.516260] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:18:00.738  [2024-12-13 19:02:32.516272] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:18:00.738  [2024-12-13 19:02:32.516280] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:18:00.738  [2024-12-13 19:02:32.516286] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:18:00.738  [2024-12-13 19:02:32.517474] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:18:00.738  [2024-12-13 19:02:32.517573] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:18:00.738  [2024-12-13 19:02:32.517567] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:18:01.673   19:02:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:01.673   19:02:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0
00:18:01.673   19:02:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1
00:18:02.609   19:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0
00:18:02.609   19:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user
00:18:02.609   19:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER
00:18:02.609   19:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:02.609   19:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x
00:18:02.609   19:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:02.609   19:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user
00:18:02.609   19:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0
00:18:02.609   19:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:02.609   19:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x
00:18:02.609  malloc0
00:18:02.609   19:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:02.609   19:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
00:18:02.609   19:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:02.609   19:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x
00:18:02.609   19:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:02.609   19:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
00:18:02.609   19:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:02.609   19:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x
00:18:02.609   19:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:02.609   19:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
00:18:02.609   19:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:02.609   19:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x
00:18:02.609   19:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:02.609   19:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'
00:18:02.868  
00:18:02.868  
00:18:02.868       CUnit - A unit testing framework for C - Version 2.1-3
00:18:02.868       http://cunit.sourceforge.net/
00:18:02.868  
00:18:02.868  
00:18:02.868  Suite: nvme_compliance
00:18:02.868    Test: admin_identify_ctrlr_verify_dptr ...[2024-12-13 19:02:34.619639] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:18:02.868  [2024-12-13 19:02:34.621122] vfio_user.c: 832:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining
00:18:02.868  [2024-12-13 19:02:34.621166] vfio_user.c:5544:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed
00:18:02.868  [2024-12-13 19:02:34.621175] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed
00:18:02.868  [2024-12-13 19:02:34.622653] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:18:02.868  passed
00:18:03.127    Test: admin_identify_ctrlr_verify_fused ...[2024-12-13 19:02:34.707047] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:18:03.127  [2024-12-13 19:02:34.710059] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:18:03.127  passed
00:18:03.127    Test: admin_identify_ns ...[2024-12-13 19:02:34.798513] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:18:03.127  [2024-12-13 19:02:34.856240] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0
00:18:03.127  [2024-12-13 19:02:34.863313] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295
00:18:03.127  [2024-12-13 19:02:34.883405] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:18:03.127  passed
00:18:03.386    Test: admin_get_features_mandatory_features ...[2024-12-13 19:02:34.972670] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:18:03.386  [2024-12-13 19:02:34.975684] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:18:03.386  passed
00:18:03.386    Test: admin_get_features_optional_features ...[2024-12-13 19:02:35.057003] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:18:03.386  [2024-12-13 19:02:35.062022] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:18:03.386  passed
00:18:03.386    Test: admin_set_features_number_of_queues ...[2024-12-13 19:02:35.141955] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:18:03.643  [2024-12-13 19:02:35.247380] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:18:03.643  passed
00:18:03.643    Test: admin_get_log_page_mandatory_logs ...[2024-12-13 19:02:35.334106] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:18:03.643  [2024-12-13 19:02:35.337121] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:18:03.643  passed
00:18:03.643    Test: admin_get_log_page_with_lpo ...[2024-12-13 19:02:35.419329] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:18:03.901  [2024-12-13 19:02:35.497271] ctrlr.c:2700:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512)
00:18:03.901  [2024-12-13 19:02:35.509315] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:18:03.901  passed
00:18:03.901    Test: fabric_property_get ...[2024-12-13 19:02:35.590922] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:18:03.901  [2024-12-13 19:02:35.592201] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed
00:18:03.901  [2024-12-13 19:02:35.593938] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:18:03.901  passed
00:18:03.901    Test: admin_delete_io_sq_use_admin_qid ...[2024-12-13 19:02:35.678330] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:18:03.901  [2024-12-13 19:02:35.679640] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist
00:18:03.901  [2024-12-13 19:02:35.681341] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:18:03.901  passed
00:18:04.160    Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-13 19:02:35.766326] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:18:04.160  [2024-12-13 19:02:35.850261] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist
00:18:04.160  [2024-12-13 19:02:35.866265] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist
00:18:04.160  [2024-12-13 19:02:35.871356] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:18:04.160  passed
00:18:04.160    Test: admin_delete_io_cq_use_admin_qid ...[2024-12-13 19:02:35.955248] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:18:04.160  [2024-12-13 19:02:35.956597] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist
00:18:04.160  [2024-12-13 19:02:35.958304] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:18:04.418  passed
00:18:04.418    Test: admin_delete_io_cq_delete_cq_first ...[2024-12-13 19:02:36.041919] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:18:04.418  [2024-12-13 19:02:36.114293] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first
00:18:04.418  [2024-12-13 19:02:36.138268] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist
00:18:04.418  [2024-12-13 19:02:36.143364] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:18:04.418  passed
00:18:04.418    Test: admin_create_io_cq_verify_iv_pc ...[2024-12-13 19:02:36.221860] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:18:04.418  [2024-12-13 19:02:36.223116] vfio_user.c:2178:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big
00:18:04.418  [2024-12-13 19:02:36.223172] vfio_user.c:2172:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported
00:18:04.418  [2024-12-13 19:02:36.224869] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:18:04.676  passed
00:18:04.676    Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-13 19:02:36.310600] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:18:04.676  [2024-12-13 19:02:36.406275] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1
00:18:04.676  [2024-12-13 19:02:36.414238] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257
00:18:04.676  [2024-12-13 19:02:36.422233] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0
00:18:04.676  [2024-12-13 19:02:36.430266] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128
00:18:04.676  [2024-12-13 19:02:36.462326] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:18:04.676  passed
00:18:04.935    Test: admin_create_io_sq_verify_pc ...[2024-12-13 19:02:36.539962] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:18:04.935  [2024-12-13 19:02:36.555290] vfio_user.c:2071:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported
00:18:04.935  [2024-12-13 19:02:36.571286] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:18:04.935  passed
00:18:04.935    Test: admin_create_io_qp_max_qps ...[2024-12-13 19:02:36.653677] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:18:06.310  [2024-12-13 19:02:37.759239] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs
00:18:06.310  [2024-12-13 19:02:38.129577] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:18:06.569  passed
00:18:06.569    Test: admin_create_io_sq_shared_cq ...[2024-12-13 19:02:38.209334] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:18:06.569  [2024-12-13 19:02:38.337268] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first
00:18:06.569  [2024-12-13 19:02:38.373312] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:18:06.827  passed
00:18:06.827  
00:18:06.827  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:18:06.827                suites      1      1    n/a      0        0
00:18:06.827                 tests     18     18     18      0        0
00:18:06.827               asserts    360    360    360      0      n/a
00:18:06.827  
00:18:06.827  Elapsed time =    1.549 seconds
00:18:06.827   19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 95211
00:18:06.827   19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 95211 ']'
00:18:06.827   19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 95211
00:18:06.827    19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname
00:18:06.827   19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:06.827    19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95211
00:18:06.827   19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:18:06.827   19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:18:06.827   19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95211'
00:18:06.827  killing process with pid 95211
00:18:06.827   19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 95211
00:18:06.827   19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 95211
00:18:07.086   19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user
00:18:07.086   19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT
00:18:07.086  
00:18:07.086  real	0m6.608s
00:18:07.086  user	0m18.644s
00:18:07.086  sys	0m0.512s
00:18:07.086   19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable
00:18:07.086   19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x
00:18:07.086  ************************************
00:18:07.086  END TEST nvmf_vfio_user_nvme_compliance
00:18:07.086  ************************************
00:18:07.086   19:02:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp
00:18:07.086   19:02:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:18:07.086   19:02:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:18:07.086   19:02:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:18:07.086  ************************************
00:18:07.086  START TEST nvmf_vfio_user_fuzz
00:18:07.086  ************************************
00:18:07.086   19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp
00:18:07.086  * Looking for test storage...
00:18:07.086  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:18:07.086    19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:18:07.086     19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:18:07.086     19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lcov --version
00:18:07.086    19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:18:07.086    19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:18:07.086    19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l
00:18:07.086    19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l
00:18:07.086    19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-:
00:18:07.086    19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1
00:18:07.086    19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-:
00:18:07.086    19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2
00:18:07.086    19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<'
00:18:07.086    19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2
00:18:07.086    19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1
00:18:07.086    19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:18:07.086    19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in
00:18:07.087    19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1
00:18:07.087    19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 ))
00:18:07.087    19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:18:07.087     19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1
00:18:07.087     19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1
00:18:07.087     19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:18:07.087     19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1
00:18:07.087    19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1
00:18:07.087     19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2
00:18:07.087     19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2
00:18:07.087     19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:18:07.087     19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2
00:18:07.087    19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2
00:18:07.087    19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:18:07.087    19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:18:07.087    19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0
00:18:07.087    19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:18:07.087    19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:18:07.087  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:07.087  		--rc genhtml_branch_coverage=1
00:18:07.087  		--rc genhtml_function_coverage=1
00:18:07.087  		--rc genhtml_legend=1
00:18:07.087  		--rc geninfo_all_blocks=1
00:18:07.087  		--rc geninfo_unexecuted_blocks=1
00:18:07.087  		
00:18:07.087  		'
00:18:07.087    19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:18:07.087  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:07.087  		--rc genhtml_branch_coverage=1
00:18:07.087  		--rc genhtml_function_coverage=1
00:18:07.087  		--rc genhtml_legend=1
00:18:07.087  		--rc geninfo_all_blocks=1
00:18:07.087  		--rc geninfo_unexecuted_blocks=1
00:18:07.087  		
00:18:07.087  		'
00:18:07.087    19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:18:07.087  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:07.087  		--rc genhtml_branch_coverage=1
00:18:07.087  		--rc genhtml_function_coverage=1
00:18:07.087  		--rc genhtml_legend=1
00:18:07.087  		--rc geninfo_all_blocks=1
00:18:07.087  		--rc geninfo_unexecuted_blocks=1
00:18:07.087  		
00:18:07.087  		'
00:18:07.087    19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:18:07.087  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:07.087  		--rc genhtml_branch_coverage=1
00:18:07.087  		--rc genhtml_function_coverage=1
00:18:07.087  		--rc genhtml_legend=1
00:18:07.087  		--rc geninfo_all_blocks=1
00:18:07.087  		--rc geninfo_unexecuted_blocks=1
00:18:07.087  		
00:18:07.087  		'
00:18:07.087   19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:18:07.087     19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s
00:18:07.087    19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:18:07.087    19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:18:07.087    19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:18:07.087    19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:18:07.087    19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:18:07.087    19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:18:07.087    19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:18:07.087    19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:18:07.087    19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:18:07.087     19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:18:07.087    19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:18:07.087    19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:18:07.087    19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:18:07.087    19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:18:07.087    19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:18:07.087    19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:18:07.087    19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:18:07.087     19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob
00:18:07.087     19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:18:07.087     19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:18:07.087     19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:18:07.087      19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:07.087      19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:07.087      19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:07.087      19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH
00:18:07.087      19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:07.087    19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0
00:18:07.087    19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:18:07.087    19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:18:07.087    19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:18:07.346    19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:18:07.346    19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:18:07.346    19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:18:07.346  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:18:07.346    19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:18:07.346    19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:18:07.346    19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0
00:18:07.346   19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64
00:18:07.346   19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512
00:18:07.346   19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0
00:18:07.346   19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user
00:18:07.346   19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER
00:18:07.346   19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER
00:18:07.346   19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user
00:18:07.347   19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=95367
00:18:07.347   19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
00:18:07.347   19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 95367'
00:18:07.347  Process pid: 95367
00:18:07.347   19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT
00:18:07.347   19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 95367
00:18:07.347   19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 95367 ']'
00:18:07.347  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:07.347   19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:07.347   19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100
00:18:07.347   19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:07.347   19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:07.347   19:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x
00:18:07.608   19:02:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:07.608   19:02:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0
00:18:07.608   19:02:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1
00:18:08.545   19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER
00:18:08.545   19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:08.545   19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x
00:18:08.545   19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:08.545   19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user
00:18:08.545   19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0
00:18:08.545   19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:08.545   19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x
00:18:08.545  malloc0
00:18:08.545   19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:08.545   19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
00:18:08.545   19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:08.545   19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x
00:18:08.545   19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:08.545   19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
00:18:08.545   19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:08.545   19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x
00:18:08.545   19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:08.545   19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
00:18:08.545   19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:08.545   19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x
00:18:08.545   19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:08.545   19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user'
00:18:08.545   19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a
00:18:08.804  Shutting down the fuzz application
00:18:08.804   19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0
00:18:08.804   19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:08.804   19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x
00:18:09.063   19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:09.063   19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 95367
00:18:09.063   19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 95367 ']'
00:18:09.063   19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 95367
00:18:09.063    19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname
00:18:09.063   19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:09.063    19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95367
00:18:09.063   19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:18:09.063  killing process with pid 95367
00:18:09.063   19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:18:09.063   19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95367'
00:18:09.063   19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 95367
00:18:09.063   19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 95367
00:18:09.063   19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /home/vagrant/spdk_repo/spdk/../output/vfio_user_fuzz_log.txt /home/vagrant/spdk_repo/spdk/../output/vfio_user_fuzz_tgt_output.txt
00:18:09.063   19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT
00:18:09.063  
00:18:09.063  real	0m2.165s
00:18:09.063  user	0m2.214s
00:18:09.063  sys	0m0.359s
00:18:09.063   19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable
00:18:09.063   19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x
00:18:09.063  ************************************
00:18:09.063  END TEST nvmf_vfio_user_fuzz
00:18:09.063  ************************************
00:18:09.323   19:02:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp
00:18:09.323   19:02:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:18:09.323   19:02:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:18:09.323   19:02:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:18:09.323  ************************************
00:18:09.323  START TEST nvmf_auth_target
00:18:09.323  ************************************
00:18:09.323   19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp
00:18:09.323  * Looking for test storage...
00:18:09.323  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:18:09.323    19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:18:09.323     19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version
00:18:09.323     19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:18:09.323    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:18:09.323    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:18:09.323    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l
00:18:09.323    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l
00:18:09.323    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-:
00:18:09.323    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1
00:18:09.323    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-:
00:18:09.323    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2
00:18:09.323    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<'
00:18:09.323    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2
00:18:09.323    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1
00:18:09.323    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:18:09.323    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in
00:18:09.323    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1
00:18:09.323    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 ))
00:18:09.323    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:18:09.323     19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1
00:18:09.323     19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1
00:18:09.323     19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:18:09.323     19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1
00:18:09.323    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1
00:18:09.323     19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2
00:18:09.323     19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2
00:18:09.323     19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:18:09.323     19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2
00:18:09.323    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2
00:18:09.323    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:18:09.323    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:18:09.323    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0
00:18:09.324    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:18:09.324    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:18:09.324  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:09.324  		--rc genhtml_branch_coverage=1
00:18:09.324  		--rc genhtml_function_coverage=1
00:18:09.324  		--rc genhtml_legend=1
00:18:09.324  		--rc geninfo_all_blocks=1
00:18:09.324  		--rc geninfo_unexecuted_blocks=1
00:18:09.324  		
00:18:09.324  		'
00:18:09.324    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:18:09.324  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:09.324  		--rc genhtml_branch_coverage=1
00:18:09.324  		--rc genhtml_function_coverage=1
00:18:09.324  		--rc genhtml_legend=1
00:18:09.324  		--rc geninfo_all_blocks=1
00:18:09.324  		--rc geninfo_unexecuted_blocks=1
00:18:09.324  		
00:18:09.324  		'
00:18:09.324    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:18:09.324  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:09.324  		--rc genhtml_branch_coverage=1
00:18:09.324  		--rc genhtml_function_coverage=1
00:18:09.324  		--rc genhtml_legend=1
00:18:09.324  		--rc geninfo_all_blocks=1
00:18:09.324  		--rc geninfo_unexecuted_blocks=1
00:18:09.324  		
00:18:09.324  		'
00:18:09.324    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:18:09.324  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:09.324  		--rc genhtml_branch_coverage=1
00:18:09.324  		--rc genhtml_function_coverage=1
00:18:09.324  		--rc genhtml_legend=1
00:18:09.324  		--rc geninfo_all_blocks=1
00:18:09.324  		--rc geninfo_unexecuted_blocks=1
00:18:09.324  		
00:18:09.324  		'
00:18:09.324   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:18:09.324     19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s
00:18:09.324    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:18:09.324    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:18:09.324    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:18:09.324    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:18:09.324    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:18:09.324    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:18:09.324    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:18:09.324    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:18:09.324    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:18:09.324     19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:18:09.324    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:18:09.324    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:18:09.324    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:18:09.324    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:18:09.324    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:18:09.324    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:18:09.324    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:18:09.324     19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob
00:18:09.324     19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:18:09.324     19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:18:09.324     19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:18:09.324      19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:09.324      19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:09.324      19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:09.324      19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH
00:18:09.324      19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:09.324    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0
00:18:09.324    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:18:09.324    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:18:09.324    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:18:09.324    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:18:09.324    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:18:09.324    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:18:09.324  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:18:09.324    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:18:09.324    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:18:09.324    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0
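Note on the "[: : integer expression expected" message above: test/nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']' with an empty variable, so the numeric comparison has nothing to parse; the script tolerates the non-zero exit and continues. A minimal defensive sketch (the variable and array names here are hypothetical, not the ones common.sh actually uses):

    # Default the flag to 0 so the integer comparison always sees a number
    some_flag="${some_flag:-0}"
    if [ "$some_flag" -eq 1 ]; then
        NVMF_APP+=("${EXTRA_APP_ARGS[@]}")   # hypothetical: extend app args only when the flag is set
    fi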
00:18:09.324   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512")
00:18:09.324   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192")
00:18:09.324   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0
00:18:09.324   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:18:09.324   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock
00:18:09.324   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=()
00:18:09.324   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=()
00:18:09.324   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit
00:18:09.324   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:18:09.324   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:18:09.324   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs
00:18:09.324   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no
00:18:09.324   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns
00:18:09.324   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:18:09.324   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:18:09.324    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:18:09.324   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:18:09.324   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:18:09.324   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:18:09.324   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:18:09.324   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:18:09.324   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@460 -- # nvmf_veth_init
00:18:09.324   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:18:09.324   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:18:09.324   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:18:09.324   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:18:09.324   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:18:09.324   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:18:09.324   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:18:09.324   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:18:09.324   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:18:09.324   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:18:09.324   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:18:09.324   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:18:09.325   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:18:09.325   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:18:09.325   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:18:09.325   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:18:09.325   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:18:09.584  Cannot find device "nvmf_init_br"
00:18:09.584   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true
00:18:09.584   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:18:09.584  Cannot find device "nvmf_init_br2"
00:18:09.584   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true
00:18:09.584   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:18:09.584  Cannot find device "nvmf_tgt_br"
00:18:09.584   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true
00:18:09.584   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:18:09.584  Cannot find device "nvmf_tgt_br2"
00:18:09.584   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true
00:18:09.584   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:18:09.584  Cannot find device "nvmf_init_br"
00:18:09.584   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true
00:18:09.584   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:18:09.584  Cannot find device "nvmf_init_br2"
00:18:09.584   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true
00:18:09.584   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:18:09.584  Cannot find device "nvmf_tgt_br"
00:18:09.584   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true
00:18:09.584   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:18:09.584  Cannot find device "nvmf_tgt_br2"
00:18:09.584   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true
00:18:09.584   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:18:09.584  Cannot find device "nvmf_br"
00:18:09.584   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true
00:18:09.584   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:18:09.584  Cannot find device "nvmf_init_if"
00:18:09.584   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true
00:18:09.584   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:18:09.584  Cannot find device "nvmf_init_if2"
00:18:09.584   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true
00:18:09.584   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:18:09.584  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:18:09.584   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true
00:18:09.584   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:18:09.584  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:18:09.584   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true
00:18:09.584   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:18:09.584   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:18:09.584   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:18:09.584   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:18:09.584   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:18:09.584   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:18:09.584   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:18:09.584   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:18:09.584   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:18:09.584   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:18:09.584   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:18:09.584   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:18:09.584   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:18:09.584   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:18:09.584   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:18:09.584   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:18:09.584   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:18:09.584   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:18:09.584   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:18:09.584   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:18:09.584   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:18:09.585   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:18:09.585   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:18:09.585   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:18:09.844   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:18:09.844   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:18:09.844   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:18:09.844   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:18:09.844   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:18:09.844   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:18:09.844   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:18:09.844   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:18:09.844   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:18:09.844  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:18:09.844  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms
00:18:09.844  
00:18:09.844  --- 10.0.0.3 ping statistics ---
00:18:09.844  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:18:09.844  rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms
00:18:09.844   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:18:09.844  PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:18:09.844  64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.060 ms
00:18:09.844  
00:18:09.844  --- 10.0.0.4 ping statistics ---
00:18:09.844  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:18:09.844  rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms
00:18:09.844   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:18:09.844  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:18:09.844  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms
00:18:09.844  
00:18:09.844  --- 10.0.0.1 ping statistics ---
00:18:09.844  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:18:09.844  rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms
00:18:09.844   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:18:09.844  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:18:09.844  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms
00:18:09.844  
00:18:09.844  --- 10.0.0.2 ping statistics ---
00:18:09.844  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:18:09.844  rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms
00:18:09.844   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:18:09.844   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@461 -- # return 0
00:18:09.844   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:18:09.844   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:18:09.844   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:18:09.844   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:18:09.844   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:18:09.844   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:18:09.844   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp
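Condensed, the nvmf_veth_init sequence above builds the test topology with the same commands the harness just ran (interface names and addresses are taken directly from this log; link-up of the individual veth endpoints and the second 4420 iptables rule are omitted for brevity):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk      # target ends live in the namespace
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if             # initiator addresses
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target addresses
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for br in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$br" up
        ip link set "$br" master nvmf_br                 # bridge the host-side veth peers together
    done
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3                                   # initiator -> target reachability check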
00:18:09.844   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth
00:18:09.844   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:18:09.844   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable
00:18:09.844   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:09.844   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=95609
00:18:09.844   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth
00:18:09.844   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 95609
00:18:09.844   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 95609 ']'
00:18:09.845   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:09.845   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:18:09.845   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:09.845   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:09.845   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:10.104   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:10.104   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0
00:18:10.104   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:18:10.104   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable
00:18:10.104   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:10.104   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:18:10.104   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=95634
00:18:10.104   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth
00:18:10.104   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT
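At this point two SPDK processes are up, each with its own RPC socket: the NVMe-oF target (nvmf_tgt) running inside the network namespace and answering on the default /var/tmp/spdk.sock, and a second app (spdk_tgt) playing the host/initiator role on /var/tmp/host.sock. A sketch of the equivalent launch, using the same binaries and flags shown above (the socket-wait loop is an illustrative stand-in for the harness's waitforlisten helper):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth &
    for sock in /var/tmp/spdk.sock /var/tmp/host.sock; do
        until [ -S "$sock" ]; do sleep 0.1; done         # wait for each RPC socket to appear
    done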
00:18:10.104    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48
00:18:10.104    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:18:10.104    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:18:10.104    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:18:10.104    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null
00:18:10.104    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48
00:18:10.104     19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom
00:18:10.104    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=da7b5bb85e9dd0f9f3d3df71ce7fe8a907aa808f3cc6ac46
00:18:10.104     19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX
00:18:10.104    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.FVs
00:18:10.104    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key da7b5bb85e9dd0f9f3d3df71ce7fe8a907aa808f3cc6ac46 0
00:18:10.104    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 da7b5bb85e9dd0f9f3d3df71ce7fe8a907aa808f3cc6ac46 0
00:18:10.104    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:18:10.104    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:18:10.104    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=da7b5bb85e9dd0f9f3d3df71ce7fe8a907aa808f3cc6ac46
00:18:10.104    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0
00:18:10.104    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:18:10.364    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.FVs
00:18:10.364    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.FVs
00:18:10.364   19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.FVs
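Each gen_dhchap_key call above follows the same recipe: pull random bytes as hex with xxd, stage a temp file, serialize the key, and lock down permissions. A bash sketch of the visible steps (the DHHC-1:<digest>:...: serialization itself is produced by an inline python helper in nvmf/common.sh whose body is not expanded in this trace):

    key=$(xxd -p -c0 -l 24 /dev/urandom)        # 24 random bytes -> 48 hex chars for a "null 48" key
    file=$(mktemp -t spdk.key-null.XXX)         # e.g. /tmp/spdk.key-null.FVs
    # format_dhchap_key / format_key wrap $key into the DHHC-1 on-disk format and write $file
    chmod 0600 "$file"                          # keep the key file owner-only
    keys[0]=$file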
00:18:10.364    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64
00:18:10.364    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:18:10.364    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:18:10.364    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:18:10.364    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512
00:18:10.364    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64
00:18:10.364     19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom
00:18:10.364    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a31d6b3a7887f12d8e320d7c3950367a2348e57a228bbbac073fc71e4a431acb
00:18:10.364     19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX
00:18:10.364    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.R7p
00:18:10.364    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a31d6b3a7887f12d8e320d7c3950367a2348e57a228bbbac073fc71e4a431acb 3
00:18:10.364    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a31d6b3a7887f12d8e320d7c3950367a2348e57a228bbbac073fc71e4a431acb 3
00:18:10.364    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:18:10.364    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:18:10.364    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a31d6b3a7887f12d8e320d7c3950367a2348e57a228bbbac073fc71e4a431acb
00:18:10.364    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3
00:18:10.364    19:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:18:10.364    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.R7p
00:18:10.364    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.R7p
00:18:10.364   19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.R7p
00:18:10.364    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32
00:18:10.364    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:18:10.364    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:18:10.364    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:18:10.364    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256
00:18:10.364    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32
00:18:10.365     19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom
00:18:10.365    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=31ccd902fe441766ec6ce48881f82bd9
00:18:10.365     19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX
00:18:10.365    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.JCe
00:18:10.365    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 31ccd902fe441766ec6ce48881f82bd9 1
00:18:10.365    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 31ccd902fe441766ec6ce48881f82bd9 1
00:18:10.365    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:18:10.365    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:18:10.365    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=31ccd902fe441766ec6ce48881f82bd9
00:18:10.365    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1
00:18:10.365    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:18:10.365    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.JCe
00:18:10.365    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.JCe
00:18:10.365   19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.JCe
00:18:10.365    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48
00:18:10.365    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:18:10.365    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:18:10.365    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:18:10.365    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384
00:18:10.365    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48
00:18:10.365     19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom
00:18:10.365    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=8d6c131ff790edb647869dee5a2cfa27746b29e2d85286a0
00:18:10.365     19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX
00:18:10.365    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.OAp
00:18:10.365    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 8d6c131ff790edb647869dee5a2cfa27746b29e2d85286a0 2
00:18:10.365    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 8d6c131ff790edb647869dee5a2cfa27746b29e2d85286a0 2
00:18:10.365    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:18:10.365    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:18:10.365    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=8d6c131ff790edb647869dee5a2cfa27746b29e2d85286a0
00:18:10.365    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2
00:18:10.365    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:18:10.365    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.OAp
00:18:10.365    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.OAp
00:18:10.365   19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.OAp
00:18:10.365    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48
00:18:10.365    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:18:10.365    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:18:10.365    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:18:10.365    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384
00:18:10.365    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48
00:18:10.365     19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom
00:18:10.365    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=319d32cf55640b6799eec20bbf289eefd575a930a0ebdd45
00:18:10.365     19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX
00:18:10.365    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Pq9
00:18:10.365    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 319d32cf55640b6799eec20bbf289eefd575a930a0ebdd45 2
00:18:10.365    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 319d32cf55640b6799eec20bbf289eefd575a930a0ebdd45 2
00:18:10.365    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:18:10.365    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:18:10.365    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=319d32cf55640b6799eec20bbf289eefd575a930a0ebdd45
00:18:10.365    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2
00:18:10.365    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:18:10.624    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Pq9
00:18:10.624    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Pq9
00:18:10.624   19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.Pq9
00:18:10.624    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32
00:18:10.624    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:18:10.624    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:18:10.624    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:18:10.624    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256
00:18:10.624    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32
00:18:10.624     19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom
00:18:10.624    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=7d74f3033ec3d85f8bb4c389374392f1
00:18:10.624     19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX
00:18:10.624    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.viX
00:18:10.624    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 7d74f3033ec3d85f8bb4c389374392f1 1
00:18:10.624    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 7d74f3033ec3d85f8bb4c389374392f1 1
00:18:10.624    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:18:10.624    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:18:10.624    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=7d74f3033ec3d85f8bb4c389374392f1
00:18:10.624    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1
00:18:10.624    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:18:10.624    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.viX
00:18:10.624    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.viX
00:18:10.624   19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.viX
00:18:10.624    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64
00:18:10.624    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:18:10.624    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:18:10.624    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:18:10.624    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512
00:18:10.624    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64
00:18:10.624     19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom
00:18:10.624    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b2d027160394f34385a31a51d342bf3edfc61a6702839567437c47ae3b76f2d8
00:18:10.624     19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX
00:18:10.624    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Zal
00:18:10.624    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b2d027160394f34385a31a51d342bf3edfc61a6702839567437c47ae3b76f2d8 3
00:18:10.624    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b2d027160394f34385a31a51d342bf3edfc61a6702839567437c47ae3b76f2d8 3
00:18:10.625    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:18:10.625    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:18:10.625    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b2d027160394f34385a31a51d342bf3edfc61a6702839567437c47ae3b76f2d8
00:18:10.625    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3
00:18:10.625    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:18:10.625    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Zal
00:18:10.625    19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Zal
00:18:10.625   19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.Zal
00:18:10.625   19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]=
00:18:10.625   19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 95609
00:18:10.625   19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 95609 ']'
00:18:10.625   19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:10.625   19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:18:10.625   19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:10.625  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:10.625   19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:10.625   19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:10.884   19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:10.884   19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0
00:18:10.884   19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 95634 /var/tmp/host.sock
00:18:10.884   19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 95634 ']'
00:18:10.884   19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock
00:18:10.884   19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:18:10.884   19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...'
00:18:10.884  Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...
00:18:10.884   19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:10.884   19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:11.451   19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:11.451   19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0
00:18:11.451   19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd
00:18:11.451   19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:11.451   19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:11.451   19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:11.451   19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}"
00:18:11.451   19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.FVs
00:18:11.451   19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:11.451   19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:11.451   19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:11.451   19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.FVs
00:18:11.451   19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.FVs
00:18:11.710   19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.R7p ]]
00:18:11.710   19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.R7p
00:18:11.710   19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:11.710   19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:11.710   19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:11.710   19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.R7p
00:18:11.710   19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.R7p
00:18:11.969   19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}"
00:18:11.969   19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.JCe
00:18:11.969   19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:11.969   19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:11.969   19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:11.969   19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.JCe
00:18:11.969   19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.JCe
00:18:12.228   19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.OAp ]]
00:18:12.228   19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.OAp
00:18:12.228   19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:12.228   19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:12.228   19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:12.228   19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.OAp
00:18:12.228   19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.OAp
00:18:12.487   19:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}"
00:18:12.487   19:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Pq9
00:18:12.487   19:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:12.487   19:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:12.487   19:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:12.487   19:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Pq9
00:18:12.487   19:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Pq9
00:18:12.745   19:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.viX ]]
00:18:12.745   19:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.viX
00:18:12.745   19:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:12.745   19:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:12.745   19:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:12.745   19:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.viX
00:18:12.745   19:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.viX
00:18:13.004   19:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}"
00:18:13.004   19:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Zal
00:18:13.004   19:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:13.004   19:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:13.004   19:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:13.004   19:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Zal
00:18:13.004   19:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Zal
00:18:13.263   19:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]]
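The block above stages the DH-HMAC-CHAP key material: each generated key file is registered under the same name in two keyrings, the target's (through the test's rpc_cmd helper) and the host app's (through the hostrpc helper, which the trace expands to rpc.py -s /var/tmp/host.sock). key3 is the one entry without a paired controller key, which is why the final "[[ -n '' ]]" check above falls through without adding a ckey3. A condensed sketch of the per-key pattern, reusing the temporary key paths from this run:

    # target keyring (rpc_cmd drives the target's RPC socket)
    rpc_cmd keyring_file_add_key key0  /tmp/spdk.key-null.FVs
    rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.R7p
    # host keyring (hostrpc == rpc.py -s /var/tmp/host.sock, per the trace above)
    hostrpc keyring_file_add_key key0  /tmp/spdk.key-null.FVs
    hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.R7p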
00:18:13.263   19:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}"
00:18:13.263   19:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:18:13.263   19:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:13.263   19:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:18:13.263   19:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:18:13.522   19:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0
00:18:13.522   19:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:13.522   19:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:18:13.522   19:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:18:13.522   19:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:18:13.522   19:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:13.522   19:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:13.522   19:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:13.522   19:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:13.522   19:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:13.522   19:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:13.522   19:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:13.522   19:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:13.780  
00:18:13.780    19:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:13.780    19:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:13.780    19:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:14.039   19:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:14.039    19:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:14.039    19:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:14.039    19:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:14.039    19:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:14.039   19:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:14.039  {
00:18:14.039  "auth": {
00:18:14.039  "dhgroup": "null",
00:18:14.039  "digest": "sha256",
00:18:14.039  "state": "completed"
00:18:14.039  },
00:18:14.039  "cntlid": 1,
00:18:14.039  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:18:14.039  "listen_address": {
00:18:14.039  "adrfam": "IPv4",
00:18:14.039  "traddr": "10.0.0.3",
00:18:14.039  "trsvcid": "4420",
00:18:14.039  "trtype": "TCP"
00:18:14.039  },
00:18:14.039  "peer_address": {
00:18:14.039  "adrfam": "IPv4",
00:18:14.039  "traddr": "10.0.0.1",
00:18:14.039  "trsvcid": "47430",
00:18:14.039  "trtype": "TCP"
00:18:14.039  },
00:18:14.039  "qid": 0,
00:18:14.039  "state": "enabled",
00:18:14.039  "thread": "nvmf_tgt_poll_group_000"
00:18:14.039  }
00:18:14.039  ]'
00:18:14.039    19:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:14.039   19:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:18:14.039    19:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:14.310   19:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:18:14.310    19:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:14.310   19:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:14.310   19:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:14.310   19:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:14.582   19:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGE3YjViYjg1ZTlkZDBmOWYzZDNkZjcxY2U3ZmU4YTkwN2FhODA4ZjNjYzZhYzQ26pGyPg==: --dhchap-ctrl-secret DHHC-1:03:YTMxZDZiM2E3ODg3ZjEyZDhlMzIwZDdjMzk1MDM2N2EyMzQ4ZTU3YTIyOGJiYmFjMDczZmM3MWU0YTQzMWFjYtOtav0=:
00:18:14.582   19:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid 8f716199-a3ae-4f70-9a3a-0556e5b7497a -l 0 --dhchap-secret DHHC-1:00:ZGE3YjViYjg1ZTlkZDBmOWYzZDNkZjcxY2U3ZmU4YTkwN2FhODA4ZjNjYzZhYzQ26pGyPg==: --dhchap-ctrl-secret DHHC-1:03:YTMxZDZiM2E3ODg3ZjEyZDhlMzIwZDdjMzk1MDM2N2EyMzQ4ZTU3YTIyOGJiYmFjMDczZmM3MWU0YTQzMWFjYtOtav0=:
00:18:18.773   19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:18.773  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:18.773   19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:18:18.773   19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:18.773   19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:18.773   19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
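That completes the first connect_authenticate pass (sha256 digest, null DH group, key0). Every iteration of the digest/dhgroup/keyid loops below repeats the same sequence; only the key index and the bdev_nvme_set_options arguments change. A condensed sketch of one iteration, with HOSTNQN used here as a stand-in for nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a:

    # pin the host to the digest/DH-group combination under test
    hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
    # authorize the host on the subsystem with the key pair under test
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # attach from the SPDK host app; this is where DH-HMAC-CHAP actually runs
    hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # ... verify the qpair's auth block, then tear down for the next key
    # (the kernel-initiator nvme connect/disconnect leg is sketched further down)
    hostrpc bdev_nvme_detach_controller nvme0
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"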
00:18:18.773   19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:18.773   19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:18:18.773   19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:18:18.773   19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1
00:18:18.773   19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:18.773   19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:18:18.773   19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:18:18.773   19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:18:18.773   19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:18.773   19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:18.773   19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:18.773   19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:18.773   19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:18.773   19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:18.773   19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:18.773   19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:19.032  
00:18:19.032    19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:19.032    19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:19.032    19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:19.291   19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:19.291    19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:19.291    19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:19.291    19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:19.291    19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:19.291   19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:19.291  {
00:18:19.291  "auth": {
00:18:19.291  "dhgroup": "null",
00:18:19.291  "digest": "sha256",
00:18:19.291  "state": "completed"
00:18:19.291  },
00:18:19.291  "cntlid": 3,
00:18:19.291  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:18:19.291  "listen_address": {
00:18:19.291  "adrfam": "IPv4",
00:18:19.291  "traddr": "10.0.0.3",
00:18:19.291  "trsvcid": "4420",
00:18:19.291  "trtype": "TCP"
00:18:19.291  },
00:18:19.291  "peer_address": {
00:18:19.291  "adrfam": "IPv4",
00:18:19.291  "traddr": "10.0.0.1",
00:18:19.291  "trsvcid": "37104",
00:18:19.291  "trtype": "TCP"
00:18:19.291  },
00:18:19.291  "qid": 0,
00:18:19.291  "state": "enabled",
00:18:19.291  "thread": "nvmf_tgt_poll_group_000"
00:18:19.291  }
00:18:19.291  ]'
00:18:19.291    19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:19.291   19:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:18:19.291    19:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:19.291   19:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:18:19.291    19:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:19.550   19:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:19.550   19:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:19.550   19:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:19.809   19:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzFjY2Q5MDJmZTQ0MTc2NmVjNmNlNDg4ODFmODJiZDkwdlvM: --dhchap-ctrl-secret DHHC-1:02:OGQ2YzEzMWZmNzkwZWRiNjQ3ODY5ZGVlNWEyY2ZhMjc3NDZiMjllMmQ4NTI4NmEws3c3BA==:
00:18:19.809   19:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid 8f716199-a3ae-4f70-9a3a-0556e5b7497a -l 0 --dhchap-secret DHHC-1:01:MzFjY2Q5MDJmZTQ0MTc2NmVjNmNlNDg4ODFmODJiZDkwdlvM: --dhchap-ctrl-secret DHHC-1:02:OGQ2YzEzMWZmNzkwZWRiNjQ3ODY5ZGVlNWEyY2ZhMjc3NDZiMjllMmQ4NTI4NmEws3c3BA==:
00:18:20.377   19:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:20.377  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:20.377   19:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:18:20.377   19:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:20.377   19:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:20.377   19:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
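Between attach and detach, each iteration asserts that authentication really completed with the expected parameters: nvmf_subsystem_get_qpairs on the target reports the negotiated auth block for the new queue pair, and the checks at target/auth.sh lines 75-77 compare it against the loop variables. Roughly, in the shape used above:

    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]     # digest under test
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]       # DH group under test
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]  # handshake finished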
00:18:20.377   19:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:20.377   19:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:18:20.377   19:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:18:20.636   19:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2
00:18:20.636   19:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:20.636   19:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:18:20.636   19:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:18:20.636   19:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:18:20.636   19:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:20.636   19:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:20.636   19:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:20.636   19:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:20.636   19:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:20.636   19:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:20.636   19:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:20.636   19:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:20.894  
00:18:20.894    19:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:20.894    19:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:20.895    19:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:21.153   19:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:21.153    19:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:21.153    19:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:21.153    19:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:21.153    19:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:21.153   19:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:21.153  {
00:18:21.153  "auth": {
00:18:21.153  "dhgroup": "null",
00:18:21.153  "digest": "sha256",
00:18:21.153  "state": "completed"
00:18:21.153  },
00:18:21.153  "cntlid": 5,
00:18:21.153  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:18:21.153  "listen_address": {
00:18:21.153  "adrfam": "IPv4",
00:18:21.153  "traddr": "10.0.0.3",
00:18:21.153  "trsvcid": "4420",
00:18:21.153  "trtype": "TCP"
00:18:21.153  },
00:18:21.153  "peer_address": {
00:18:21.153  "adrfam": "IPv4",
00:18:21.154  "traddr": "10.0.0.1",
00:18:21.154  "trsvcid": "37126",
00:18:21.154  "trtype": "TCP"
00:18:21.154  },
00:18:21.154  "qid": 0,
00:18:21.154  "state": "enabled",
00:18:21.154  "thread": "nvmf_tgt_poll_group_000"
00:18:21.154  }
00:18:21.154  ]'
00:18:21.154    19:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:21.154   19:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:18:21.154    19:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:21.412   19:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:18:21.412    19:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:21.412   19:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:21.412   19:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:21.412   19:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:21.671   19:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzE5ZDMyY2Y1NTY0MGI2Nzk5ZWVjMjBiYmYyODllZWZkNTc1YTkzMGEwZWJkZDQ1YLrcsA==: --dhchap-ctrl-secret DHHC-1:01:N2Q3NGYzMDMzZWMzZDg1ZjhiYjRjMzg5Mzc0MzkyZjHYVRrk:
00:18:21.671   19:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid 8f716199-a3ae-4f70-9a3a-0556e5b7497a -l 0 --dhchap-secret DHHC-1:02:MzE5ZDMyY2Y1NTY0MGI2Nzk5ZWVjMjBiYmYyODllZWZkNTc1YTkzMGEwZWJkZDQ1YLrcsA==: --dhchap-ctrl-secret DHHC-1:01:N2Q3NGYzMDMzZWMzZDg1ZjhiYjRjMzg5Mzc0MzkyZjHYVRrk:
00:18:22.238   19:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:22.238  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:22.238   19:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:18:22.238   19:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:22.238   19:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:22.238   19:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
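The nvme connect / nvme disconnect pair in each iteration exercises the same key over the kernel initiator as well: instead of keyring names, nvme-cli is handed the in-band secrets directly as DHHC-1 strings (the base64 blobs visible in the log above). A sketch with the secret values factored into stand-in variables:

    # HOST_SECRET / CTRL_SECRET stand for the DHHC-1 strings logged above
    nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$HOSTNQN" --hostid 8f716199-a3ae-4f70-9a3a-0556e5b7497a -l 0 \
        --dhchap-secret "$HOST_SECRET" --dhchap-ctrl-secret "$CTRL_SECRET"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0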
00:18:22.239   19:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:22.239   19:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:18:22.239   19:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:18:22.497   19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3
00:18:22.497   19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:22.497   19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:18:22.497   19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:18:22.497   19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:18:22.497   19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:22.497   19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key3
00:18:22.497   19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:22.497   19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:22.497   19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:22.497   19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:18:22.497   19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:22.497   19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:23.065  
00:18:23.065    19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:23.065    19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:23.065    19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:23.324   19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:23.324    19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:23.324    19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:23.324    19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:23.324    19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:23.324   19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:23.324  {
00:18:23.324  "auth": {
00:18:23.324  "dhgroup": "null",
00:18:23.324  "digest": "sha256",
00:18:23.324  "state": "completed"
00:18:23.324  },
00:18:23.324  "cntlid": 7,
00:18:23.324  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:18:23.324  "listen_address": {
00:18:23.324  "adrfam": "IPv4",
00:18:23.324  "traddr": "10.0.0.3",
00:18:23.324  "trsvcid": "4420",
00:18:23.324  "trtype": "TCP"
00:18:23.324  },
00:18:23.324  "peer_address": {
00:18:23.324  "adrfam": "IPv4",
00:18:23.324  "traddr": "10.0.0.1",
00:18:23.324  "trsvcid": "37154",
00:18:23.324  "trtype": "TCP"
00:18:23.324  },
00:18:23.324  "qid": 0,
00:18:23.324  "state": "enabled",
00:18:23.324  "thread": "nvmf_tgt_poll_group_000"
00:18:23.324  }
00:18:23.324  ]'
00:18:23.324    19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:23.324   19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:18:23.324    19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:23.324   19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:18:23.324    19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:23.324   19:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:23.324   19:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:23.324   19:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:23.582   19:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjJkMDI3MTYwMzk0ZjM0Mzg1YTMxYTUxZDM0MmJmM2VkZmM2MWE2NzAyODM5NTY3NDM3YzQ3YWUzYjc2ZjJkOMNOdLE=:
00:18:23.582   19:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid 8f716199-a3ae-4f70-9a3a-0556e5b7497a -l 0 --dhchap-secret DHHC-1:03:YjJkMDI3MTYwMzk0ZjM0Mzg1YTMxYTUxZDM0MmJmM2VkZmM2MWE2NzAyODM5NTY3NDM3YzQ3YWUzYjc2ZjJkOMNOdLE=:
00:18:24.148   19:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:24.148  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:24.148   19:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:18:24.148   19:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:24.148   19:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:24.148   19:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
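With all four keys verified against the null DH group, the dhgroup loop advances: the iterations below are identical except that bdev_nvme_set_options now requests ffdhe2048, so the DH-HMAC-CHAP handshake adds a finite-field DH exchange on top of the challenge/response.

    hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048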
00:18:24.148   19:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:18:24.148   19:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:24.148   19:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:18:24.148   19:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:18:24.407   19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0
00:18:24.407   19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:24.407   19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:18:24.407   19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:18:24.407   19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:18:24.407   19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:24.407   19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:24.407   19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:24.407   19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:24.407   19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:24.407   19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:24.407   19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:24.407   19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:24.974  
00:18:24.974    19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:24.974    19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:24.974    19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:25.233   19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:25.233    19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:25.233    19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:25.233    19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:25.233    19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:25.233   19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:25.233  {
00:18:25.233  "auth": {
00:18:25.233  "dhgroup": "ffdhe2048",
00:18:25.233  "digest": "sha256",
00:18:25.233  "state": "completed"
00:18:25.233  },
00:18:25.233  "cntlid": 9,
00:18:25.233  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:18:25.233  "listen_address": {
00:18:25.233  "adrfam": "IPv4",
00:18:25.233  "traddr": "10.0.0.3",
00:18:25.233  "trsvcid": "4420",
00:18:25.233  "trtype": "TCP"
00:18:25.233  },
00:18:25.233  "peer_address": {
00:18:25.233  "adrfam": "IPv4",
00:18:25.233  "traddr": "10.0.0.1",
00:18:25.233  "trsvcid": "37176",
00:18:25.233  "trtype": "TCP"
00:18:25.233  },
00:18:25.233  "qid": 0,
00:18:25.233  "state": "enabled",
00:18:25.233  "thread": "nvmf_tgt_poll_group_000"
00:18:25.233  }
00:18:25.233  ]'
00:18:25.233    19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:25.233   19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:18:25.233    19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:25.233   19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:18:25.233    19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:25.233   19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:25.233   19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:25.233   19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:25.492   19:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGE3YjViYjg1ZTlkZDBmOWYzZDNkZjcxY2U3ZmU4YTkwN2FhODA4ZjNjYzZhYzQ26pGyPg==: --dhchap-ctrl-secret DHHC-1:03:YTMxZDZiM2E3ODg3ZjEyZDhlMzIwZDdjMzk1MDM2N2EyMzQ4ZTU3YTIyOGJiYmFjMDczZmM3MWU0YTQzMWFjYtOtav0=:
00:18:25.492   19:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid 8f716199-a3ae-4f70-9a3a-0556e5b7497a -l 0 --dhchap-secret DHHC-1:00:ZGE3YjViYjg1ZTlkZDBmOWYzZDNkZjcxY2U3ZmU4YTkwN2FhODA4ZjNjYzZhYzQ26pGyPg==: --dhchap-ctrl-secret DHHC-1:03:YTMxZDZiM2E3ODg3ZjEyZDhlMzIwZDdjMzk1MDM2N2EyMzQ4ZTU3YTIyOGJiYmFjMDczZmM3MWU0YTQzMWFjYtOtav0=:
00:18:26.059   19:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:26.059  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:26.059   19:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:18:26.059   19:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:26.059   19:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:26.059   19:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:26.059   19:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:26.059   19:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:18:26.059   19:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:18:26.317   19:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1
00:18:26.317   19:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:26.317   19:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:18:26.318   19:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:18:26.318   19:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:18:26.318   19:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:26.318   19:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:26.318   19:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:26.318   19:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:26.318   19:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:26.318   19:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:26.318   19:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:26.318   19:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:26.576  
00:18:26.576    19:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:26.576    19:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:26.576    19:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:26.834   19:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:26.834    19:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:26.834    19:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:26.834    19:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:26.834    19:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:26.834   19:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:26.834  {
00:18:26.834  "auth": {
00:18:26.834  "dhgroup": "ffdhe2048",
00:18:26.834  "digest": "sha256",
00:18:26.834  "state": "completed"
00:18:26.834  },
00:18:26.834  "cntlid": 11,
00:18:26.834  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:18:26.834  "listen_address": {
00:18:26.834  "adrfam": "IPv4",
00:18:26.834  "traddr": "10.0.0.3",
00:18:26.834  "trsvcid": "4420",
00:18:26.834  "trtype": "TCP"
00:18:26.834  },
00:18:26.834  "peer_address": {
00:18:26.834  "adrfam": "IPv4",
00:18:26.834  "traddr": "10.0.0.1",
00:18:26.834  "trsvcid": "47044",
00:18:26.834  "trtype": "TCP"
00:18:26.834  },
00:18:26.834  "qid": 0,
00:18:26.834  "state": "enabled",
00:18:26.834  "thread": "nvmf_tgt_poll_group_000"
00:18:26.834  }
00:18:26.834  ]'
00:18:26.834    19:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:27.092   19:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:18:27.092    19:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:27.092   19:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:18:27.092    19:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:27.092   19:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:27.092   19:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:27.092   19:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:27.350   19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzFjY2Q5MDJmZTQ0MTc2NmVjNmNlNDg4ODFmODJiZDkwdlvM: --dhchap-ctrl-secret DHHC-1:02:OGQ2YzEzMWZmNzkwZWRiNjQ3ODY5ZGVlNWEyY2ZhMjc3NDZiMjllMmQ4NTI4NmEws3c3BA==:
00:18:27.350   19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid 8f716199-a3ae-4f70-9a3a-0556e5b7497a -l 0 --dhchap-secret DHHC-1:01:MzFjY2Q5MDJmZTQ0MTc2NmVjNmNlNDg4ODFmODJiZDkwdlvM: --dhchap-ctrl-secret DHHC-1:02:OGQ2YzEzMWZmNzkwZWRiNjQ3ODY5ZGVlNWEyY2ZhMjc3NDZiMjllMmQ4NTI4NmEws3c3BA==:
00:18:27.916   19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:27.916  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:27.916   19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:18:27.916   19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:27.916   19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:27.916   19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:27.916   19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:27.916   19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:18:27.916   19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:18:28.188   19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2
00:18:28.188   19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:28.188   19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:18:28.188   19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:18:28.188   19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:18:28.188   19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:28.188   19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:28.188   19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:28.188   19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:28.188   19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:28.188   19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:28.188   19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:28.188   19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:28.769  
00:18:28.769    19:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:28.769    19:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:28.769    19:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:29.028   19:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:29.028    19:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:29.028    19:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:29.028    19:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:29.028    19:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:29.028   19:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:29.028  {
00:18:29.028  "auth": {
00:18:29.028  "dhgroup": "ffdhe2048",
00:18:29.028  "digest": "sha256",
00:18:29.028  "state": "completed"
00:18:29.028  },
00:18:29.028  "cntlid": 13,
00:18:29.028  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:18:29.028  "listen_address": {
00:18:29.028  "adrfam": "IPv4",
00:18:29.028  "traddr": "10.0.0.3",
00:18:29.028  "trsvcid": "4420",
00:18:29.028  "trtype": "TCP"
00:18:29.028  },
00:18:29.028  "peer_address": {
00:18:29.028  "adrfam": "IPv4",
00:18:29.028  "traddr": "10.0.0.1",
00:18:29.028  "trsvcid": "47062",
00:18:29.028  "trtype": "TCP"
00:18:29.028  },
00:18:29.028  "qid": 0,
00:18:29.028  "state": "enabled",
00:18:29.028  "thread": "nvmf_tgt_poll_group_000"
00:18:29.028  }
00:18:29.028  ]'
00:18:29.028    19:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:29.028   19:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:18:29.028    19:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:29.028   19:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:18:29.028    19:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:29.028   19:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:29.028   19:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:29.028   19:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:29.287   19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzE5ZDMyY2Y1NTY0MGI2Nzk5ZWVjMjBiYmYyODllZWZkNTc1YTkzMGEwZWJkZDQ1YLrcsA==: --dhchap-ctrl-secret DHHC-1:01:N2Q3NGYzMDMzZWMzZDg1ZjhiYjRjMzg5Mzc0MzkyZjHYVRrk:
00:18:29.287   19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid 8f716199-a3ae-4f70-9a3a-0556e5b7497a -l 0 --dhchap-secret DHHC-1:02:MzE5ZDMyY2Y1NTY0MGI2Nzk5ZWVjMjBiYmYyODllZWZkNTc1YTkzMGEwZWJkZDQ1YLrcsA==: --dhchap-ctrl-secret DHHC-1:01:N2Q3NGYzMDMzZWMzZDg1ZjhiYjRjMzg5Mzc0MzkyZjHYVRrk:
00:18:30.222   19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:30.222  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:30.223   19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:18:30.223   19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:30.223   19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:30.223   19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:30.223   19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:30.223   19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:18:30.223   19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:18:30.223   19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3
00:18:30.223   19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:30.223   19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:18:30.223   19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:18:30.223   19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:18:30.223   19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:30.223   19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key3
00:18:30.223   19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:30.223   19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:30.223   19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:30.223   19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:18:30.223   19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:30.223   19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:30.790  
00:18:30.791    19:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:30.791    19:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:30.791    19:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:31.049   19:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:31.049    19:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:31.049    19:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:31.049    19:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:31.049    19:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:31.049   19:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:31.049  {
00:18:31.049  "auth": {
00:18:31.049  "dhgroup": "ffdhe2048",
00:18:31.049  "digest": "sha256",
00:18:31.049  "state": "completed"
00:18:31.049  },
00:18:31.049  "cntlid": 15,
00:18:31.049  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:18:31.049  "listen_address": {
00:18:31.049  "adrfam": "IPv4",
00:18:31.049  "traddr": "10.0.0.3",
00:18:31.049  "trsvcid": "4420",
00:18:31.049  "trtype": "TCP"
00:18:31.049  },
00:18:31.049  "peer_address": {
00:18:31.049  "adrfam": "IPv4",
00:18:31.049  "traddr": "10.0.0.1",
00:18:31.049  "trsvcid": "47092",
00:18:31.049  "trtype": "TCP"
00:18:31.049  },
00:18:31.049  "qid": 0,
00:18:31.049  "state": "enabled",
00:18:31.049  "thread": "nvmf_tgt_poll_group_000"
00:18:31.049  }
00:18:31.049  ]'
00:18:31.049    19:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:31.049   19:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:18:31.049    19:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:31.049   19:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:18:31.049    19:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:31.049   19:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
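The three checks at @75, @76 and @77 above verify, one field at a time, that the admin qpair negotiated the expected digest and DH group and that authentication reached the "completed" state. The same assertion could be collapsed into a single jq predicate, using the rpc_cmd helper seen throughout the log:

    # Equivalent single check on the qpair JSON shown above; jq -e makes the
    # exit status reflect the boolean, so any mismatch fails the pipeline.
    rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 |
        jq -e '.[0].auth | .digest == "sha256" and .dhgroup == "ffdhe2048" and .state == "completed"'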
00:18:31.049   19:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:31.049   19:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:31.616   19:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjJkMDI3MTYwMzk0ZjM0Mzg1YTMxYTUxZDM0MmJmM2VkZmM2MWE2NzAyODM5NTY3NDM3YzQ3YWUzYjc2ZjJkOMNOdLE=:
00:18:31.616   19:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid 8f716199-a3ae-4f70-9a3a-0556e5b7497a -l 0 --dhchap-secret DHHC-1:03:YjJkMDI3MTYwMzk0ZjM0Mzg1YTMxYTUxZDM0MmJmM2VkZmM2MWE2NzAyODM5NTY3NDM3YzQ3YWUzYjc2ZjJkOMNOdLE=:
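The --dhchap-secret and --dhchap-ctrl-secret strings follow the NVMe DH-HMAC-CHAP key representation: a "DHHC-1:" prefix, a two-digit identifier for the hash used to transform the secret (00 meaning no transformation, 01/02/03 for SHA-256/384/512), a base64 blob carrying the key material plus a CRC-32 check, and a trailing ":". Recent nvme-cli releases can generate such strings; the following is illustrative only, since flag spellings vary between nvme-cli versions:

    # Illustrative only: generate a DHHC-1 secret bound to the host NQN above.
    # -m selects the transformation HMAC (0=none, 1=SHA-256, 2=SHA-384, 3=SHA-512),
    # -l the key length in bytes, -n the NQN folded into the transformation.
    nvme gen-dhchap-key -m 2 -l 48 \
        -n nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a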
00:18:32.184   19:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:32.184  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:32.184   19:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:18:32.184   19:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:32.184   19:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:32.184   19:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
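Each combination ends with the teardown seen just above: the kernel initiator disconnects from cnode0 and the host entry (with its DH-CHAP key assignment) is removed from the subsystem, so the next digest/dhgroup/key combination starts clean. Factored out, the cleanup amounts to something like the following sketch, with $hostnqn standing in for the UUID-based host NQN used throughout this run:

    # Per-iteration cleanup mirroring auth.sh@82-83 above.
    cleanup_iteration() {
        nvme disconnect -n nqn.2024-03.io.spdk:cnode0
        rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"
    }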
00:18:32.184   19:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:18:32.184   19:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
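The two for-lines above are the sweep that drives the rest of this excerpt: an outer loop over the configured DH groups and an inner loop over the key indices, with connect_authenticate run once per combination (sha256 with ffdhe2048 gives way to ffdhe3072 here and to ffdhe4096 further down). Roughly, and with the array contents assumed rather than taken from the script:

    # Rough shape of the sweep; only the groups and keys visible in this excerpt
    # are listed, and hostrpc/connect_authenticate are the auth.sh helpers.
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096)
    keys=(key0 key1 key2 key3)   # stand-ins; the real script stores generated secrets here
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
            connect_authenticate sha256 "$dhgroup" "$keyid"
        done
    done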
00:18:32.185   19:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:18:32.185   19:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:18:32.444   19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0
00:18:32.444   19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:32.444   19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:18:32.444   19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:18:32.444   19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:18:32.444   19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:32.444   19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key0 --dhchap-ctrlr-key ckey0
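Two variants of nvmf_subsystem_add_host appear in this excerpt: key3 is registered with --dhchap-key alone (unidirectional authentication, only the host proves its identity), while key0, key1 and key2 also pass --dhchap-ctrlr-key ckeyN, making the authentication bidirectional so the controller must answer a challenge as well. The switch is the ckey=(...) expansion at @68: bash's ${var:+word} emits the extra arguments only when a controller key exists for that index. A compact, self-contained illustration with hypothetical values:

    # ${ckeys[i]:+...} expands to the bracketed words only when ckeys[i] is set
    # and non-empty, so indices without a controller key silently drop the
    # --dhchap-ctrlr-key argument (unidirectional auth for that key).
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
    ckeys=("some-ctrlr-secret" "")          # hypothetical: index 1 has no controller key
    for i in 0 1; do
        ckey=(${ckeys[i]:+--dhchap-ctrlr-key "ckey$i"})
        echo rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
            --dhchap-key "key$i" "${ckey[@]}"
    done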
00:18:32.444   19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:32.444   19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:32.444   19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:32.444   19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:32.444   19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:32.444   19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:32.703  
00:18:32.703    19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:32.703    19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:32.703    19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:32.962   19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:32.962    19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:32.962    19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:32.962    19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:33.221    19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:33.221   19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:33.221  {
00:18:33.221  "auth": {
00:18:33.221  "dhgroup": "ffdhe3072",
00:18:33.221  "digest": "sha256",
00:18:33.221  "state": "completed"
00:18:33.221  },
00:18:33.221  "cntlid": 17,
00:18:33.221  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:18:33.221  "listen_address": {
00:18:33.221  "adrfam": "IPv4",
00:18:33.221  "traddr": "10.0.0.3",
00:18:33.221  "trsvcid": "4420",
00:18:33.221  "trtype": "TCP"
00:18:33.221  },
00:18:33.221  "peer_address": {
00:18:33.221  "adrfam": "IPv4",
00:18:33.221  "traddr": "10.0.0.1",
00:18:33.221  "trsvcid": "47116",
00:18:33.221  "trtype": "TCP"
00:18:33.221  },
00:18:33.221  "qid": 0,
00:18:33.221  "state": "enabled",
00:18:33.221  "thread": "nvmf_tgt_poll_group_000"
00:18:33.221  }
00:18:33.221  ]'
00:18:33.221    19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:33.221   19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:18:33.221    19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:33.221   19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:18:33.221    19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:33.221   19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:33.221   19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:33.221   19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
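Two different teardown paths appear in every iteration. bdev_nvme_detach_controller, issued through hostrpc here, removes the controller that the SPDK host attached at @71; the later "nvme disconnect" removes the connection made by the kernel initiator through nvme-cli at @36. Both leave the target subsystem itself in place:

    # SPDK host side: drop the bdev controller created by bdev_nvme_attach_controller.
    hostrpc bdev_nvme_detach_controller nvme0
    # Kernel initiator side: drop the connection created by 'nvme connect'.
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0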
00:18:33.479   19:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGE3YjViYjg1ZTlkZDBmOWYzZDNkZjcxY2U3ZmU4YTkwN2FhODA4ZjNjYzZhYzQ26pGyPg==: --dhchap-ctrl-secret DHHC-1:03:YTMxZDZiM2E3ODg3ZjEyZDhlMzIwZDdjMzk1MDM2N2EyMzQ4ZTU3YTIyOGJiYmFjMDczZmM3MWU0YTQzMWFjYtOtav0=:
00:18:33.479   19:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid 8f716199-a3ae-4f70-9a3a-0556e5b7497a -l 0 --dhchap-secret DHHC-1:00:ZGE3YjViYjg1ZTlkZDBmOWYzZDNkZjcxY2U3ZmU4YTkwN2FhODA4ZjNjYzZhYzQ26pGyPg==: --dhchap-ctrl-secret DHHC-1:03:YTMxZDZiM2E3ODg3ZjEyZDhlMzIwZDdjMzk1MDM2N2EyMzQ4ZTU3YTIyOGJiYmFjMDczZmM3MWU0YTQzMWFjYtOtav0=:
00:18:34.413   19:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:34.413  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:34.413   19:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:18:34.413   19:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:34.413   19:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:34.413   19:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:34.413   19:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:34.413   19:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:18:34.413   19:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:18:34.671   19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1
00:18:34.671   19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:34.671   19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:18:34.671   19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:18:34.671   19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:18:34.671   19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:34.671   19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:34.671   19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:34.671   19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:34.671   19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:34.671   19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:34.671   19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:34.671   19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:34.930  
00:18:34.930    19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:34.930    19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:34.930    19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:35.188   19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:35.188    19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:35.188    19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:35.188    19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:35.188    19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:35.188   19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:35.188  {
00:18:35.188  "auth": {
00:18:35.188  "dhgroup": "ffdhe3072",
00:18:35.188  "digest": "sha256",
00:18:35.188  "state": "completed"
00:18:35.188  },
00:18:35.188  "cntlid": 19,
00:18:35.188  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:18:35.188  "listen_address": {
00:18:35.188  "adrfam": "IPv4",
00:18:35.188  "traddr": "10.0.0.3",
00:18:35.188  "trsvcid": "4420",
00:18:35.188  "trtype": "TCP"
00:18:35.188  },
00:18:35.188  "peer_address": {
00:18:35.188  "adrfam": "IPv4",
00:18:35.188  "traddr": "10.0.0.1",
00:18:35.188  "trsvcid": "47144",
00:18:35.188  "trtype": "TCP"
00:18:35.188  },
00:18:35.188  "qid": 0,
00:18:35.188  "state": "enabled",
00:18:35.188  "thread": "nvmf_tgt_poll_group_000"
00:18:35.188  }
00:18:35.188  ]'
00:18:35.188    19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:35.447   19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:18:35.447    19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:35.447   19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:18:35.447    19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:35.447   19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:35.447   19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:35.447   19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:35.706   19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzFjY2Q5MDJmZTQ0MTc2NmVjNmNlNDg4ODFmODJiZDkwdlvM: --dhchap-ctrl-secret DHHC-1:02:OGQ2YzEzMWZmNzkwZWRiNjQ3ODY5ZGVlNWEyY2ZhMjc3NDZiMjllMmQ4NTI4NmEws3c3BA==:
00:18:35.706   19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid 8f716199-a3ae-4f70-9a3a-0556e5b7497a -l 0 --dhchap-secret DHHC-1:01:MzFjY2Q5MDJmZTQ0MTc2NmVjNmNlNDg4ODFmODJiZDkwdlvM: --dhchap-ctrl-secret DHHC-1:02:OGQ2YzEzMWZmNzkwZWRiNjQ3ODY5ZGVlNWEyY2ZhMjc3NDZiMjllMmQ4NTI4NmEws3c3BA==:
00:18:36.274   19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:36.274  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:36.274   19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:18:36.274   19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:36.274   19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:36.274   19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:36.274   19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:36.274   19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:18:36.274   19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:18:36.532   19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2
00:18:36.533   19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:36.533   19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:18:36.533   19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:18:36.533   19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:18:36.533   19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:36.533   19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:36.533   19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:36.533   19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:36.533   19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:36.533   19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:36.533   19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:36.533   19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:36.792  
00:18:37.051    19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:37.051    19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:37.051    19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:37.309   19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:37.309    19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:37.310    19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:37.310    19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:37.310    19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:37.310   19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:37.310  {
00:18:37.310  "auth": {
00:18:37.310  "dhgroup": "ffdhe3072",
00:18:37.310  "digest": "sha256",
00:18:37.310  "state": "completed"
00:18:37.310  },
00:18:37.310  "cntlid": 21,
00:18:37.310  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:18:37.310  "listen_address": {
00:18:37.310  "adrfam": "IPv4",
00:18:37.310  "traddr": "10.0.0.3",
00:18:37.310  "trsvcid": "4420",
00:18:37.310  "trtype": "TCP"
00:18:37.310  },
00:18:37.310  "peer_address": {
00:18:37.310  "adrfam": "IPv4",
00:18:37.310  "traddr": "10.0.0.1",
00:18:37.310  "trsvcid": "58104",
00:18:37.310  "trtype": "TCP"
00:18:37.310  },
00:18:37.310  "qid": 0,
00:18:37.310  "state": "enabled",
00:18:37.310  "thread": "nvmf_tgt_poll_group_000"
00:18:37.310  }
00:18:37.310  ]'
00:18:37.310    19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:37.310   19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:18:37.310    19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:37.310   19:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:18:37.310    19:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:37.310   19:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:37.310   19:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:37.310   19:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:37.568   19:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzE5ZDMyY2Y1NTY0MGI2Nzk5ZWVjMjBiYmYyODllZWZkNTc1YTkzMGEwZWJkZDQ1YLrcsA==: --dhchap-ctrl-secret DHHC-1:01:N2Q3NGYzMDMzZWMzZDg1ZjhiYjRjMzg5Mzc0MzkyZjHYVRrk:
00:18:37.568   19:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid 8f716199-a3ae-4f70-9a3a-0556e5b7497a -l 0 --dhchap-secret DHHC-1:02:MzE5ZDMyY2Y1NTY0MGI2Nzk5ZWVjMjBiYmYyODllZWZkNTc1YTkzMGEwZWJkZDQ1YLrcsA==: --dhchap-ctrl-secret DHHC-1:01:N2Q3NGYzMDMzZWMzZDg1ZjhiYjRjMzg5Mzc0MzkyZjHYVRrk:
00:18:38.136   19:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:38.136  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:38.136   19:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:18:38.136   19:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:38.136   19:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:38.136   19:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:38.136   19:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:38.136   19:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:18:38.136   19:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:18:38.395   19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3
00:18:38.395   19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:38.395   19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:18:38.395   19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:18:38.395   19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:18:38.395   19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:38.395   19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key3
00:18:38.395   19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:38.395   19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:38.395   19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:38.395   19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:18:38.395   19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:38.395   19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:38.963  
00:18:38.963    19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:38.963    19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:38.963    19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:39.222   19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:39.222    19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:39.222    19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:39.222    19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:39.222    19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:39.222   19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:39.222  {
00:18:39.222  "auth": {
00:18:39.222  "dhgroup": "ffdhe3072",
00:18:39.222  "digest": "sha256",
00:18:39.222  "state": "completed"
00:18:39.222  },
00:18:39.222  "cntlid": 23,
00:18:39.222  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:18:39.222  "listen_address": {
00:18:39.222  "adrfam": "IPv4",
00:18:39.222  "traddr": "10.0.0.3",
00:18:39.222  "trsvcid": "4420",
00:18:39.222  "trtype": "TCP"
00:18:39.222  },
00:18:39.222  "peer_address": {
00:18:39.222  "adrfam": "IPv4",
00:18:39.222  "traddr": "10.0.0.1",
00:18:39.222  "trsvcid": "58114",
00:18:39.222  "trtype": "TCP"
00:18:39.222  },
00:18:39.222  "qid": 0,
00:18:39.222  "state": "enabled",
00:18:39.222  "thread": "nvmf_tgt_poll_group_000"
00:18:39.222  }
00:18:39.222  ]'
00:18:39.222    19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:39.222   19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:18:39.222    19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:39.222   19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:18:39.222    19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:39.222   19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:39.222   19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:39.222   19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:39.481   19:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjJkMDI3MTYwMzk0ZjM0Mzg1YTMxYTUxZDM0MmJmM2VkZmM2MWE2NzAyODM5NTY3NDM3YzQ3YWUzYjc2ZjJkOMNOdLE=:
00:18:39.481   19:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid 8f716199-a3ae-4f70-9a3a-0556e5b7497a -l 0 --dhchap-secret DHHC-1:03:YjJkMDI3MTYwMzk0ZjM0Mzg1YTMxYTUxZDM0MmJmM2VkZmM2MWE2NzAyODM5NTY3NDM3YzQ3YWUzYjc2ZjJkOMNOdLE=:
00:18:40.048   19:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:40.048  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:40.048   19:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:18:40.048   19:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:40.048   19:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:40.048   19:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:40.048   19:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:18:40.048   19:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:40.048   19:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:18:40.048   19:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
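From this point the sweep moves to ffdhe4096. The FFDHE group names encode the modulus size in bits (2048, 3072, 4096), and larger moduli make the Diffie-Hellman part of the handshake measurably more expensive on both sides. A rough, purely illustrative way to compare per-group cost would be to time the authenticated attach, with $hostnqn again standing in for the host NQN above:

    # Rough comparison point: time one authenticated attach per DH group.
    time hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0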
00:18:40.307   19:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0
00:18:40.307   19:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:40.307   19:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:18:40.307   19:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:18:40.307   19:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:18:40.307   19:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:40.307   19:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:40.307   19:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:40.307   19:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:40.307   19:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:40.307   19:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:40.307   19:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:40.307   19:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:40.874  
00:18:40.874    19:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:40.874    19:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:40.874    19:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:41.133   19:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:41.133    19:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:41.133    19:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:41.133    19:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:41.133    19:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:41.133   19:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:41.133  {
00:18:41.133  "auth": {
00:18:41.133  "dhgroup": "ffdhe4096",
00:18:41.133  "digest": "sha256",
00:18:41.133  "state": "completed"
00:18:41.133  },
00:18:41.133  "cntlid": 25,
00:18:41.133  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:18:41.133  "listen_address": {
00:18:41.133  "adrfam": "IPv4",
00:18:41.133  "traddr": "10.0.0.3",
00:18:41.133  "trsvcid": "4420",
00:18:41.133  "trtype": "TCP"
00:18:41.133  },
00:18:41.133  "peer_address": {
00:18:41.133  "adrfam": "IPv4",
00:18:41.133  "traddr": "10.0.0.1",
00:18:41.133  "trsvcid": "58128",
00:18:41.133  "trtype": "TCP"
00:18:41.133  },
00:18:41.133  "qid": 0,
00:18:41.133  "state": "enabled",
00:18:41.133  "thread": "nvmf_tgt_poll_group_000"
00:18:41.133  }
00:18:41.133  ]'
00:18:41.133    19:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:41.133   19:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:18:41.133    19:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:41.133   19:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:18:41.133    19:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:41.133   19:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:41.133   19:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:41.133   19:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:41.406   19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGE3YjViYjg1ZTlkZDBmOWYzZDNkZjcxY2U3ZmU4YTkwN2FhODA4ZjNjYzZhYzQ26pGyPg==: --dhchap-ctrl-secret DHHC-1:03:YTMxZDZiM2E3ODg3ZjEyZDhlMzIwZDdjMzk1MDM2N2EyMzQ4ZTU3YTIyOGJiYmFjMDczZmM3MWU0YTQzMWFjYtOtav0=:
00:18:41.406   19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid 8f716199-a3ae-4f70-9a3a-0556e5b7497a -l 0 --dhchap-secret DHHC-1:00:ZGE3YjViYjg1ZTlkZDBmOWYzZDNkZjcxY2U3ZmU4YTkwN2FhODA4ZjNjYzZhYzQ26pGyPg==: --dhchap-ctrl-secret DHHC-1:03:YTMxZDZiM2E3ODg3ZjEyZDhlMzIwZDdjMzk1MDM2N2EyMzQ4ZTU3YTIyOGJiYmFjMDczZmM3MWU0YTQzMWFjYtOtav0=:
00:18:41.987   19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:41.987  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:41.987   19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:18:41.987   19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:41.987   19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:41.987   19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:41.987   19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:41.987   19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:18:41.987   19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:18:42.245   19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1
00:18:42.245   19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:42.245   19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:18:42.245   19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:18:42.245   19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:18:42.245   19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:42.245   19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:42.245   19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:42.245   19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:42.245   19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:42.245   19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:42.245   19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:42.246   19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:42.504  
00:18:42.765    19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:42.765    19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:42.765    19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:43.027   19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:43.027    19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:43.027    19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:43.027    19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:43.027    19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:43.027   19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:43.027  {
00:18:43.027  "auth": {
00:18:43.027  "dhgroup": "ffdhe4096",
00:18:43.027  "digest": "sha256",
00:18:43.027  "state": "completed"
00:18:43.027  },
00:18:43.027  "cntlid": 27,
00:18:43.027  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:18:43.027  "listen_address": {
00:18:43.027  "adrfam": "IPv4",
00:18:43.027  "traddr": "10.0.0.3",
00:18:43.027  "trsvcid": "4420",
00:18:43.027  "trtype": "TCP"
00:18:43.027  },
00:18:43.027  "peer_address": {
00:18:43.027  "adrfam": "IPv4",
00:18:43.027  "traddr": "10.0.0.1",
00:18:43.027  "trsvcid": "58160",
00:18:43.027  "trtype": "TCP"
00:18:43.027  },
00:18:43.027  "qid": 0,
00:18:43.027  "state": "enabled",
00:18:43.027  "thread": "nvmf_tgt_poll_group_000"
00:18:43.027  }
00:18:43.027  ]'
00:18:43.027    19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:43.027   19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:18:43.027    19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:43.027   19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:18:43.027    19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:43.027   19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:43.027   19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:43.027   19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:43.285   19:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzFjY2Q5MDJmZTQ0MTc2NmVjNmNlNDg4ODFmODJiZDkwdlvM: --dhchap-ctrl-secret DHHC-1:02:OGQ2YzEzMWZmNzkwZWRiNjQ3ODY5ZGVlNWEyY2ZhMjc3NDZiMjllMmQ4NTI4NmEws3c3BA==:
00:18:43.285   19:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid 8f716199-a3ae-4f70-9a3a-0556e5b7497a -l 0 --dhchap-secret DHHC-1:01:MzFjY2Q5MDJmZTQ0MTc2NmVjNmNlNDg4ODFmODJiZDkwdlvM: --dhchap-ctrl-secret DHHC-1:02:OGQ2YzEzMWZmNzkwZWRiNjQ3ODY5ZGVlNWEyY2ZhMjc3NDZiMjllMmQ4NTI4NmEws3c3BA==:
00:18:44.219   19:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:44.219  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:44.219   19:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:18:44.219   19:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:44.219   19:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:44.219   19:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:44.219   19:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:44.219   19:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:18:44.219   19:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:18:44.477   19:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2
00:18:44.477   19:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:44.477   19:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:18:44.477   19:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:18:44.477   19:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:18:44.477   19:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:44.477   19:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:44.477   19:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:44.477   19:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:44.477   19:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:44.477   19:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:44.477   19:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:44.478   19:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:44.736  
00:18:44.736    19:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:44.736    19:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:44.736    19:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:44.994   19:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:44.994    19:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:44.994    19:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:44.994    19:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:44.994    19:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:44.994   19:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:44.994  {
00:18:44.994  "auth": {
00:18:44.994  "dhgroup": "ffdhe4096",
00:18:44.994  "digest": "sha256",
00:18:44.994  "state": "completed"
00:18:44.994  },
00:18:44.994  "cntlid": 29,
00:18:44.994  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:18:44.994  "listen_address": {
00:18:44.994  "adrfam": "IPv4",
00:18:44.994  "traddr": "10.0.0.3",
00:18:44.994  "trsvcid": "4420",
00:18:44.994  "trtype": "TCP"
00:18:44.994  },
00:18:44.994  "peer_address": {
00:18:44.994  "adrfam": "IPv4",
00:18:44.994  "traddr": "10.0.0.1",
00:18:44.994  "trsvcid": "58186",
00:18:44.994  "trtype": "TCP"
00:18:44.994  },
00:18:44.994  "qid": 0,
00:18:44.994  "state": "enabled",
00:18:44.994  "thread": "nvmf_tgt_poll_group_000"
00:18:44.994  }
00:18:44.994  ]'
00:18:44.994    19:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:44.994   19:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:18:44.994    19:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:44.994   19:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:18:44.994    19:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:44.994   19:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:44.994   19:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:44.994   19:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:45.253   19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzE5ZDMyY2Y1NTY0MGI2Nzk5ZWVjMjBiYmYyODllZWZkNTc1YTkzMGEwZWJkZDQ1YLrcsA==: --dhchap-ctrl-secret DHHC-1:01:N2Q3NGYzMDMzZWMzZDg1ZjhiYjRjMzg5Mzc0MzkyZjHYVRrk:
00:18:45.253   19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid 8f716199-a3ae-4f70-9a3a-0556e5b7497a -l 0 --dhchap-secret DHHC-1:02:MzE5ZDMyY2Y1NTY0MGI2Nzk5ZWVjMjBiYmYyODllZWZkNTc1YTkzMGEwZWJkZDQ1YLrcsA==: --dhchap-ctrl-secret DHHC-1:01:N2Q3NGYzMDMzZWMzZDg1ZjhiYjRjMzg5Mzc0MzkyZjHYVRrk:
00:18:45.819   19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:45.819  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:46.077   19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:18:46.077   19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:46.077   19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:46.077   19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:46.077   19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:46.077   19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:18:46.077   19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:18:46.077   19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3
00:18:46.078   19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:46.078   19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:18:46.078   19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:18:46.078   19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:18:46.078   19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:46.078   19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key3
00:18:46.078   19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:46.078   19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:46.078   19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:46.078   19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:18:46.078   19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:46.078   19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:46.643  
00:18:46.643    19:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:46.643    19:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:46.643    19:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:46.901   19:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:46.901    19:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:46.901    19:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:46.901    19:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:46.901    19:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:46.901   19:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:46.901  {
00:18:46.901  "auth": {
00:18:46.901  "dhgroup": "ffdhe4096",
00:18:46.901  "digest": "sha256",
00:18:46.901  "state": "completed"
00:18:46.901  },
00:18:46.901  "cntlid": 31,
00:18:46.901  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:18:46.901  "listen_address": {
00:18:46.901  "adrfam": "IPv4",
00:18:46.901  "traddr": "10.0.0.3",
00:18:46.901  "trsvcid": "4420",
00:18:46.901  "trtype": "TCP"
00:18:46.901  },
00:18:46.901  "peer_address": {
00:18:46.901  "adrfam": "IPv4",
00:18:46.901  "traddr": "10.0.0.1",
00:18:46.901  "trsvcid": "52498",
00:18:46.901  "trtype": "TCP"
00:18:46.901  },
00:18:46.901  "qid": 0,
00:18:46.901  "state": "enabled",
00:18:46.901  "thread": "nvmf_tgt_poll_group_000"
00:18:46.901  }
00:18:46.901  ]'
00:18:46.901    19:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:46.901   19:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:18:46.901    19:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:46.901   19:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:18:46.901    19:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:47.160   19:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:47.160   19:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:47.160   19:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:47.418   19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjJkMDI3MTYwMzk0ZjM0Mzg1YTMxYTUxZDM0MmJmM2VkZmM2MWE2NzAyODM5NTY3NDM3YzQ3YWUzYjc2ZjJkOMNOdLE=:
00:18:47.418   19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid 8f716199-a3ae-4f70-9a3a-0556e5b7497a -l 0 --dhchap-secret DHHC-1:03:YjJkMDI3MTYwMzk0ZjM0Mzg1YTMxYTUxZDM0MmJmM2VkZmM2MWE2NzAyODM5NTY3NDM3YzQ3YWUzYjc2ZjJkOMNOdLE=:
00:18:47.985   19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:47.985  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:47.985   19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:18:47.985   19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:47.985   19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:47.985   19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
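That cycle then repeats for every key index and, from this point, for the next DH group. The driving loop, inferred from the for-loop traces in this log (only the sha256 digest and the groups ffdhe4096, ffdhe6144, and ffdhe8192 appear in this excerpt; the full run may cover more), looks roughly like:

  # Loop structure as suggested by the auth.sh@119/@120/@121/@123 traces above.
  for dhgroup in ffdhe4096 ffdhe6144 ffdhe8192; do
      for keyid in 0 1 2 3; do
          hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
          connect_authenticate sha256 "$dhgroup" "$keyid"
      done
  done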
00:18:47.985   19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:18:47.985   19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:47.985   19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:18:47.985   19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:18:48.243   19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0
00:18:48.243   19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:48.243   19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:18:48.243   19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:18:48.243   19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:18:48.243   19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:48.243   19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:48.243   19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:48.243   19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:48.243   19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:48.243   19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:48.243   19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:48.243   19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:48.810  
00:18:48.810    19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:48.810    19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:48.810    19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:49.069   19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:49.069    19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:49.069    19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:49.069    19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:49.069    19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:49.069   19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:49.069  {
00:18:49.069  "auth": {
00:18:49.069  "dhgroup": "ffdhe6144",
00:18:49.069  "digest": "sha256",
00:18:49.069  "state": "completed"
00:18:49.069  },
00:18:49.069  "cntlid": 33,
00:18:49.069  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:18:49.069  "listen_address": {
00:18:49.069  "adrfam": "IPv4",
00:18:49.069  "traddr": "10.0.0.3",
00:18:49.069  "trsvcid": "4420",
00:18:49.069  "trtype": "TCP"
00:18:49.069  },
00:18:49.069  "peer_address": {
00:18:49.069  "adrfam": "IPv4",
00:18:49.069  "traddr": "10.0.0.1",
00:18:49.069  "trsvcid": "52546",
00:18:49.069  "trtype": "TCP"
00:18:49.069  },
00:18:49.069  "qid": 0,
00:18:49.069  "state": "enabled",
00:18:49.069  "thread": "nvmf_tgt_poll_group_000"
00:18:49.069  }
00:18:49.069  ]'
00:18:49.069    19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:49.069   19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:18:49.069    19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:49.069   19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:18:49.069    19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:49.069   19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:49.069   19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:49.069   19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:49.637   19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGE3YjViYjg1ZTlkZDBmOWYzZDNkZjcxY2U3ZmU4YTkwN2FhODA4ZjNjYzZhYzQ26pGyPg==: --dhchap-ctrl-secret DHHC-1:03:YTMxZDZiM2E3ODg3ZjEyZDhlMzIwZDdjMzk1MDM2N2EyMzQ4ZTU3YTIyOGJiYmFjMDczZmM3MWU0YTQzMWFjYtOtav0=:
00:18:49.637   19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid 8f716199-a3ae-4f70-9a3a-0556e5b7497a -l 0 --dhchap-secret DHHC-1:00:ZGE3YjViYjg1ZTlkZDBmOWYzZDNkZjcxY2U3ZmU4YTkwN2FhODA4ZjNjYzZhYzQ26pGyPg==: --dhchap-ctrl-secret DHHC-1:03:YTMxZDZiM2E3ODg3ZjEyZDhlMzIwZDdjMzk1MDM2N2EyMzQ4ZTU3YTIyOGJiYmFjMDczZmM3MWU0YTQzMWFjYtOtav0=:
00:18:50.205   19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:50.205  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:50.205   19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:18:50.205   19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:50.205   19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:50.205   19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
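The three separate jq probes used above to check the negotiated digest, dhgroup, and auth state could equivalently be collapsed into a single jq -e expression, whose exit status reflects the boolean result (a possible consolidation, assuming the same qpair JSON shape shown in this log):

  # Exit 0 only if the first qpair completed sha256/ffdhe6144 authentication.
  rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 |
      jq -e '.[0].auth | .digest == "sha256" and .dhgroup == "ffdhe6144" and .state == "completed"'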
00:18:50.205   19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:50.205   19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:18:50.205   19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:18:50.463   19:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1
00:18:50.463   19:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:50.463   19:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:18:50.463   19:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:18:50.463   19:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:18:50.463   19:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:50.463   19:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:50.463   19:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:50.463   19:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:50.463   19:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:50.463   19:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:50.463   19:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:50.463   19:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:51.031  
00:18:51.031    19:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:51.031    19:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:51.031    19:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:51.290   19:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:51.290    19:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:51.290    19:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:51.290    19:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:51.290    19:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:51.290   19:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:51.290  {
00:18:51.290  "auth": {
00:18:51.290  "dhgroup": "ffdhe6144",
00:18:51.290  "digest": "sha256",
00:18:51.290  "state": "completed"
00:18:51.290  },
00:18:51.290  "cntlid": 35,
00:18:51.290  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:18:51.290  "listen_address": {
00:18:51.290  "adrfam": "IPv4",
00:18:51.290  "traddr": "10.0.0.3",
00:18:51.290  "trsvcid": "4420",
00:18:51.290  "trtype": "TCP"
00:18:51.290  },
00:18:51.290  "peer_address": {
00:18:51.290  "adrfam": "IPv4",
00:18:51.290  "traddr": "10.0.0.1",
00:18:51.290  "trsvcid": "52580",
00:18:51.290  "trtype": "TCP"
00:18:51.290  },
00:18:51.290  "qid": 0,
00:18:51.290  "state": "enabled",
00:18:51.290  "thread": "nvmf_tgt_poll_group_000"
00:18:51.290  }
00:18:51.290  ]'
00:18:51.290    19:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:51.290   19:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:18:51.290    19:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:51.290   19:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:18:51.290    19:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:51.550   19:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:51.550   19:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:51.550   19:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:51.809   19:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzFjY2Q5MDJmZTQ0MTc2NmVjNmNlNDg4ODFmODJiZDkwdlvM: --dhchap-ctrl-secret DHHC-1:02:OGQ2YzEzMWZmNzkwZWRiNjQ3ODY5ZGVlNWEyY2ZhMjc3NDZiMjllMmQ4NTI4NmEws3c3BA==:
00:18:51.809   19:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid 8f716199-a3ae-4f70-9a3a-0556e5b7497a -l 0 --dhchap-secret DHHC-1:01:MzFjY2Q5MDJmZTQ0MTc2NmVjNmNlNDg4ODFmODJiZDkwdlvM: --dhchap-ctrl-secret DHHC-1:02:OGQ2YzEzMWZmNzkwZWRiNjQ3ODY5ZGVlNWEyY2ZhMjc3NDZiMjllMmQ4NTI4NmEws3c3BA==:
00:18:52.376   19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:52.376  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:52.376   19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:18:52.376   19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:52.376   19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:52.376   19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:52.376   19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:52.376   19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:18:52.376   19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:18:52.634   19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2
00:18:52.634   19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:52.634   19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:18:52.634   19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:18:52.634   19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:18:52.634   19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:52.635   19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:52.635   19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:52.635   19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:52.635   19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:52.635   19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:52.635   19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:52.635   19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:53.202  
00:18:53.202    19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:53.202    19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:53.202    19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:53.465   19:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:53.465    19:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:53.465    19:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:53.465    19:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:53.465    19:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:53.465   19:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:53.465  {
00:18:53.465  "auth": {
00:18:53.465  "dhgroup": "ffdhe6144",
00:18:53.465  "digest": "sha256",
00:18:53.465  "state": "completed"
00:18:53.465  },
00:18:53.465  "cntlid": 37,
00:18:53.465  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:18:53.465  "listen_address": {
00:18:53.465  "adrfam": "IPv4",
00:18:53.465  "traddr": "10.0.0.3",
00:18:53.465  "trsvcid": "4420",
00:18:53.465  "trtype": "TCP"
00:18:53.465  },
00:18:53.465  "peer_address": {
00:18:53.465  "adrfam": "IPv4",
00:18:53.465  "traddr": "10.0.0.1",
00:18:53.465  "trsvcid": "52596",
00:18:53.465  "trtype": "TCP"
00:18:53.465  },
00:18:53.465  "qid": 0,
00:18:53.465  "state": "enabled",
00:18:53.465  "thread": "nvmf_tgt_poll_group_000"
00:18:53.465  }
00:18:53.465  ]'
00:18:53.465    19:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:53.465   19:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:18:53.465    19:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:53.465   19:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:18:53.465    19:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:53.465   19:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:53.465   19:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:53.466   19:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:53.725   19:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzE5ZDMyY2Y1NTY0MGI2Nzk5ZWVjMjBiYmYyODllZWZkNTc1YTkzMGEwZWJkZDQ1YLrcsA==: --dhchap-ctrl-secret DHHC-1:01:N2Q3NGYzMDMzZWMzZDg1ZjhiYjRjMzg5Mzc0MzkyZjHYVRrk:
00:18:53.725   19:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid 8f716199-a3ae-4f70-9a3a-0556e5b7497a -l 0 --dhchap-secret DHHC-1:02:MzE5ZDMyY2Y1NTY0MGI2Nzk5ZWVjMjBiYmYyODllZWZkNTc1YTkzMGEwZWJkZDQ1YLrcsA==: --dhchap-ctrl-secret DHHC-1:01:N2Q3NGYzMDMzZWMzZDg1ZjhiYjRjMzg5Mzc0MzkyZjHYVRrk:
00:18:54.297   19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:54.297  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:54.297   19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:18:54.297   19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:54.297   19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:54.297   19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:54.297   19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:54.297   19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:18:54.297   19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:18:54.566   19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3
00:18:54.566   19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:54.566   19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:18:54.566   19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:18:54.566   19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:18:54.566   19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:54.566   19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key3
00:18:54.566   19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:54.566   19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:54.566   19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:54.566   19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:18:54.566   19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:54.566   19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:55.133  
00:18:55.133    19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:55.133    19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:55.133    19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:55.392   19:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:55.392    19:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:55.392    19:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:55.392    19:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:55.392    19:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:55.392   19:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:55.392  {
00:18:55.392  "auth": {
00:18:55.392  "dhgroup": "ffdhe6144",
00:18:55.392  "digest": "sha256",
00:18:55.392  "state": "completed"
00:18:55.392  },
00:18:55.392  "cntlid": 39,
00:18:55.392  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:18:55.392  "listen_address": {
00:18:55.392  "adrfam": "IPv4",
00:18:55.392  "traddr": "10.0.0.3",
00:18:55.392  "trsvcid": "4420",
00:18:55.392  "trtype": "TCP"
00:18:55.392  },
00:18:55.392  "peer_address": {
00:18:55.392  "adrfam": "IPv4",
00:18:55.392  "traddr": "10.0.0.1",
00:18:55.392  "trsvcid": "52628",
00:18:55.392  "trtype": "TCP"
00:18:55.392  },
00:18:55.392  "qid": 0,
00:18:55.392  "state": "enabled",
00:18:55.392  "thread": "nvmf_tgt_poll_group_000"
00:18:55.392  }
00:18:55.392  ]'
00:18:55.392    19:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:55.392   19:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:18:55.392    19:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:55.651   19:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:18:55.651    19:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:55.651   19:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:55.651   19:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:55.651   19:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:55.910   19:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjJkMDI3MTYwMzk0ZjM0Mzg1YTMxYTUxZDM0MmJmM2VkZmM2MWE2NzAyODM5NTY3NDM3YzQ3YWUzYjc2ZjJkOMNOdLE=:
00:18:55.910   19:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid 8f716199-a3ae-4f70-9a3a-0556e5b7497a -l 0 --dhchap-secret DHHC-1:03:YjJkMDI3MTYwMzk0ZjM0Mzg1YTMxYTUxZDM0MmJmM2VkZmM2MWE2NzAyODM5NTY3NDM3YzQ3YWUzYjc2ZjJkOMNOdLE=:
00:18:56.478   19:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:56.478  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:56.478   19:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:18:56.478   19:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:56.478   19:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:56.478   19:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:56.478   19:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:18:56.478   19:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:56.478   19:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:18:56.478   19:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:18:56.737   19:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0
00:18:56.737   19:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:56.737   19:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:18:56.737   19:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:18:56.737   19:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:18:56.737   19:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:56.737   19:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:56.737   19:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:56.737   19:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:56.737   19:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:56.737   19:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:56.737   19:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:56.737   19:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:57.304  
00:18:57.562    19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:57.562    19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:57.562    19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:57.821   19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:57.821    19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:57.821    19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:57.821    19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:57.821    19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:57.821   19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:57.821  {
00:18:57.821  "auth": {
00:18:57.821  "dhgroup": "ffdhe8192",
00:18:57.821  "digest": "sha256",
00:18:57.821  "state": "completed"
00:18:57.821  },
00:18:57.821  "cntlid": 41,
00:18:57.821  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:18:57.821  "listen_address": {
00:18:57.821  "adrfam": "IPv4",
00:18:57.821  "traddr": "10.0.0.3",
00:18:57.821  "trsvcid": "4420",
00:18:57.821  "trtype": "TCP"
00:18:57.821  },
00:18:57.821  "peer_address": {
00:18:57.821  "adrfam": "IPv4",
00:18:57.821  "traddr": "10.0.0.1",
00:18:57.821  "trsvcid": "51170",
00:18:57.821  "trtype": "TCP"
00:18:57.821  },
00:18:57.821  "qid": 0,
00:18:57.821  "state": "enabled",
00:18:57.821  "thread": "nvmf_tgt_poll_group_000"
00:18:57.821  }
00:18:57.821  ]'
00:18:57.821    19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:57.821   19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:18:57.821    19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:57.821   19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:18:57.821    19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:57.821   19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:57.821   19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:57.821   19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:58.080   19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGE3YjViYjg1ZTlkZDBmOWYzZDNkZjcxY2U3ZmU4YTkwN2FhODA4ZjNjYzZhYzQ26pGyPg==: --dhchap-ctrl-secret DHHC-1:03:YTMxZDZiM2E3ODg3ZjEyZDhlMzIwZDdjMzk1MDM2N2EyMzQ4ZTU3YTIyOGJiYmFjMDczZmM3MWU0YTQzMWFjYtOtav0=:
00:18:58.080   19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid 8f716199-a3ae-4f70-9a3a-0556e5b7497a -l 0 --dhchap-secret DHHC-1:00:ZGE3YjViYjg1ZTlkZDBmOWYzZDNkZjcxY2U3ZmU4YTkwN2FhODA4ZjNjYzZhYzQ26pGyPg==: --dhchap-ctrl-secret DHHC-1:03:YTMxZDZiM2E3ODg3ZjEyZDhlMzIwZDdjMzk1MDM2N2EyMzQ4ZTU3YTIyOGJiYmFjMDczZmM3MWU0YTQzMWFjYtOtav0=:
00:18:58.646   19:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:58.646  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:58.646   19:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:18:58.646   19:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:58.646   19:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:58.646   19:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:58.646   19:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:58.646   19:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:18:58.646   19:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:18:58.905   19:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1
00:18:58.905   19:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:58.905   19:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:18:58.905   19:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:18:58.905   19:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:18:58.905   19:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:58.905   19:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:58.905   19:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:58.905   19:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:59.164   19:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:59.164   19:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:59.164   19:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:59.164   19:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:59.732  
00:18:59.732    19:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:59.732    19:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:59.732    19:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:59.991   19:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:59.991    19:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:59.991    19:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:59.991    19:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:59.991    19:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:59.991   19:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:59.991  {
00:18:59.991  "auth": {
00:18:59.991  "dhgroup": "ffdhe8192",
00:18:59.991  "digest": "sha256",
00:18:59.991  "state": "completed"
00:18:59.991  },
00:18:59.991  "cntlid": 43,
00:18:59.991  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:18:59.991  "listen_address": {
00:18:59.991  "adrfam": "IPv4",
00:18:59.991  "traddr": "10.0.0.3",
00:18:59.991  "trsvcid": "4420",
00:18:59.991  "trtype": "TCP"
00:18:59.991  },
00:18:59.991  "peer_address": {
00:18:59.991  "adrfam": "IPv4",
00:18:59.991  "traddr": "10.0.0.1",
00:18:59.991  "trsvcid": "51200",
00:18:59.991  "trtype": "TCP"
00:18:59.991  },
00:18:59.991  "qid": 0,
00:18:59.991  "state": "enabled",
00:18:59.991  "thread": "nvmf_tgt_poll_group_000"
00:18:59.991  }
00:18:59.991  ]'
00:18:59.991    19:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:59.991   19:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:18:59.991    19:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:59.991   19:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:18:59.991    19:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:59.991   19:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:59.991   19:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:59.992   19:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:00.559   19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzFjY2Q5MDJmZTQ0MTc2NmVjNmNlNDg4ODFmODJiZDkwdlvM: --dhchap-ctrl-secret DHHC-1:02:OGQ2YzEzMWZmNzkwZWRiNjQ3ODY5ZGVlNWEyY2ZhMjc3NDZiMjllMmQ4NTI4NmEws3c3BA==:
00:19:00.559   19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid 8f716199-a3ae-4f70-9a3a-0556e5b7497a -l 0 --dhchap-secret DHHC-1:01:MzFjY2Q5MDJmZTQ0MTc2NmVjNmNlNDg4ODFmODJiZDkwdlvM: --dhchap-ctrl-secret DHHC-1:02:OGQ2YzEzMWZmNzkwZWRiNjQ3ODY5ZGVlNWEyY2ZhMjc3NDZiMjllMmQ4NTI4NmEws3c3BA==:
00:19:01.127   19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:01.127  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:01.127   19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:19:01.127   19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:01.127   19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:01.127   19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:01.127   19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:01.127   19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:19:01.127   19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:19:01.386   19:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2
00:19:01.386   19:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:01.386   19:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:19:01.386   19:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:19:01.386   19:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:19:01.386   19:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:01.386   19:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:01.386   19:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:01.386   19:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:01.386   19:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:01.386   19:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:01.386   19:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:01.386   19:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:01.954  
00:19:01.954    19:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:01.954    19:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:01.954    19:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:02.212   19:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:02.212    19:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:02.212    19:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:02.212    19:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:02.212    19:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:02.212   19:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:02.212  {
00:19:02.212  "auth": {
00:19:02.212  "dhgroup": "ffdhe8192",
00:19:02.212  "digest": "sha256",
00:19:02.212  "state": "completed"
00:19:02.212  },
00:19:02.212  "cntlid": 45,
00:19:02.212  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:19:02.212  "listen_address": {
00:19:02.212  "adrfam": "IPv4",
00:19:02.212  "traddr": "10.0.0.3",
00:19:02.212  "trsvcid": "4420",
00:19:02.212  "trtype": "TCP"
00:19:02.212  },
00:19:02.212  "peer_address": {
00:19:02.212  "adrfam": "IPv4",
00:19:02.212  "traddr": "10.0.0.1",
00:19:02.212  "trsvcid": "51216",
00:19:02.212  "trtype": "TCP"
00:19:02.212  },
00:19:02.212  "qid": 0,
00:19:02.212  "state": "enabled",
00:19:02.212  "thread": "nvmf_tgt_poll_group_000"
00:19:02.212  }
00:19:02.212  ]'
00:19:02.212    19:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:02.212   19:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:19:02.212    19:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:02.212   19:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:19:02.212    19:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:02.471   19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:02.471   19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:02.471   19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:02.730   19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzE5ZDMyY2Y1NTY0MGI2Nzk5ZWVjMjBiYmYyODllZWZkNTc1YTkzMGEwZWJkZDQ1YLrcsA==: --dhchap-ctrl-secret DHHC-1:01:N2Q3NGYzMDMzZWMzZDg1ZjhiYjRjMzg5Mzc0MzkyZjHYVRrk:
00:19:02.731   19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid 8f716199-a3ae-4f70-9a3a-0556e5b7497a -l 0 --dhchap-secret DHHC-1:02:MzE5ZDMyY2Y1NTY0MGI2Nzk5ZWVjMjBiYmYyODllZWZkNTc1YTkzMGEwZWJkZDQ1YLrcsA==: --dhchap-ctrl-secret DHHC-1:01:N2Q3NGYzMDMzZWMzZDg1ZjhiYjRjMzg5Mzc0MzkyZjHYVRrk:
00:19:03.298   19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:03.298  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:03.298   19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:19:03.298   19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:03.298   19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:03.298   19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:03.298   19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:03.298   19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:19:03.298   19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:19:03.556   19:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3
00:19:03.556   19:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:03.556   19:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:19:03.556   19:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:19:03.556   19:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:19:03.556   19:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:03.556   19:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key3
00:19:03.556   19:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:03.556   19:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:03.556   19:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:03.556   19:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:19:03.556   19:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:19:03.556   19:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:19:04.123  
00:19:04.123    19:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:04.123    19:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:04.123    19:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:04.382   19:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:04.382    19:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:04.382    19:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:04.382    19:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:04.382    19:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:04.382   19:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:04.382  {
00:19:04.382  "auth": {
00:19:04.382  "dhgroup": "ffdhe8192",
00:19:04.382  "digest": "sha256",
00:19:04.382  "state": "completed"
00:19:04.382  },
00:19:04.382  "cntlid": 47,
00:19:04.382  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:19:04.382  "listen_address": {
00:19:04.382  "adrfam": "IPv4",
00:19:04.382  "traddr": "10.0.0.3",
00:19:04.382  "trsvcid": "4420",
00:19:04.382  "trtype": "TCP"
00:19:04.382  },
00:19:04.382  "peer_address": {
00:19:04.382  "adrfam": "IPv4",
00:19:04.382  "traddr": "10.0.0.1",
00:19:04.382  "trsvcid": "51234",
00:19:04.382  "trtype": "TCP"
00:19:04.382  },
00:19:04.382  "qid": 0,
00:19:04.382  "state": "enabled",
00:19:04.382  "thread": "nvmf_tgt_poll_group_000"
00:19:04.382  }
00:19:04.382  ]'
00:19:04.382    19:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:04.382   19:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:19:04.382    19:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:04.382   19:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:19:04.382    19:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:04.382   19:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:04.382   19:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:04.382   19:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:04.641   19:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjJkMDI3MTYwMzk0ZjM0Mzg1YTMxYTUxZDM0MmJmM2VkZmM2MWE2NzAyODM5NTY3NDM3YzQ3YWUzYjc2ZjJkOMNOdLE=:
00:19:04.641   19:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid 8f716199-a3ae-4f70-9a3a-0556e5b7497a -l 0 --dhchap-secret DHHC-1:03:YjJkMDI3MTYwMzk0ZjM0Mzg1YTMxYTUxZDM0MmJmM2VkZmM2MWE2NzAyODM5NTY3NDM3YzQ3YWUzYjc2ZjJkOMNOdLE=:
00:19:05.209   19:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:05.209  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:05.209   19:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:19:05.209   19:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:05.209   19:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:05.209   19:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:05.209   19:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}"
00:19:05.209   19:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:19:05.209   19:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:05.209   19:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:19:05.209   19:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:19:05.468   19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0
00:19:05.468   19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:05.468   19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:19:05.468   19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:19:05.468   19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:19:05.468   19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:05.468   19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:05.468   19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:05.468   19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:05.468   19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:05.468   19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:05.468   19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:05.468   19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:06.043  
00:19:06.043    19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:06.043    19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:06.043    19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:06.314   19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:06.314    19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:06.314    19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:06.314    19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:06.314    19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:06.314   19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:06.314  {
00:19:06.314  "auth": {
00:19:06.314  "dhgroup": "null",
00:19:06.314  "digest": "sha384",
00:19:06.314  "state": "completed"
00:19:06.314  },
00:19:06.314  "cntlid": 49,
00:19:06.314  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:19:06.314  "listen_address": {
00:19:06.314  "adrfam": "IPv4",
00:19:06.314  "traddr": "10.0.0.3",
00:19:06.314  "trsvcid": "4420",
00:19:06.314  "trtype": "TCP"
00:19:06.314  },
00:19:06.314  "peer_address": {
00:19:06.314  "adrfam": "IPv4",
00:19:06.314  "traddr": "10.0.0.1",
00:19:06.314  "trsvcid": "47310",
00:19:06.314  "trtype": "TCP"
00:19:06.314  },
00:19:06.314  "qid": 0,
00:19:06.314  "state": "enabled",
00:19:06.314  "thread": "nvmf_tgt_poll_group_000"
00:19:06.314  }
00:19:06.314  ]'
00:19:06.314    19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:06.314   19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:06.314    19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:06.314   19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:19:06.314    19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:06.314   19:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:06.314   19:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:06.314   19:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:06.572   19:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGE3YjViYjg1ZTlkZDBmOWYzZDNkZjcxY2U3ZmU4YTkwN2FhODA4ZjNjYzZhYzQ26pGyPg==: --dhchap-ctrl-secret DHHC-1:03:YTMxZDZiM2E3ODg3ZjEyZDhlMzIwZDdjMzk1MDM2N2EyMzQ4ZTU3YTIyOGJiYmFjMDczZmM3MWU0YTQzMWFjYtOtav0=:
00:19:06.573   19:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid 8f716199-a3ae-4f70-9a3a-0556e5b7497a -l 0 --dhchap-secret DHHC-1:00:ZGE3YjViYjg1ZTlkZDBmOWYzZDNkZjcxY2U3ZmU4YTkwN2FhODA4ZjNjYzZhYzQ26pGyPg==: --dhchap-ctrl-secret DHHC-1:03:YTMxZDZiM2E3ODg3ZjEyZDhlMzIwZDdjMzk1MDM2N2EyMzQ4ZTU3YTIyOGJiYmFjMDczZmM3MWU0YTQzMWFjYtOtav0=:
00:19:07.140   19:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:07.140  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:07.140   19:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:19:07.140   19:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:07.140   19:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:07.140   19:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:07.140   19:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:07.140   19:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:19:07.140   19:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:19:07.708   19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1
00:19:07.708   19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:07.708   19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:19:07.708   19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:19:07.708   19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:19:07.708   19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:07.708   19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:07.708   19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:07.708   19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:07.708   19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:07.708   19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:07.708   19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:07.708   19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:07.967  
00:19:07.967    19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:07.967    19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:07.967    19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:08.226   19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:08.226    19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:08.226    19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:08.226    19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:08.226    19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:08.226   19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:08.226  {
00:19:08.226  "auth": {
00:19:08.226  "dhgroup": "null",
00:19:08.226  "digest": "sha384",
00:19:08.226  "state": "completed"
00:19:08.226  },
00:19:08.226  "cntlid": 51,
00:19:08.226  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:19:08.226  "listen_address": {
00:19:08.226  "adrfam": "IPv4",
00:19:08.226  "traddr": "10.0.0.3",
00:19:08.226  "trsvcid": "4420",
00:19:08.226  "trtype": "TCP"
00:19:08.226  },
00:19:08.226  "peer_address": {
00:19:08.226  "adrfam": "IPv4",
00:19:08.226  "traddr": "10.0.0.1",
00:19:08.226  "trsvcid": "47340",
00:19:08.226  "trtype": "TCP"
00:19:08.226  },
00:19:08.226  "qid": 0,
00:19:08.226  "state": "enabled",
00:19:08.226  "thread": "nvmf_tgt_poll_group_000"
00:19:08.226  }
00:19:08.226  ]'
00:19:08.226    19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:08.226   19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:08.226    19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:08.226   19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:19:08.226    19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:08.226   19:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:08.226   19:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:08.226   19:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:08.484   19:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzFjY2Q5MDJmZTQ0MTc2NmVjNmNlNDg4ODFmODJiZDkwdlvM: --dhchap-ctrl-secret DHHC-1:02:OGQ2YzEzMWZmNzkwZWRiNjQ3ODY5ZGVlNWEyY2ZhMjc3NDZiMjllMmQ4NTI4NmEws3c3BA==:
00:19:08.485   19:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid 8f716199-a3ae-4f70-9a3a-0556e5b7497a -l 0 --dhchap-secret DHHC-1:01:MzFjY2Q5MDJmZTQ0MTc2NmVjNmNlNDg4ODFmODJiZDkwdlvM: --dhchap-ctrl-secret DHHC-1:02:OGQ2YzEzMWZmNzkwZWRiNjQ3ODY5ZGVlNWEyY2ZhMjc3NDZiMjllMmQ4NTI4NmEws3c3BA==:
00:19:09.420   19:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:09.420  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:09.420   19:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:19:09.420   19:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:09.420   19:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:09.420   19:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:09.420   19:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:09.420   19:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:19:09.420   19:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:19:09.420   19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2
00:19:09.420   19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:09.420   19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:19:09.420   19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:19:09.420   19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:19:09.420   19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:09.420   19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:09.420   19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:09.420   19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:09.420   19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:09.420   19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:09.420   19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:09.420   19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:09.987  
00:19:09.987    19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:09.987    19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:09.987    19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:10.246   19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:10.246    19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:10.246    19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:10.246    19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:10.246    19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:10.246   19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:10.246  {
00:19:10.246  "auth": {
00:19:10.246  "dhgroup": "null",
00:19:10.246  "digest": "sha384",
00:19:10.246  "state": "completed"
00:19:10.246  },
00:19:10.246  "cntlid": 53,
00:19:10.246  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:19:10.246  "listen_address": {
00:19:10.246  "adrfam": "IPv4",
00:19:10.246  "traddr": "10.0.0.3",
00:19:10.246  "trsvcid": "4420",
00:19:10.246  "trtype": "TCP"
00:19:10.246  },
00:19:10.247  "peer_address": {
00:19:10.247  "adrfam": "IPv4",
00:19:10.247  "traddr": "10.0.0.1",
00:19:10.247  "trsvcid": "47372",
00:19:10.247  "trtype": "TCP"
00:19:10.247  },
00:19:10.247  "qid": 0,
00:19:10.247  "state": "enabled",
00:19:10.247  "thread": "nvmf_tgt_poll_group_000"
00:19:10.247  }
00:19:10.247  ]'
00:19:10.247    19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:10.247   19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:10.247    19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:10.247   19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:19:10.247    19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:10.247   19:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:10.247   19:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:10.247   19:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:10.505   19:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzE5ZDMyY2Y1NTY0MGI2Nzk5ZWVjMjBiYmYyODllZWZkNTc1YTkzMGEwZWJkZDQ1YLrcsA==: --dhchap-ctrl-secret DHHC-1:01:N2Q3NGYzMDMzZWMzZDg1ZjhiYjRjMzg5Mzc0MzkyZjHYVRrk:
00:19:10.505   19:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid 8f716199-a3ae-4f70-9a3a-0556e5b7497a -l 0 --dhchap-secret DHHC-1:02:MzE5ZDMyY2Y1NTY0MGI2Nzk5ZWVjMjBiYmYyODllZWZkNTc1YTkzMGEwZWJkZDQ1YLrcsA==: --dhchap-ctrl-secret DHHC-1:01:N2Q3NGYzMDMzZWMzZDg1ZjhiYjRjMzg5Mzc0MzkyZjHYVRrk:
00:19:11.442   19:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:11.442  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:11.442   19:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:19:11.442   19:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:11.442   19:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:11.442   19:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:11.442   19:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:11.442   19:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:19:11.442   19:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:19:11.442   19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3
00:19:11.442   19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:11.442   19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:19:11.442   19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:19:11.442   19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:19:11.442   19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:11.442   19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key3
00:19:11.442   19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:11.442   19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:11.442   19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:11.442   19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:19:11.442   19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:19:11.442   19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:19:11.701  
00:19:11.701    19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:11.701    19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:11.701    19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:11.959   19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:11.959    19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:11.959    19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:11.959    19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:12.218    19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:12.218   19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:12.218  {
00:19:12.218  "auth": {
00:19:12.218  "dhgroup": "null",
00:19:12.218  "digest": "sha384",
00:19:12.218  "state": "completed"
00:19:12.218  },
00:19:12.218  "cntlid": 55,
00:19:12.218  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:19:12.218  "listen_address": {
00:19:12.218  "adrfam": "IPv4",
00:19:12.218  "traddr": "10.0.0.3",
00:19:12.218  "trsvcid": "4420",
00:19:12.218  "trtype": "TCP"
00:19:12.218  },
00:19:12.218  "peer_address": {
00:19:12.218  "adrfam": "IPv4",
00:19:12.218  "traddr": "10.0.0.1",
00:19:12.218  "trsvcid": "47398",
00:19:12.218  "trtype": "TCP"
00:19:12.218  },
00:19:12.218  "qid": 0,
00:19:12.218  "state": "enabled",
00:19:12.218  "thread": "nvmf_tgt_poll_group_000"
00:19:12.218  }
00:19:12.218  ]'
00:19:12.218    19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:12.218   19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:12.218    19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:12.218   19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:19:12.218    19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:12.218   19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:12.218   19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:12.218   19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:12.476   19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjJkMDI3MTYwMzk0ZjM0Mzg1YTMxYTUxZDM0MmJmM2VkZmM2MWE2NzAyODM5NTY3NDM3YzQ3YWUzYjc2ZjJkOMNOdLE=:
00:19:12.476   19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid 8f716199-a3ae-4f70-9a3a-0556e5b7497a -l 0 --dhchap-secret DHHC-1:03:YjJkMDI3MTYwMzk0ZjM0Mzg1YTMxYTUxZDM0MmJmM2VkZmM2MWE2NzAyODM5NTY3NDM3YzQ3YWUzYjc2ZjJkOMNOdLE=:
00:19:13.043   19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:13.043  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:13.043   19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:19:13.043   19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:13.043   19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:13.043   19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:13.043   19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:19:13.043   19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:13.043   19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:19:13.043   19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:19:13.301   19:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0
00:19:13.301   19:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:13.301   19:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:19:13.301   19:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:19:13.301   19:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:19:13.301   19:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:13.301   19:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:13.301   19:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:13.301   19:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:13.301   19:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:13.301   19:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:13.301   19:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:13.301   19:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:13.559  
00:19:13.559    19:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:13.559    19:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:13.559    19:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:14.125   19:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:14.125    19:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:14.125    19:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:14.125    19:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:14.125    19:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:14.125   19:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:14.125  {
00:19:14.125  "auth": {
00:19:14.125  "dhgroup": "ffdhe2048",
00:19:14.125  "digest": "sha384",
00:19:14.125  "state": "completed"
00:19:14.125  },
00:19:14.125  "cntlid": 57,
00:19:14.125  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:19:14.125  "listen_address": {
00:19:14.125  "adrfam": "IPv4",
00:19:14.125  "traddr": "10.0.0.3",
00:19:14.125  "trsvcid": "4420",
00:19:14.125  "trtype": "TCP"
00:19:14.125  },
00:19:14.125  "peer_address": {
00:19:14.125  "adrfam": "IPv4",
00:19:14.125  "traddr": "10.0.0.1",
00:19:14.125  "trsvcid": "47434",
00:19:14.125  "trtype": "TCP"
00:19:14.125  },
00:19:14.125  "qid": 0,
00:19:14.125  "state": "enabled",
00:19:14.125  "thread": "nvmf_tgt_poll_group_000"
00:19:14.125  }
00:19:14.125  ]'
00:19:14.125    19:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:14.125   19:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:14.125    19:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:14.125   19:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:19:14.125    19:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:14.125   19:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:14.125   19:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:14.125   19:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:14.383   19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGE3YjViYjg1ZTlkZDBmOWYzZDNkZjcxY2U3ZmU4YTkwN2FhODA4ZjNjYzZhYzQ26pGyPg==: --dhchap-ctrl-secret DHHC-1:03:YTMxZDZiM2E3ODg3ZjEyZDhlMzIwZDdjMzk1MDM2N2EyMzQ4ZTU3YTIyOGJiYmFjMDczZmM3MWU0YTQzMWFjYtOtav0=:
00:19:14.383   19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid 8f716199-a3ae-4f70-9a3a-0556e5b7497a -l 0 --dhchap-secret DHHC-1:00:ZGE3YjViYjg1ZTlkZDBmOWYzZDNkZjcxY2U3ZmU4YTkwN2FhODA4ZjNjYzZhYzQ26pGyPg==: --dhchap-ctrl-secret DHHC-1:03:YTMxZDZiM2E3ODg3ZjEyZDhlMzIwZDdjMzk1MDM2N2EyMzQ4ZTU3YTIyOGJiYmFjMDczZmM3MWU0YTQzMWFjYtOtav0=:
00:19:14.950   19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:14.950  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:14.950   19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:19:14.950   19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:14.950   19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:14.950   19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:14.950   19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:14.950   19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:19:14.950   19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:19:15.209   19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1
00:19:15.209   19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:15.209   19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:19:15.209   19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:19:15.209   19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:19:15.209   19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:15.209   19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:15.209   19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:15.209   19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:15.209   19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:15.209   19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:15.209   19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:15.209   19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:15.775  
00:19:15.775    19:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:15.775    19:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:15.775    19:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:16.033   19:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:16.033    19:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:16.033    19:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:16.033    19:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:16.033    19:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:16.033   19:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:16.033  {
00:19:16.033  "auth": {
00:19:16.033  "dhgroup": "ffdhe2048",
00:19:16.033  "digest": "sha384",
00:19:16.033  "state": "completed"
00:19:16.033  },
00:19:16.033  "cntlid": 59,
00:19:16.033  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:19:16.033  "listen_address": {
00:19:16.033  "adrfam": "IPv4",
00:19:16.033  "traddr": "10.0.0.3",
00:19:16.033  "trsvcid": "4420",
00:19:16.033  "trtype": "TCP"
00:19:16.033  },
00:19:16.033  "peer_address": {
00:19:16.033  "adrfam": "IPv4",
00:19:16.033  "traddr": "10.0.0.1",
00:19:16.033  "trsvcid": "44516",
00:19:16.033  "trtype": "TCP"
00:19:16.033  },
00:19:16.033  "qid": 0,
00:19:16.033  "state": "enabled",
00:19:16.033  "thread": "nvmf_tgt_poll_group_000"
00:19:16.033  }
00:19:16.033  ]'
00:19:16.033    19:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:16.033   19:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:16.033    19:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:16.033   19:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:19:16.033    19:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:16.033   19:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:16.033   19:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:16.033   19:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:16.292   19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzFjY2Q5MDJmZTQ0MTc2NmVjNmNlNDg4ODFmODJiZDkwdlvM: --dhchap-ctrl-secret DHHC-1:02:OGQ2YzEzMWZmNzkwZWRiNjQ3ODY5ZGVlNWEyY2ZhMjc3NDZiMjllMmQ4NTI4NmEws3c3BA==:
00:19:16.292   19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid 8f716199-a3ae-4f70-9a3a-0556e5b7497a -l 0 --dhchap-secret DHHC-1:01:MzFjY2Q5MDJmZTQ0MTc2NmVjNmNlNDg4ODFmODJiZDkwdlvM: --dhchap-ctrl-secret DHHC-1:02:OGQ2YzEzMWZmNzkwZWRiNjQ3ODY5ZGVlNWEyY2ZhMjc3NDZiMjllMmQ4NTI4NmEws3c3BA==:
00:19:16.857   19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:16.857  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:16.858   19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:19:16.858   19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:16.858   19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:16.858   19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:16.858   19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:16.858   19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:19:16.858   19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:19:17.117   19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2
00:19:17.117   19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:17.117   19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:19:17.117   19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:19:17.117   19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:19:17.117   19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:17.117   19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:17.117   19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:17.117   19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:17.386   19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:17.387   19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:17.387   19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:17.387   19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:17.646  
00:19:17.646    19:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:17.646    19:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:17.646    19:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:17.904   19:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:17.904    19:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:17.904    19:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:17.904    19:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:17.904    19:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:17.904   19:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:17.904  {
00:19:17.904  "auth": {
00:19:17.904  "dhgroup": "ffdhe2048",
00:19:17.904  "digest": "sha384",
00:19:17.904  "state": "completed"
00:19:17.904  },
00:19:17.904  "cntlid": 61,
00:19:17.904  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:19:17.904  "listen_address": {
00:19:17.904  "adrfam": "IPv4",
00:19:17.904  "traddr": "10.0.0.3",
00:19:17.904  "trsvcid": "4420",
00:19:17.904  "trtype": "TCP"
00:19:17.904  },
00:19:17.904  "peer_address": {
00:19:17.904  "adrfam": "IPv4",
00:19:17.904  "traddr": "10.0.0.1",
00:19:17.904  "trsvcid": "44538",
00:19:17.904  "trtype": "TCP"
00:19:17.904  },
00:19:17.904  "qid": 0,
00:19:17.904  "state": "enabled",
00:19:17.904  "thread": "nvmf_tgt_poll_group_000"
00:19:17.904  }
00:19:17.904  ]'
00:19:17.904    19:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:17.904   19:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:17.904    19:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:17.904   19:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:19:17.904    19:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:17.904   19:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
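
Verification then happens on the target side: rpc_cmd talks to the target's default RPC socket while hostrpc always goes through /var/tmp/host.sock, and the qpair dump above is picked apart with jq to confirm that the connection really negotiated the digest and DH group configured for this iteration and that authentication reached the completed state. Run by hand, the same checks could look like the sketch below, which reuses the jq paths from the trace but adds jq's -e flag so a mismatch fails the command:

    qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

    # Each check exits non-zero if the negotiated value is not the expected one.
    jq -e '.[0].auth.digest  == "sha384"'    <<< "$qpairs"
    jq -e '.[0].auth.dhgroup == "ffdhe2048"' <<< "$qpairs"
    jq -e '.[0].auth.state   == "completed"' <<< "$qpairs"
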
00:19:17.904   19:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:17.904   19:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:18.169   19:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzE5ZDMyY2Y1NTY0MGI2Nzk5ZWVjMjBiYmYyODllZWZkNTc1YTkzMGEwZWJkZDQ1YLrcsA==: --dhchap-ctrl-secret DHHC-1:01:N2Q3NGYzMDMzZWMzZDg1ZjhiYjRjMzg5Mzc0MzkyZjHYVRrk:
00:19:18.169   19:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid 8f716199-a3ae-4f70-9a3a-0556e5b7497a -l 0 --dhchap-secret DHHC-1:02:MzE5ZDMyY2Y1NTY0MGI2Nzk5ZWVjMjBiYmYyODllZWZkNTc1YTkzMGEwZWJkZDQ1YLrcsA==: --dhchap-ctrl-secret DHHC-1:01:N2Q3NGYzMDMzZWMzZDg1ZjhiYjRjMzg5Mzc0MzkyZjHYVRrk:
00:19:19.127   19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:19.127  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:19.127   19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:19:19.127   19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:19.127   19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:19.127   19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
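
Each round then finishes on the kernel initiator: the SPDK host controller is detached, nvme connect repeats the authenticated connect from the Linux host using the literal DHHC-1 secrets, and the host entry is removed from the subsystem so the next combination starts clean. A condensed sketch of that tail end, with $hostnqn/$hostid standing in for the UUID values in the trace and the base64 secret bodies abbreviated (the full strings appear in the lines above):

    # Detach the SPDK host controller before handing the target to the
    # kernel initiator.
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

    # Kernel path: same subsystem and host NQN, but the secrets are passed
    # in their literal DHHC-1 text form rather than as registered key names.
    nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "$hostid" -l 0 \
        --dhchap-secret 'DHHC-1:02:...==:' --dhchap-ctrl-secret 'DHHC-1:01:...:'

    nvme disconnect -n nqn.2024-03.io.spdk:cnode0

    # Drop the host entry so the next digest/DH-group/key combination starts
    # from a clean subsystem.
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"
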
00:19:19.127   19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:19.127   19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:19:19.127   19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
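
Before every connect_authenticate call the host driver's options are narrowed to exactly one digest and one DH group, so a successful connect demonstrates that specific combination end to end rather than a silent fallback to some other parameter set. The inner loop is roughly the following (a sketch modelled on the trace; keys is the array of key names auth.sh registers during setup):

    for keyid in "${!keys[@]}"; do
        # Allow only one digest/DH-group combination on the host for this pass.
        scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
            --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048

        connect_authenticate sha384 ffdhe2048 "$keyid"
    done
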
00:19:19.385   19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3
00:19:19.385   19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:19.385   19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:19:19.385   19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:19:19.385   19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:19:19.385   19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:19.385   19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key3
00:19:19.385   19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:19.385   19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:19.385   19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
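
Unlike the earlier keys, this key3 round passes no --dhchap-ctrlr-key: ckeys[3] is empty in auth.sh, so the ${ckeys[$3]:+...} expansion contributes nothing and the round exercises unidirectional authentication (the host proves itself to the controller, but not the reverse). The conditional-flag idiom in isolation, with subnqn, hostnqn and keyid as stand-in variable names:

    # ckey expands to zero words when ckeys[keyid] is empty (unidirectional
    # authentication), or to the controller-key flag pair otherwise.
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"})

    scripts/rpc.py nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$keyid" "${ckey[@]}"
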
00:19:19.386   19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:19:19.386   19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:19:19.386   19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:19:19.644  
00:19:19.644    19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:19.644    19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:19.644    19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:19.902   19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:19.902    19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:19.902    19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:19.902    19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:19.902    19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:19.902   19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:19.902  {
00:19:19.902  "auth": {
00:19:19.902  "dhgroup": "ffdhe2048",
00:19:19.902  "digest": "sha384",
00:19:19.902  "state": "completed"
00:19:19.902  },
00:19:19.902  "cntlid": 63,
00:19:19.902  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:19:19.902  "listen_address": {
00:19:19.902  "adrfam": "IPv4",
00:19:19.902  "traddr": "10.0.0.3",
00:19:19.902  "trsvcid": "4420",
00:19:19.902  "trtype": "TCP"
00:19:19.903  },
00:19:19.903  "peer_address": {
00:19:19.903  "adrfam": "IPv4",
00:19:19.903  "traddr": "10.0.0.1",
00:19:19.903  "trsvcid": "44554",
00:19:19.903  "trtype": "TCP"
00:19:19.903  },
00:19:19.903  "qid": 0,
00:19:19.903  "state": "enabled",
00:19:19.903  "thread": "nvmf_tgt_poll_group_000"
00:19:19.903  }
00:19:19.903  ]'
00:19:19.903    19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:19.903   19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:19.903    19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:19.903   19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:19:19.903    19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:20.161   19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:20.161   19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:20.161   19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:20.421   19:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjJkMDI3MTYwMzk0ZjM0Mzg1YTMxYTUxZDM0MmJmM2VkZmM2MWE2NzAyODM5NTY3NDM3YzQ3YWUzYjc2ZjJkOMNOdLE=:
00:19:20.421   19:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid 8f716199-a3ae-4f70-9a3a-0556e5b7497a -l 0 --dhchap-secret DHHC-1:03:YjJkMDI3MTYwMzk0ZjM0Mzg1YTMxYTUxZDM0MmJmM2VkZmM2MWE2NzAyODM5NTY3NDM3YzQ3YWUzYjc2ZjJkOMNOdLE=:
00:19:20.987   19:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:20.987  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:20.987   19:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:19:20.987   19:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:20.987   19:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:20.987   19:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:20.987   19:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:19:20.987   19:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:20.987   19:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:19:20.987   19:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
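
Here the outer loop has advanced from ffdhe2048 to ffdhe3072 and the whole key0 through key3 sweep repeats; the rest of this trace is the same cycle with only the digest, DH group, and key changing. Pulled together, one cycle amounts to roughly the sketch below. run_auth_cycle is a hypothetical name for illustration, subnqn/hostnqn are stand-ins for the values in the trace, and auth.sh itself spreads these steps across its setup code and connect_authenticate rather than using a single helper:

    run_auth_cycle() {
        local digest=$1 dhgroup=$2 keyid=$3

        # Host driver: allow exactly this digest/DH-group combination.
        scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
            --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

        # Target: allowlist the host NQN with the key under test.
        scripts/rpc.py nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
            --dhchap-key "key$keyid"

        # Host: authenticated connect, then verify what the target negotiated.
        scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
            -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
            -q "$hostnqn" -n "$subnqn" --dhchap-key "key$keyid"
        scripts/rpc.py nvmf_subsystem_get_qpairs "$subnqn" |
            jq -e --arg d "$digest" --arg g "$dhgroup" \
                '.[0].auth == {digest: $d, dhgroup: $g, state: "completed"}'

        # Tear down for the next iteration.
        scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
        scripts/rpc.py nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
    }
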
00:19:21.246   19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0
00:19:21.246   19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:21.246   19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:19:21.246   19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:19:21.246   19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:19:21.246   19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:21.246   19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:21.246   19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:21.246   19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:21.246   19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:21.246   19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:21.246   19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:21.246   19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:21.813  
00:19:21.813    19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:21.813    19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:21.813    19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:22.071   19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:22.071    19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:22.071    19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:22.071    19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:22.071    19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:22.071   19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:22.071  {
00:19:22.071  "auth": {
00:19:22.071  "dhgroup": "ffdhe3072",
00:19:22.071  "digest": "sha384",
00:19:22.071  "state": "completed"
00:19:22.071  },
00:19:22.071  "cntlid": 65,
00:19:22.071  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:19:22.071  "listen_address": {
00:19:22.071  "adrfam": "IPv4",
00:19:22.071  "traddr": "10.0.0.3",
00:19:22.071  "trsvcid": "4420",
00:19:22.071  "trtype": "TCP"
00:19:22.071  },
00:19:22.071  "peer_address": {
00:19:22.071  "adrfam": "IPv4",
00:19:22.071  "traddr": "10.0.0.1",
00:19:22.071  "trsvcid": "44586",
00:19:22.071  "trtype": "TCP"
00:19:22.071  },
00:19:22.071  "qid": 0,
00:19:22.071  "state": "enabled",
00:19:22.071  "thread": "nvmf_tgt_poll_group_000"
00:19:22.071  }
00:19:22.071  ]'
00:19:22.071    19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:22.071   19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:22.071    19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:22.071   19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:19:22.071    19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:22.071   19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:22.071   19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:22.071   19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:22.330   19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGE3YjViYjg1ZTlkZDBmOWYzZDNkZjcxY2U3ZmU4YTkwN2FhODA4ZjNjYzZhYzQ26pGyPg==: --dhchap-ctrl-secret DHHC-1:03:YTMxZDZiM2E3ODg3ZjEyZDhlMzIwZDdjMzk1MDM2N2EyMzQ4ZTU3YTIyOGJiYmFjMDczZmM3MWU0YTQzMWFjYtOtav0=:
00:19:22.330   19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid 8f716199-a3ae-4f70-9a3a-0556e5b7497a -l 0 --dhchap-secret DHHC-1:00:ZGE3YjViYjg1ZTlkZDBmOWYzZDNkZjcxY2U3ZmU4YTkwN2FhODA4ZjNjYzZhYzQ26pGyPg==: --dhchap-ctrl-secret DHHC-1:03:YTMxZDZiM2E3ODg3ZjEyZDhlMzIwZDdjMzk1MDM2N2EyMzQ4ZTU3YTIyOGJiYmFjMDczZmM3MWU0YTQzMWFjYtOtav0=:
00:19:23.265   19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:23.265  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:23.265   19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:19:23.265   19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:23.265   19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:23.265   19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
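
The literal secrets handed to nvme connect use the DHHC-1 text representation defined for NVMe in-band authentication, DHHC-1:<hh>:<base64 key material>:, where the two-digit field records which hash the secret was transformed with (00 meaning no transform, and 01/02/03 commonly corresponding to SHA-256, SHA-384 and SHA-512), which is why the key0 through key3 secrets in this log carry different prefixes and lengths. When suitable secrets are not already on hand, recent nvme-cli can generate them; the invocation below is an assumption from memory and its flag names may differ between nvme-cli releases:

    # Assumed nvme-cli invocation: a 32-byte DH-HMAC-CHAP secret transformed
    # with SHA-256 for the given host NQN (verify flag names on your version).
    nvme gen-dhchap-key --key-length=32 --hmac=1 \
        --nqn nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
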
00:19:23.265   19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:23.265   19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:19:23.265   19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:19:23.522   19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1
00:19:23.522   19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:23.522   19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:19:23.522   19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:19:23.522   19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:19:23.522   19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:23.522   19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:23.522   19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:23.522   19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:23.522   19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:23.522   19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:23.522   19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:23.522   19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:23.780  
00:19:23.780    19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:23.780    19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:23.780    19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:24.039   19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:24.039    19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:24.039    19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:24.039    19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:24.039    19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:24.039   19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:24.039  {
00:19:24.039  "auth": {
00:19:24.039  "dhgroup": "ffdhe3072",
00:19:24.039  "digest": "sha384",
00:19:24.039  "state": "completed"
00:19:24.039  },
00:19:24.039  "cntlid": 67,
00:19:24.039  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:19:24.039  "listen_address": {
00:19:24.039  "adrfam": "IPv4",
00:19:24.039  "traddr": "10.0.0.3",
00:19:24.039  "trsvcid": "4420",
00:19:24.039  "trtype": "TCP"
00:19:24.039  },
00:19:24.039  "peer_address": {
00:19:24.039  "adrfam": "IPv4",
00:19:24.039  "traddr": "10.0.0.1",
00:19:24.039  "trsvcid": "44606",
00:19:24.039  "trtype": "TCP"
00:19:24.039  },
00:19:24.039  "qid": 0,
00:19:24.039  "state": "enabled",
00:19:24.039  "thread": "nvmf_tgt_poll_group_000"
00:19:24.039  }
00:19:24.039  ]'
00:19:24.039    19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:24.039   19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:24.039    19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:24.298   19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:19:24.298    19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:24.298   19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:24.298   19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:24.298   19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:24.556   19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzFjY2Q5MDJmZTQ0MTc2NmVjNmNlNDg4ODFmODJiZDkwdlvM: --dhchap-ctrl-secret DHHC-1:02:OGQ2YzEzMWZmNzkwZWRiNjQ3ODY5ZGVlNWEyY2ZhMjc3NDZiMjllMmQ4NTI4NmEws3c3BA==:
00:19:24.556   19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid 8f716199-a3ae-4f70-9a3a-0556e5b7497a -l 0 --dhchap-secret DHHC-1:01:MzFjY2Q5MDJmZTQ0MTc2NmVjNmNlNDg4ODFmODJiZDkwdlvM: --dhchap-ctrl-secret DHHC-1:02:OGQ2YzEzMWZmNzkwZWRiNjQ3ODY5ZGVlNWEyY2ZhMjc3NDZiMjllMmQ4NTI4NmEws3c3BA==:
00:19:25.123   19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:25.123  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:25.123   19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:19:25.123   19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:25.123   19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:25.123   19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:25.123   19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:25.123   19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:19:25.123   19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:19:25.381   19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2
00:19:25.381   19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:25.381   19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:19:25.381   19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:19:25.381   19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:19:25.381   19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:25.381   19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:25.381   19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:25.381   19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:25.381   19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:25.381   19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:25.381   19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:25.381   19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:25.640  
00:19:25.640    19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:25.640    19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:25.640    19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:26.206   19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
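
The backslashes in the [[ nvme0 == \n\v\m\e\0 ]] lines are only xtrace escaping the right-hand comparison pattern; the check itself is a plain string match against the controller name reported by bdev_nvme_get_controllers. An equivalent check that a script could gate on directly through jq's exit status (not how auth.sh does it, just an alternative):

    # Exits non-zero unless a controller named nvme0 is currently attached.
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers |
        jq -e 'any(.[]; .name == "nvme0")'
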
00:19:26.206    19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:26.206    19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:26.206    19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:26.206    19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:26.206   19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:26.206  {
00:19:26.206  "auth": {
00:19:26.206  "dhgroup": "ffdhe3072",
00:19:26.206  "digest": "sha384",
00:19:26.206  "state": "completed"
00:19:26.206  },
00:19:26.206  "cntlid": 69,
00:19:26.206  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:19:26.206  "listen_address": {
00:19:26.206  "adrfam": "IPv4",
00:19:26.206  "traddr": "10.0.0.3",
00:19:26.206  "trsvcid": "4420",
00:19:26.206  "trtype": "TCP"
00:19:26.206  },
00:19:26.206  "peer_address": {
00:19:26.206  "adrfam": "IPv4",
00:19:26.206  "traddr": "10.0.0.1",
00:19:26.206  "trsvcid": "36868",
00:19:26.206  "trtype": "TCP"
00:19:26.206  },
00:19:26.206  "qid": 0,
00:19:26.206  "state": "enabled",
00:19:26.206  "thread": "nvmf_tgt_poll_group_000"
00:19:26.206  }
00:19:26.206  ]'
00:19:26.206    19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:26.206   19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:26.206    19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:26.207   19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:19:26.207    19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:26.207   19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:26.207   19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:26.207   19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:26.465   19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzE5ZDMyY2Y1NTY0MGI2Nzk5ZWVjMjBiYmYyODllZWZkNTc1YTkzMGEwZWJkZDQ1YLrcsA==: --dhchap-ctrl-secret DHHC-1:01:N2Q3NGYzMDMzZWMzZDg1ZjhiYjRjMzg5Mzc0MzkyZjHYVRrk:
00:19:26.465   19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid 8f716199-a3ae-4f70-9a3a-0556e5b7497a -l 0 --dhchap-secret DHHC-1:02:MzE5ZDMyY2Y1NTY0MGI2Nzk5ZWVjMjBiYmYyODllZWZkNTc1YTkzMGEwZWJkZDQ1YLrcsA==: --dhchap-ctrl-secret DHHC-1:01:N2Q3NGYzMDMzZWMzZDg1ZjhiYjRjMzg5Mzc0MzkyZjHYVRrk:
00:19:27.032   19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:27.290  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:27.290   19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:19:27.290   19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:27.290   19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:27.290   19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:27.290   19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:27.290   19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:19:27.290   19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:19:27.549   19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3
00:19:27.549   19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:27.549   19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:19:27.549   19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:19:27.549   19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:19:27.549   19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:27.549   19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key3
00:19:27.549   19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:27.549   19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:27.549   19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:27.549   19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:19:27.549   19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:19:27.549   19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:19:27.807  
00:19:27.807    19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:27.807    19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:27.808    19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:28.066   19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:28.066    19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:28.066    19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:28.066    19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:28.066    19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:28.066   19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:28.066  {
00:19:28.066  "auth": {
00:19:28.066  "dhgroup": "ffdhe3072",
00:19:28.066  "digest": "sha384",
00:19:28.066  "state": "completed"
00:19:28.066  },
00:19:28.066  "cntlid": 71,
00:19:28.066  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:19:28.066  "listen_address": {
00:19:28.066  "adrfam": "IPv4",
00:19:28.066  "traddr": "10.0.0.3",
00:19:28.066  "trsvcid": "4420",
00:19:28.066  "trtype": "TCP"
00:19:28.066  },
00:19:28.066  "peer_address": {
00:19:28.066  "adrfam": "IPv4",
00:19:28.066  "traddr": "10.0.0.1",
00:19:28.066  "trsvcid": "36896",
00:19:28.066  "trtype": "TCP"
00:19:28.066  },
00:19:28.066  "qid": 0,
00:19:28.066  "state": "enabled",
00:19:28.066  "thread": "nvmf_tgt_poll_group_000"
00:19:28.066  }
00:19:28.066  ]'
00:19:28.066    19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:28.325   19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:28.325    19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:28.325   19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:19:28.325    19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:28.325   19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:28.325   19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:28.325   19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:28.583   19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjJkMDI3MTYwMzk0ZjM0Mzg1YTMxYTUxZDM0MmJmM2VkZmM2MWE2NzAyODM5NTY3NDM3YzQ3YWUzYjc2ZjJkOMNOdLE=:
00:19:28.583   19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid 8f716199-a3ae-4f70-9a3a-0556e5b7497a -l 0 --dhchap-secret DHHC-1:03:YjJkMDI3MTYwMzk0ZjM0Mzg1YTMxYTUxZDM0MmJmM2VkZmM2MWE2NzAyODM5NTY3NDM3YzQ3YWUzYjc2ZjJkOMNOdLE=:
00:19:29.520   19:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:29.520  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:29.520   19:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:19:29.520   19:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:29.520   19:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:29.520   19:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:29.520   19:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:19:29.520   19:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:29.520   19:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:19:29.520   19:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:19:29.777   19:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0
00:19:29.777   19:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:29.777   19:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:19:29.777   19:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:19:29.777   19:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:19:29.777   19:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:29.777   19:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:29.777   19:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:29.777   19:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:29.777   19:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:29.777   19:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:29.778   19:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:29.778   19:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:30.036  
00:19:30.036    19:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:30.036    19:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:30.036    19:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:30.295   19:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:30.295    19:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:30.295    19:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:30.295    19:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:30.295    19:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:30.295   19:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:30.295  {
00:19:30.295  "auth": {
00:19:30.295  "dhgroup": "ffdhe4096",
00:19:30.295  "digest": "sha384",
00:19:30.295  "state": "completed"
00:19:30.295  },
00:19:30.295  "cntlid": 73,
00:19:30.295  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:19:30.295  "listen_address": {
00:19:30.295  "adrfam": "IPv4",
00:19:30.295  "traddr": "10.0.0.3",
00:19:30.295  "trsvcid": "4420",
00:19:30.295  "trtype": "TCP"
00:19:30.295  },
00:19:30.295  "peer_address": {
00:19:30.295  "adrfam": "IPv4",
00:19:30.295  "traddr": "10.0.0.1",
00:19:30.295  "trsvcid": "36932",
00:19:30.295  "trtype": "TCP"
00:19:30.295  },
00:19:30.295  "qid": 0,
00:19:30.295  "state": "enabled",
00:19:30.295  "thread": "nvmf_tgt_poll_group_000"
00:19:30.295  }
00:19:30.295  ]'
00:19:30.295    19:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:30.554   19:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:30.554    19:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:30.554   19:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:19:30.554    19:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:30.554   19:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
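
An incidental detail visible across the successive qpair dumps: the cntlid climbs by two each round (61, 63, 65 and so on), which is consistent with every round creating two controllers on the subsystem, one for the SPDK host attach and one for the kernel nvme connect that follows, though the log itself does not state the allocation policy. If that progression ever needs to be inspected explicitly, the field is already present in the same RPC output:

    scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
        | jq -r '.[0].cntlid'
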
00:19:30.554   19:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:30.554   19:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:30.813   19:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGE3YjViYjg1ZTlkZDBmOWYzZDNkZjcxY2U3ZmU4YTkwN2FhODA4ZjNjYzZhYzQ26pGyPg==: --dhchap-ctrl-secret DHHC-1:03:YTMxZDZiM2E3ODg3ZjEyZDhlMzIwZDdjMzk1MDM2N2EyMzQ4ZTU3YTIyOGJiYmFjMDczZmM3MWU0YTQzMWFjYtOtav0=:
00:19:30.813   19:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid 8f716199-a3ae-4f70-9a3a-0556e5b7497a -l 0 --dhchap-secret DHHC-1:00:ZGE3YjViYjg1ZTlkZDBmOWYzZDNkZjcxY2U3ZmU4YTkwN2FhODA4ZjNjYzZhYzQ26pGyPg==: --dhchap-ctrl-secret DHHC-1:03:YTMxZDZiM2E3ODg3ZjEyZDhlMzIwZDdjMzk1MDM2N2EyMzQ4ZTU3YTIyOGJiYmFjMDczZmM3MWU0YTQzMWFjYtOtav0=:
00:19:31.379   19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:31.379  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:31.379   19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:19:31.379   19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:31.379   19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:31.379   19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:31.380   19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:31.380   19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:19:31.380   19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:19:31.638   19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1
00:19:31.639   19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:31.639   19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:19:31.639   19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:19:31.639   19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:19:31.639   19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:31.639   19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:31.639   19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:31.639   19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:31.639   19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:31.639   19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:31.639   19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:31.639   19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:32.209  
00:19:32.209    19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:32.209    19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:32.209    19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:32.209   19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:32.209    19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:32.209    19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:32.209    19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:32.209    19:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:32.209   19:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:32.209  {
00:19:32.209  "auth": {
00:19:32.209  "dhgroup": "ffdhe4096",
00:19:32.209  "digest": "sha384",
00:19:32.209  "state": "completed"
00:19:32.209  },
00:19:32.209  "cntlid": 75,
00:19:32.209  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:19:32.209  "listen_address": {
00:19:32.209  "adrfam": "IPv4",
00:19:32.209  "traddr": "10.0.0.3",
00:19:32.209  "trsvcid": "4420",
00:19:32.209  "trtype": "TCP"
00:19:32.209  },
00:19:32.209  "peer_address": {
00:19:32.209  "adrfam": "IPv4",
00:19:32.209  "traddr": "10.0.0.1",
00:19:32.209  "trsvcid": "36956",
00:19:32.209  "trtype": "TCP"
00:19:32.209  },
00:19:32.209  "qid": 0,
00:19:32.209  "state": "enabled",
00:19:32.209  "thread": "nvmf_tgt_poll_group_000"
00:19:32.209  }
00:19:32.209  ]'
00:19:32.209    19:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:32.470   19:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:32.470    19:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:32.470   19:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:19:32.470    19:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:32.470   19:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:32.470   19:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:32.470   19:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:32.729   19:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzFjY2Q5MDJmZTQ0MTc2NmVjNmNlNDg4ODFmODJiZDkwdlvM: --dhchap-ctrl-secret DHHC-1:02:OGQ2YzEzMWZmNzkwZWRiNjQ3ODY5ZGVlNWEyY2ZhMjc3NDZiMjllMmQ4NTI4NmEws3c3BA==:
00:19:32.729   19:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid 8f716199-a3ae-4f70-9a3a-0556e5b7497a -l 0 --dhchap-secret DHHC-1:01:MzFjY2Q5MDJmZTQ0MTc2NmVjNmNlNDg4ODFmODJiZDkwdlvM: --dhchap-ctrl-secret DHHC-1:02:OGQ2YzEzMWZmNzkwZWRiNjQ3ODY5ZGVlNWEyY2ZhMjc3NDZiMjllMmQ4NTI4NmEws3c3BA==:
00:19:33.296   19:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:33.296  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:33.296   19:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:19:33.296   19:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:33.296   19:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:33.296   19:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:33.296   19:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:33.296   19:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:19:33.296   19:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:19:33.554   19:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2
00:19:33.554   19:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:33.554   19:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:19:33.554   19:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:19:33.554   19:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:19:33.554   19:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:33.554   19:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:33.554   19:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:33.554   19:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:33.554   19:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:33.554   19:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:33.554   19:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:33.554   19:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:34.120  
00:19:34.120    19:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:34.121    19:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:34.121    19:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:34.121   19:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:34.121    19:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:34.121    19:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:34.121    19:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:34.121    19:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:34.121   19:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:34.121  {
00:19:34.121  "auth": {
00:19:34.121  "dhgroup": "ffdhe4096",
00:19:34.121  "digest": "sha384",
00:19:34.121  "state": "completed"
00:19:34.121  },
00:19:34.121  "cntlid": 77,
00:19:34.121  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:19:34.121  "listen_address": {
00:19:34.121  "adrfam": "IPv4",
00:19:34.121  "traddr": "10.0.0.3",
00:19:34.121  "trsvcid": "4420",
00:19:34.121  "trtype": "TCP"
00:19:34.121  },
00:19:34.121  "peer_address": {
00:19:34.121  "adrfam": "IPv4",
00:19:34.121  "traddr": "10.0.0.1",
00:19:34.121  "trsvcid": "36978",
00:19:34.121  "trtype": "TCP"
00:19:34.121  },
00:19:34.121  "qid": 0,
00:19:34.121  "state": "enabled",
00:19:34.121  "thread": "nvmf_tgt_poll_group_000"
00:19:34.121  }
00:19:34.121  ]'
00:19:34.380    19:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:34.380   19:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:34.380    19:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:34.380   19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:19:34.380    19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:34.380   19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:34.380   19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:34.380   19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:34.639   19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzE5ZDMyY2Y1NTY0MGI2Nzk5ZWVjMjBiYmYyODllZWZkNTc1YTkzMGEwZWJkZDQ1YLrcsA==: --dhchap-ctrl-secret DHHC-1:01:N2Q3NGYzMDMzZWMzZDg1ZjhiYjRjMzg5Mzc0MzkyZjHYVRrk:
00:19:34.639   19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid 8f716199-a3ae-4f70-9a3a-0556e5b7497a -l 0 --dhchap-secret DHHC-1:02:MzE5ZDMyY2Y1NTY0MGI2Nzk5ZWVjMjBiYmYyODllZWZkNTc1YTkzMGEwZWJkZDQ1YLrcsA==: --dhchap-ctrl-secret DHHC-1:01:N2Q3NGYzMDMzZWMzZDg1ZjhiYjRjMzg5Mzc0MzkyZjHYVRrk:
00:19:35.206   19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:35.206  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:35.206   19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:19:35.206   19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:35.206   19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:35.206   19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
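[editor's note] The per-iteration target-side setup traced above reduces to two RPCs: register the host on the subsystem with a DH-HMAC-CHAP key pair, then remove it again after the connection check. A minimal sketch, assuming the subsystem NQN, host NQN and key names from the trace and the target's default RPC socket (the trace itself goes through the rpc_cmd helper):

    # Register the host with a DH-HMAC-CHAP key pair (target side).
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # ... connect, verify and disconnect (see the trace lines that follow) ...

    # Remove the host again before the next loop iteration.
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a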
00:19:35.206   19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:35.206   19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:19:35.206   19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:19:35.465   19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3
00:19:35.465   19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:35.465   19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:19:35.465   19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:19:35.465   19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:19:35.465   19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:35.465   19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key3
00:19:35.465   19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:35.465   19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:35.465   19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:35.466   19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:19:35.466   19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:19:35.466   19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:19:36.033  
00:19:36.033    19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:36.033    19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:36.033    19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:36.292   19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:36.292    19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:36.292    19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:36.292    19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:36.292    19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:36.292   19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:36.292  {
00:19:36.292  "auth": {
00:19:36.292  "dhgroup": "ffdhe4096",
00:19:36.292  "digest": "sha384",
00:19:36.292  "state": "completed"
00:19:36.292  },
00:19:36.292  "cntlid": 79,
00:19:36.292  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:19:36.292  "listen_address": {
00:19:36.292  "adrfam": "IPv4",
00:19:36.292  "traddr": "10.0.0.3",
00:19:36.292  "trsvcid": "4420",
00:19:36.292  "trtype": "TCP"
00:19:36.292  },
00:19:36.292  "peer_address": {
00:19:36.292  "adrfam": "IPv4",
00:19:36.292  "traddr": "10.0.0.1",
00:19:36.292  "trsvcid": "33876",
00:19:36.292  "trtype": "TCP"
00:19:36.292  },
00:19:36.292  "qid": 0,
00:19:36.292  "state": "enabled",
00:19:36.292  "thread": "nvmf_tgt_poll_group_000"
00:19:36.292  }
00:19:36.292  ]'
00:19:36.292    19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:36.292   19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:36.292    19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:36.292   19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:19:36.292    19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:36.292   19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:36.292   19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:36.292   19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:36.551   19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjJkMDI3MTYwMzk0ZjM0Mzg1YTMxYTUxZDM0MmJmM2VkZmM2MWE2NzAyODM5NTY3NDM3YzQ3YWUzYjc2ZjJkOMNOdLE=:
00:19:36.551   19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid 8f716199-a3ae-4f70-9a3a-0556e5b7497a -l 0 --dhchap-secret DHHC-1:03:YjJkMDI3MTYwMzk0ZjM0Mzg1YTMxYTUxZDM0MmJmM2VkZmM2MWE2NzAyODM5NTY3NDM3YzQ3YWUzYjc2ZjJkOMNOdLE=:
00:19:37.118   19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:37.118  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:37.118   19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:19:37.118   19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:37.118   19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:37.118   19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
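[editor's note] On the host side, each iteration drives the SPDK bdev_nvme initiator through its dedicated RPC socket (/var/tmp/host.sock in this run). A condensed sketch of that flow, using only the commands visible in the trace and the ffdhe6144/key0 values of the iteration that starts below:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock"

    # Restrict the host to the digest/dhgroup under test.
    $RPC bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144

    # Attach to the target, authenticating with key0/ckey0.
    $RPC bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Confirm the controller exists, then tear it down again.
    $RPC bdev_nvme_get_controllers | jq -r '.[].name'   # expected: nvme0
    $RPC bdev_nvme_detach_controller nvme0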
00:19:37.118   19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:19:37.118   19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:37.118   19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:19:37.118   19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:19:37.377   19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0
00:19:37.377   19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:37.377   19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:19:37.377   19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:19:37.377   19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:19:37.377   19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:37.377   19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:37.377   19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:37.377   19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:37.377   19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:37.377   19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:37.377   19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:37.377   19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:37.945  
00:19:37.945    19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:37.945    19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:37.945    19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:38.203   19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:38.203    19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:38.203    19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:38.203    19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:38.203    19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:38.203   19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:38.203  {
00:19:38.203  "auth": {
00:19:38.203  "dhgroup": "ffdhe6144",
00:19:38.203  "digest": "sha384",
00:19:38.203  "state": "completed"
00:19:38.203  },
00:19:38.203  "cntlid": 81,
00:19:38.203  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:19:38.203  "listen_address": {
00:19:38.203  "adrfam": "IPv4",
00:19:38.203  "traddr": "10.0.0.3",
00:19:38.203  "trsvcid": "4420",
00:19:38.203  "trtype": "TCP"
00:19:38.203  },
00:19:38.204  "peer_address": {
00:19:38.204  "adrfam": "IPv4",
00:19:38.204  "traddr": "10.0.0.1",
00:19:38.204  "trsvcid": "33900",
00:19:38.204  "trtype": "TCP"
00:19:38.204  },
00:19:38.204  "qid": 0,
00:19:38.204  "state": "enabled",
00:19:38.204  "thread": "nvmf_tgt_poll_group_000"
00:19:38.204  }
00:19:38.204  ]'
00:19:38.204    19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:38.204   19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:38.204    19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:38.204   19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:19:38.204    19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:38.462   19:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:38.462   19:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:38.462   19:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:38.721   19:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGE3YjViYjg1ZTlkZDBmOWYzZDNkZjcxY2U3ZmU4YTkwN2FhODA4ZjNjYzZhYzQ26pGyPg==: --dhchap-ctrl-secret DHHC-1:03:YTMxZDZiM2E3ODg3ZjEyZDhlMzIwZDdjMzk1MDM2N2EyMzQ4ZTU3YTIyOGJiYmFjMDczZmM3MWU0YTQzMWFjYtOtav0=:
00:19:38.721   19:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid 8f716199-a3ae-4f70-9a3a-0556e5b7497a -l 0 --dhchap-secret DHHC-1:00:ZGE3YjViYjg1ZTlkZDBmOWYzZDNkZjcxY2U3ZmU4YTkwN2FhODA4ZjNjYzZhYzQ26pGyPg==: --dhchap-ctrl-secret DHHC-1:03:YTMxZDZiM2E3ODg3ZjEyZDhlMzIwZDdjMzk1MDM2N2EyMzQ4ZTU3YTIyOGJiYmFjMDczZmM3MWU0YTQzMWFjYtOtav0=:
00:19:39.288   19:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:39.288  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:39.288   19:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:19:39.288   19:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:39.288   19:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:39.288   19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:39.288   19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:39.288   19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:19:39.288   19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:19:39.547   19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1
00:19:39.547   19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:39.547   19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:19:39.547   19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:19:39.547   19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:19:39.547   19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:39.547   19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:39.547   19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:39.547   19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:39.547   19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:39.547   19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:39.547   19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:39.547   19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:40.116  
00:19:40.116    19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:40.116    19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:40.116    19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:40.374   19:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:40.374    19:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:40.374    19:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:40.374    19:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:40.374    19:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:40.374   19:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:40.374  {
00:19:40.374  "auth": {
00:19:40.374  "dhgroup": "ffdhe6144",
00:19:40.374  "digest": "sha384",
00:19:40.374  "state": "completed"
00:19:40.374  },
00:19:40.374  "cntlid": 83,
00:19:40.374  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:19:40.374  "listen_address": {
00:19:40.374  "adrfam": "IPv4",
00:19:40.374  "traddr": "10.0.0.3",
00:19:40.374  "trsvcid": "4420",
00:19:40.374  "trtype": "TCP"
00:19:40.374  },
00:19:40.374  "peer_address": {
00:19:40.374  "adrfam": "IPv4",
00:19:40.374  "traddr": "10.0.0.1",
00:19:40.374  "trsvcid": "33930",
00:19:40.374  "trtype": "TCP"
00:19:40.374  },
00:19:40.374  "qid": 0,
00:19:40.374  "state": "enabled",
00:19:40.374  "thread": "nvmf_tgt_poll_group_000"
00:19:40.374  }
00:19:40.374  ]'
00:19:40.374    19:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:40.633   19:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:40.633    19:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:40.633   19:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:19:40.633    19:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:40.633   19:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:40.633   19:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:40.633   19:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:40.893   19:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzFjY2Q5MDJmZTQ0MTc2NmVjNmNlNDg4ODFmODJiZDkwdlvM: --dhchap-ctrl-secret DHHC-1:02:OGQ2YzEzMWZmNzkwZWRiNjQ3ODY5ZGVlNWEyY2ZhMjc3NDZiMjllMmQ4NTI4NmEws3c3BA==:
00:19:40.893   19:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid 8f716199-a3ae-4f70-9a3a-0556e5b7497a -l 0 --dhchap-secret DHHC-1:01:MzFjY2Q5MDJmZTQ0MTc2NmVjNmNlNDg4ODFmODJiZDkwdlvM: --dhchap-ctrl-secret DHHC-1:02:OGQ2YzEzMWZmNzkwZWRiNjQ3ODY5ZGVlNWEyY2ZhMjc3NDZiMjllMmQ4NTI4NmEws3c3BA==:
00:19:41.460   19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:41.460  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:41.460   19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:19:41.460   19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:41.460   19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:41.719   19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
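[editor's note] The verification step in each iteration asks the target for the subsystem's qpairs and checks the negotiated auth fields with jq. A condensed sketch of those checks (expected values taken from the surrounding trace; piping the RPC output straight into jq is a simplification of the script's qpairs variable):

    qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

    # The iteration passes when the qpair reports the digest and dhgroup
    # under test and a completed authentication state.
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]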
00:19:41.719   19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:41.719   19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:19:41.719   19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:19:41.719   19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2
00:19:41.719   19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:41.719   19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:19:41.719   19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:19:41.719   19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:19:41.719   19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:41.719   19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:41.719   19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:41.719   19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:41.719   19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:41.719   19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:41.719   19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:41.719   19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:42.287  
00:19:42.287    19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:42.287    19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:42.287    19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:42.545   19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:42.545    19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:42.545    19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:42.545    19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:42.545    19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:42.545   19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:42.545  {
00:19:42.545  "auth": {
00:19:42.545  "dhgroup": "ffdhe6144",
00:19:42.545  "digest": "sha384",
00:19:42.545  "state": "completed"
00:19:42.545  },
00:19:42.545  "cntlid": 85,
00:19:42.545  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:19:42.545  "listen_address": {
00:19:42.545  "adrfam": "IPv4",
00:19:42.545  "traddr": "10.0.0.3",
00:19:42.545  "trsvcid": "4420",
00:19:42.545  "trtype": "TCP"
00:19:42.546  },
00:19:42.546  "peer_address": {
00:19:42.546  "adrfam": "IPv4",
00:19:42.546  "traddr": "10.0.0.1",
00:19:42.546  "trsvcid": "33954",
00:19:42.546  "trtype": "TCP"
00:19:42.546  },
00:19:42.546  "qid": 0,
00:19:42.546  "state": "enabled",
00:19:42.546  "thread": "nvmf_tgt_poll_group_000"
00:19:42.546  }
00:19:42.546  ]'
00:19:42.546    19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:42.546   19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:42.546    19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:42.804   19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:19:42.804    19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:42.804   19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:42.804   19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:42.804   19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:43.062   19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzE5ZDMyY2Y1NTY0MGI2Nzk5ZWVjMjBiYmYyODllZWZkNTc1YTkzMGEwZWJkZDQ1YLrcsA==: --dhchap-ctrl-secret DHHC-1:01:N2Q3NGYzMDMzZWMzZDg1ZjhiYjRjMzg5Mzc0MzkyZjHYVRrk:
00:19:43.062   19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid 8f716199-a3ae-4f70-9a3a-0556e5b7497a -l 0 --dhchap-secret DHHC-1:02:MzE5ZDMyY2Y1NTY0MGI2Nzk5ZWVjMjBiYmYyODllZWZkNTc1YTkzMGEwZWJkZDQ1YLrcsA==: --dhchap-ctrl-secret DHHC-1:01:N2Q3NGYzMDMzZWMzZDg1ZjhiYjRjMzg5Mzc0MzkyZjHYVRrk:
00:19:43.646   19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:43.646  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:43.646   19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:19:43.646   19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:43.646   19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:43.646   19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:43.646   19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:43.646   19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:19:43.646   19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:19:43.909   19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3
00:19:43.910   19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:43.910   19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:19:43.910   19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:19:43.910   19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:19:43.910   19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:43.910   19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key3
00:19:43.910   19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:43.910   19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:43.910   19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:43.910   19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:19:43.910   19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:19:43.910   19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:19:44.476  
00:19:44.476    19:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:44.476    19:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:44.476    19:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:44.735   19:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:44.735    19:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:44.735    19:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:44.735    19:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:44.735    19:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:44.735   19:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:44.735  {
00:19:44.735  "auth": {
00:19:44.735  "dhgroup": "ffdhe6144",
00:19:44.735  "digest": "sha384",
00:19:44.735  "state": "completed"
00:19:44.735  },
00:19:44.735  "cntlid": 87,
00:19:44.735  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:19:44.735  "listen_address": {
00:19:44.735  "adrfam": "IPv4",
00:19:44.735  "traddr": "10.0.0.3",
00:19:44.735  "trsvcid": "4420",
00:19:44.735  "trtype": "TCP"
00:19:44.735  },
00:19:44.735  "peer_address": {
00:19:44.735  "adrfam": "IPv4",
00:19:44.735  "traddr": "10.0.0.1",
00:19:44.735  "trsvcid": "33978",
00:19:44.735  "trtype": "TCP"
00:19:44.735  },
00:19:44.735  "qid": 0,
00:19:44.735  "state": "enabled",
00:19:44.735  "thread": "nvmf_tgt_poll_group_000"
00:19:44.735  }
00:19:44.735  ]'
00:19:44.735    19:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:44.735   19:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:44.994    19:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:44.994   19:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:19:44.994    19:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:44.994   19:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:44.994   19:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:44.994   19:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:45.253   19:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjJkMDI3MTYwMzk0ZjM0Mzg1YTMxYTUxZDM0MmJmM2VkZmM2MWE2NzAyODM5NTY3NDM3YzQ3YWUzYjc2ZjJkOMNOdLE=:
00:19:45.253   19:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid 8f716199-a3ae-4f70-9a3a-0556e5b7497a -l 0 --dhchap-secret DHHC-1:03:YjJkMDI3MTYwMzk0ZjM0Mzg1YTMxYTUxZDM0MmJmM2VkZmM2MWE2NzAyODM5NTY3NDM3YzQ3YWUzYjc2ZjJkOMNOdLE=:
00:19:45.820   19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:45.820  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:45.820   19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:19:45.820   19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:45.820   19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:45.820   19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
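[editor's note] Besides the SPDK initiator, every iteration also exercises the kernel host through nvme-cli, passing the DH-HMAC-CHAP secrets on the command line. Schematically, with the same flags as in the trace (the DHHC-1 strings below are placeholders; the real secrets are generated earlier in the test and appear verbatim above):

    # Connect the kernel NVMe/TCP host, authenticating with the generated secrets.
    nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a \
        --hostid 8f716199-a3ae-4f70-9a3a-0556e5b7497a -l 0 \
        --dhchap-secret "DHHC-1:..." --dhchap-ctrl-secret "DHHC-1:..."   # placeholders

    # Drop the connection again once the controller is up.
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0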
00:19:45.820   19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:19:45.820   19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:45.820   19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:19:45.820   19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:19:46.079   19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0
00:19:46.079   19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:46.079   19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:19:46.079   19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:19:46.079   19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:19:46.079   19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:46.079   19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:46.079   19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:46.079   19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:46.079   19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:46.079   19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:46.079   19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:46.079   19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:46.646  
00:19:46.646    19:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:46.646    19:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:46.646    19:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:46.904   19:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:46.904    19:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:46.904    19:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:46.904    19:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:46.904    19:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:46.904   19:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:46.904  {
00:19:46.904  "auth": {
00:19:46.904  "dhgroup": "ffdhe8192",
00:19:46.904  "digest": "sha384",
00:19:46.904  "state": "completed"
00:19:46.904  },
00:19:46.904  "cntlid": 89,
00:19:46.904  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:19:46.904  "listen_address": {
00:19:46.904  "adrfam": "IPv4",
00:19:46.904  "traddr": "10.0.0.3",
00:19:46.904  "trsvcid": "4420",
00:19:46.904  "trtype": "TCP"
00:19:46.904  },
00:19:46.904  "peer_address": {
00:19:46.904  "adrfam": "IPv4",
00:19:46.904  "traddr": "10.0.0.1",
00:19:46.904  "trsvcid": "53046",
00:19:46.904  "trtype": "TCP"
00:19:46.904  },
00:19:46.904  "qid": 0,
00:19:46.904  "state": "enabled",
00:19:46.904  "thread": "nvmf_tgt_poll_group_000"
00:19:46.904  }
00:19:46.904  ]'
00:19:46.904    19:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:46.904   19:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:46.904    19:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:47.164   19:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:19:47.164    19:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:47.164   19:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:47.164   19:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:47.164   19:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:47.423   19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGE3YjViYjg1ZTlkZDBmOWYzZDNkZjcxY2U3ZmU4YTkwN2FhODA4ZjNjYzZhYzQ26pGyPg==: --dhchap-ctrl-secret DHHC-1:03:YTMxZDZiM2E3ODg3ZjEyZDhlMzIwZDdjMzk1MDM2N2EyMzQ4ZTU3YTIyOGJiYmFjMDczZmM3MWU0YTQzMWFjYtOtav0=:
00:19:47.423   19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid 8f716199-a3ae-4f70-9a3a-0556e5b7497a -l 0 --dhchap-secret DHHC-1:00:ZGE3YjViYjg1ZTlkZDBmOWYzZDNkZjcxY2U3ZmU4YTkwN2FhODA4ZjNjYzZhYzQ26pGyPg==: --dhchap-ctrl-secret DHHC-1:03:YTMxZDZiM2E3ODg3ZjEyZDhlMzIwZDdjMzk1MDM2N2EyMzQ4ZTU3YTIyOGJiYmFjMDczZmM3MWU0YTQzMWFjYtOtav0=:
00:19:47.989   19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:47.989  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:47.989   19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:19:47.989   19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:47.989   19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:47.989   19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:47.989   19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:47.989   19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:19:47.989   19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:19:48.247   19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1
00:19:48.247   19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:48.247   19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:19:48.247   19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:19:48.248   19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:19:48.248   19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:48.248   19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:48.248   19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:48.248   19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:48.248   19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:48.248   19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:48.248   19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:48.248   19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:48.814  
00:19:48.814    19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:48.814    19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:48.814    19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:49.072   19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:49.072    19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:49.072    19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:49.072    19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:49.072    19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:49.072   19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:49.072  {
00:19:49.072  "auth": {
00:19:49.072  "dhgroup": "ffdhe8192",
00:19:49.072  "digest": "sha384",
00:19:49.072  "state": "completed"
00:19:49.072  },
00:19:49.072  "cntlid": 91,
00:19:49.072  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:19:49.072  "listen_address": {
00:19:49.072  "adrfam": "IPv4",
00:19:49.072  "traddr": "10.0.0.3",
00:19:49.072  "trsvcid": "4420",
00:19:49.072  "trtype": "TCP"
00:19:49.072  },
00:19:49.072  "peer_address": {
00:19:49.072  "adrfam": "IPv4",
00:19:49.072  "traddr": "10.0.0.1",
00:19:49.072  "trsvcid": "53080",
00:19:49.072  "trtype": "TCP"
00:19:49.072  },
00:19:49.072  "qid": 0,
00:19:49.072  "state": "enabled",
00:19:49.072  "thread": "nvmf_tgt_poll_group_000"
00:19:49.072  }
00:19:49.072  ]'
00:19:49.072    19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:49.331   19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:49.331    19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:49.331   19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:19:49.331    19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:49.331   19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:49.331   19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:49.331   19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:49.590   19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzFjY2Q5MDJmZTQ0MTc2NmVjNmNlNDg4ODFmODJiZDkwdlvM: --dhchap-ctrl-secret DHHC-1:02:OGQ2YzEzMWZmNzkwZWRiNjQ3ODY5ZGVlNWEyY2ZhMjc3NDZiMjllMmQ4NTI4NmEws3c3BA==:
00:19:49.590   19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid 8f716199-a3ae-4f70-9a3a-0556e5b7497a -l 0 --dhchap-secret DHHC-1:01:MzFjY2Q5MDJmZTQ0MTc2NmVjNmNlNDg4ODFmODJiZDkwdlvM: --dhchap-ctrl-secret DHHC-1:02:OGQ2YzEzMWZmNzkwZWRiNjQ3ODY5ZGVlNWEyY2ZhMjc3NDZiMjllMmQ4NTI4NmEws3c3BA==:
00:19:50.157   19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:50.157  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:50.157   19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:19:50.157   19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:50.157   19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:50.157   19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
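[editor's note] Pulling the traced script lines together, this stretch of target/auth.sh is a nested sweep over DH groups and key indices with the digest fixed at sha384. In outline, reconstructed from the @119, @120, @121 and @123 markers above (an approximation, not the literal script):

    for dhgroup in "${dhgroups[@]}"; do        # ffdhe4096, ffdhe6144, ffdhe8192 in this stretch
        for keyid in "${!keys[@]}"; do         # key0 .. key3
            # Allow only the digest/dhgroup under test on the host side.
            hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
            # Register, connect, verify and disconnect with that key.
            connect_authenticate sha384 "$dhgroup" "$keyid"
        done
    done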
00:19:50.157   19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:50.157   19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:19:50.157   19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:19:50.416   19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2
00:19:50.416   19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:50.416   19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:19:50.416   19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:19:50.416   19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:19:50.416   19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:50.416   19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:50.416   19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:50.416   19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:50.416   19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:50.416   19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:50.416   19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:50.416   19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:50.983  
00:19:51.242    19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:51.242    19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:51.242    19:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:51.500   19:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:51.500    19:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:51.500    19:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:51.500    19:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:51.500    19:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:51.500   19:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:51.500  {
00:19:51.500  "auth": {
00:19:51.500  "dhgroup": "ffdhe8192",
00:19:51.500  "digest": "sha384",
00:19:51.501  "state": "completed"
00:19:51.501  },
00:19:51.501  "cntlid": 93,
00:19:51.501  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:19:51.501  "listen_address": {
00:19:51.501  "adrfam": "IPv4",
00:19:51.501  "traddr": "10.0.0.3",
00:19:51.501  "trsvcid": "4420",
00:19:51.501  "trtype": "TCP"
00:19:51.501  },
00:19:51.501  "peer_address": {
00:19:51.501  "adrfam": "IPv4",
00:19:51.501  "traddr": "10.0.0.1",
00:19:51.501  "trsvcid": "53090",
00:19:51.501  "trtype": "TCP"
00:19:51.501  },
00:19:51.501  "qid": 0,
00:19:51.501  "state": "enabled",
00:19:51.501  "thread": "nvmf_tgt_poll_group_000"
00:19:51.501  }
00:19:51.501  ]'
00:19:51.501    19:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:51.501   19:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:51.501    19:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:51.501   19:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:19:51.501    19:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:51.501   19:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:51.501   19:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:51.501   19:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:51.759   19:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzE5ZDMyY2Y1NTY0MGI2Nzk5ZWVjMjBiYmYyODllZWZkNTc1YTkzMGEwZWJkZDQ1YLrcsA==: --dhchap-ctrl-secret DHHC-1:01:N2Q3NGYzMDMzZWMzZDg1ZjhiYjRjMzg5Mzc0MzkyZjHYVRrk:
00:19:51.759   19:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid 8f716199-a3ae-4f70-9a3a-0556e5b7497a -l 0 --dhchap-secret DHHC-1:02:MzE5ZDMyY2Y1NTY0MGI2Nzk5ZWVjMjBiYmYyODllZWZkNTc1YTkzMGEwZWJkZDQ1YLrcsA==: --dhchap-ctrl-secret DHHC-1:01:N2Q3NGYzMDMzZWMzZDg1ZjhiYjRjMzg5Mzc0MzkyZjHYVRrk:
00:19:52.326   19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:52.326  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:52.326   19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:19:52.327   19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:52.327   19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:52.327   19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:52.327   19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:52.327   19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:19:52.327   19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:19:52.894   19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3
00:19:52.894   19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:52.894   19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:19:52.894   19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:19:52.894   19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:19:52.894   19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:52.894   19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key3
00:19:52.894   19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:52.894   19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:52.894   19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:52.894   19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:19:52.894   19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:19:52.894   19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:19:53.460  
00:19:53.460    19:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:53.460    19:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:53.460    19:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:53.718   19:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:53.718    19:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:53.718    19:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:53.718    19:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:53.718    19:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:53.718   19:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:53.718  {
00:19:53.718  "auth": {
00:19:53.718  "dhgroup": "ffdhe8192",
00:19:53.718  "digest": "sha384",
00:19:53.718  "state": "completed"
00:19:53.718  },
00:19:53.718  "cntlid": 95,
00:19:53.718  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:19:53.718  "listen_address": {
00:19:53.718  "adrfam": "IPv4",
00:19:53.718  "traddr": "10.0.0.3",
00:19:53.718  "trsvcid": "4420",
00:19:53.718  "trtype": "TCP"
00:19:53.718  },
00:19:53.718  "peer_address": {
00:19:53.718  "adrfam": "IPv4",
00:19:53.718  "traddr": "10.0.0.1",
00:19:53.718  "trsvcid": "53124",
00:19:53.718  "trtype": "TCP"
00:19:53.718  },
00:19:53.718  "qid": 0,
00:19:53.718  "state": "enabled",
00:19:53.718  "thread": "nvmf_tgt_poll_group_000"
00:19:53.718  }
00:19:53.718  ]'
00:19:53.718    19:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:53.718   19:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:53.718    19:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:53.718   19:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:19:53.718    19:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:53.718   19:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:53.718   19:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:53.718   19:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:53.977   19:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjJkMDI3MTYwMzk0ZjM0Mzg1YTMxYTUxZDM0MmJmM2VkZmM2MWE2NzAyODM5NTY3NDM3YzQ3YWUzYjc2ZjJkOMNOdLE=:
00:19:53.977   19:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid 8f716199-a3ae-4f70-9a3a-0556e5b7497a -l 0 --dhchap-secret DHHC-1:03:YjJkMDI3MTYwMzk0ZjM0Mzg1YTMxYTUxZDM0MmJmM2VkZmM2MWE2NzAyODM5NTY3NDM3YzQ3YWUzYjc2ZjJkOMNOdLE=:
00:19:54.912   19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:54.912  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:54.912   19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:19:54.912   19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:54.912   19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:54.912   19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
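Judging from the auth.sh line numbers visible in the xtrace (118-123), the sha512 sweep that begins below is driven by the same loop nest as the preceding sha384 passes. A sketch of that structure, reconstructed from the trace rather than quoted verbatim from the script, is:

    for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
          hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
          connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
      done
    done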
00:19:54.912   19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}"
00:19:54.912   19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:19:54.912   19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:54.912   19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:19:54.912   19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:19:54.912   19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0
00:19:54.912   19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:54.912   19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:19:54.912   19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:19:54.912   19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:19:54.912   19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:54.912   19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:54.912   19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:54.912   19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:54.912   19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:54.912   19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:54.912   19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:54.912   19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:55.171  
00:19:55.171    19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:55.171    19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:55.171    19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:55.429   19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:55.429    19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:55.429    19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:55.429    19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:55.429    19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:55.429   19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:55.429  {
00:19:55.429  "auth": {
00:19:55.429  "dhgroup": "null",
00:19:55.429  "digest": "sha512",
00:19:55.429  "state": "completed"
00:19:55.429  },
00:19:55.429  "cntlid": 97,
00:19:55.429  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:19:55.429  "listen_address": {
00:19:55.429  "adrfam": "IPv4",
00:19:55.429  "traddr": "10.0.0.3",
00:19:55.429  "trsvcid": "4420",
00:19:55.429  "trtype": "TCP"
00:19:55.429  },
00:19:55.429  "peer_address": {
00:19:55.429  "adrfam": "IPv4",
00:19:55.429  "traddr": "10.0.0.1",
00:19:55.429  "trsvcid": "53146",
00:19:55.429  "trtype": "TCP"
00:19:55.429  },
00:19:55.429  "qid": 0,
00:19:55.429  "state": "enabled",
00:19:55.429  "thread": "nvmf_tgt_poll_group_000"
00:19:55.429  }
00:19:55.429  ]'
00:19:55.429    19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:55.693   19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:19:55.693    19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:55.693   19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:19:55.693    19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:55.693   19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:55.693   19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:55.693   19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:55.952   19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGE3YjViYjg1ZTlkZDBmOWYzZDNkZjcxY2U3ZmU4YTkwN2FhODA4ZjNjYzZhYzQ26pGyPg==: --dhchap-ctrl-secret DHHC-1:03:YTMxZDZiM2E3ODg3ZjEyZDhlMzIwZDdjMzk1MDM2N2EyMzQ4ZTU3YTIyOGJiYmFjMDczZmM3MWU0YTQzMWFjYtOtav0=:
00:19:55.952   19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid 8f716199-a3ae-4f70-9a3a-0556e5b7497a -l 0 --dhchap-secret DHHC-1:00:ZGE3YjViYjg1ZTlkZDBmOWYzZDNkZjcxY2U3ZmU4YTkwN2FhODA4ZjNjYzZhYzQ26pGyPg==: --dhchap-ctrl-secret DHHC-1:03:YTMxZDZiM2E3ODg3ZjEyZDhlMzIwZDdjMzk1MDM2N2EyMzQ4ZTU3YTIyOGJiYmFjMDczZmM3MWU0YTQzMWFjYtOtav0=:
00:19:56.520   19:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:56.520  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:56.520   19:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:19:56.520   19:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:56.520   19:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:56.520   19:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:56.520   19:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:56.520   19:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:19:56.520   19:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:19:57.087   19:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1
00:19:57.087   19:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:57.087   19:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:19:57.087   19:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:19:57.087   19:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:19:57.087   19:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:57.087   19:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:57.087   19:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:57.087   19:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:57.087   19:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:57.087   19:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:57.087   19:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:57.087   19:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:57.346  
00:19:57.346    19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:57.346    19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:57.346    19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:57.605   19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:57.605    19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:57.605    19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:57.605    19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:57.605    19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:57.605   19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:57.605  {
00:19:57.605  "auth": {
00:19:57.605  "dhgroup": "null",
00:19:57.605  "digest": "sha512",
00:19:57.605  "state": "completed"
00:19:57.605  },
00:19:57.605  "cntlid": 99,
00:19:57.605  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:19:57.605  "listen_address": {
00:19:57.605  "adrfam": "IPv4",
00:19:57.605  "traddr": "10.0.0.3",
00:19:57.605  "trsvcid": "4420",
00:19:57.605  "trtype": "TCP"
00:19:57.605  },
00:19:57.605  "peer_address": {
00:19:57.605  "adrfam": "IPv4",
00:19:57.605  "traddr": "10.0.0.1",
00:19:57.605  "trsvcid": "33904",
00:19:57.605  "trtype": "TCP"
00:19:57.605  },
00:19:57.605  "qid": 0,
00:19:57.605  "state": "enabled",
00:19:57.605  "thread": "nvmf_tgt_poll_group_000"
00:19:57.605  }
00:19:57.605  ]'
00:19:57.605    19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:57.605   19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:19:57.605    19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:57.605   19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:19:57.605    19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:57.605   19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:57.605   19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:57.605   19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:57.863   19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzFjY2Q5MDJmZTQ0MTc2NmVjNmNlNDg4ODFmODJiZDkwdlvM: --dhchap-ctrl-secret DHHC-1:02:OGQ2YzEzMWZmNzkwZWRiNjQ3ODY5ZGVlNWEyY2ZhMjc3NDZiMjllMmQ4NTI4NmEws3c3BA==:
00:19:57.863   19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid 8f716199-a3ae-4f70-9a3a-0556e5b7497a -l 0 --dhchap-secret DHHC-1:01:MzFjY2Q5MDJmZTQ0MTc2NmVjNmNlNDg4ODFmODJiZDkwdlvM: --dhchap-ctrl-secret DHHC-1:02:OGQ2YzEzMWZmNzkwZWRiNjQ3ODY5ZGVlNWEyY2ZhMjc3NDZiMjllMmQ4NTI4NmEws3c3BA==:
00:19:58.797   19:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:58.797  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:58.797   19:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:19:58.797   19:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:58.797   19:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:58.797   19:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:58.797   19:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:58.797   19:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:19:58.797   19:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:19:59.056   19:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2
00:19:59.056   19:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:59.056   19:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:19:59.056   19:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:19:59.056   19:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:19:59.056   19:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:59.056   19:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:59.056   19:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:59.056   19:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:59.056   19:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:59.056   19:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:59.056   19:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:59.056   19:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:59.315  
00:19:59.315    19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:59.315    19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:59.315    19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:59.574   19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:59.574    19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:59.574    19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:59.574    19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:59.574    19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:59.574   19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:59.574  {
00:19:59.574  "auth": {
00:19:59.574  "dhgroup": "null",
00:19:59.574  "digest": "sha512",
00:19:59.574  "state": "completed"
00:19:59.574  },
00:19:59.574  "cntlid": 101,
00:19:59.574  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:19:59.574  "listen_address": {
00:19:59.574  "adrfam": "IPv4",
00:19:59.574  "traddr": "10.0.0.3",
00:19:59.574  "trsvcid": "4420",
00:19:59.574  "trtype": "TCP"
00:19:59.574  },
00:19:59.574  "peer_address": {
00:19:59.574  "adrfam": "IPv4",
00:19:59.574  "traddr": "10.0.0.1",
00:19:59.574  "trsvcid": "33932",
00:19:59.574  "trtype": "TCP"
00:19:59.574  },
00:19:59.574  "qid": 0,
00:19:59.574  "state": "enabled",
00:19:59.574  "thread": "nvmf_tgt_poll_group_000"
00:19:59.574  }
00:19:59.574  ]'
00:19:59.574    19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:59.574   19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:19:59.574    19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:59.832   19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:19:59.832    19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:59.832   19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:59.832   19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:59.832   19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:00.090   19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzE5ZDMyY2Y1NTY0MGI2Nzk5ZWVjMjBiYmYyODllZWZkNTc1YTkzMGEwZWJkZDQ1YLrcsA==: --dhchap-ctrl-secret DHHC-1:01:N2Q3NGYzMDMzZWMzZDg1ZjhiYjRjMzg5Mzc0MzkyZjHYVRrk:
00:20:00.090   19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid 8f716199-a3ae-4f70-9a3a-0556e5b7497a -l 0 --dhchap-secret DHHC-1:02:MzE5ZDMyY2Y1NTY0MGI2Nzk5ZWVjMjBiYmYyODllZWZkNTc1YTkzMGEwZWJkZDQ1YLrcsA==: --dhchap-ctrl-secret DHHC-1:01:N2Q3NGYzMDMzZWMzZDg1ZjhiYjRjMzg5Mzc0MzkyZjHYVRrk:
00:20:00.657   19:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:00.657  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:00.657   19:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:20:00.657   19:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:00.657   19:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:00.657   19:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:00.657   19:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:00.657   19:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:20:00.657   19:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:20:00.916   19:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3
00:20:00.916   19:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:00.916   19:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:20:00.916   19:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:20:00.916   19:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:20:00.916   19:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:00.916   19:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key3
00:20:00.916   19:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:00.916   19:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:00.916   19:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:00.916   19:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:20:00.916   19:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:00.916   19:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:01.175  
00:20:01.175    19:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:01.175    19:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:01.175    19:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:01.434   19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:01.434    19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:01.434    19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:01.434    19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:01.434    19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:01.434   19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:01.434  {
00:20:01.434  "auth": {
00:20:01.434  "dhgroup": "null",
00:20:01.434  "digest": "sha512",
00:20:01.434  "state": "completed"
00:20:01.434  },
00:20:01.434  "cntlid": 103,
00:20:01.434  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:20:01.434  "listen_address": {
00:20:01.434  "adrfam": "IPv4",
00:20:01.434  "traddr": "10.0.0.3",
00:20:01.434  "trsvcid": "4420",
00:20:01.434  "trtype": "TCP"
00:20:01.434  },
00:20:01.434  "peer_address": {
00:20:01.434  "adrfam": "IPv4",
00:20:01.434  "traddr": "10.0.0.1",
00:20:01.434  "trsvcid": "33960",
00:20:01.434  "trtype": "TCP"
00:20:01.434  },
00:20:01.434  "qid": 0,
00:20:01.434  "state": "enabled",
00:20:01.434  "thread": "nvmf_tgt_poll_group_000"
00:20:01.434  }
00:20:01.434  ]'
00:20:01.434    19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:01.434   19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:20:01.434    19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:01.693   19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:20:01.693    19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:01.693   19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:01.693   19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:01.693   19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:01.952   19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjJkMDI3MTYwMzk0ZjM0Mzg1YTMxYTUxZDM0MmJmM2VkZmM2MWE2NzAyODM5NTY3NDM3YzQ3YWUzYjc2ZjJkOMNOdLE=:
00:20:01.952   19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid 8f716199-a3ae-4f70-9a3a-0556e5b7497a -l 0 --dhchap-secret DHHC-1:03:YjJkMDI3MTYwMzk0ZjM0Mzg1YTMxYTUxZDM0MmJmM2VkZmM2MWE2NzAyODM5NTY3NDM3YzQ3YWUzYjc2ZjJkOMNOdLE=:
00:20:02.520   19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:02.520  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:02.520   19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:20:02.520   19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:02.520   19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:02.520   19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:02.520   19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:20:02.520   19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:02.520   19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:20:02.520   19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:20:03.087   19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0
00:20:03.087   19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:03.087   19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:20:03.087   19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:20:03.087   19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:20:03.087   19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:03.087   19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:03.087   19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:03.087   19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:03.087   19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:03.087   19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:03.087   19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:03.087   19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:03.346  
00:20:03.346    19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:03.346    19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:03.346    19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:03.604   19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:03.604    19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:03.604    19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:03.604    19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:03.604    19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:03.604   19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:03.604  {
00:20:03.604  "auth": {
00:20:03.604  "dhgroup": "ffdhe2048",
00:20:03.604  "digest": "sha512",
00:20:03.604  "state": "completed"
00:20:03.604  },
00:20:03.604  "cntlid": 105,
00:20:03.604  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:20:03.604  "listen_address": {
00:20:03.604  "adrfam": "IPv4",
00:20:03.604  "traddr": "10.0.0.3",
00:20:03.604  "trsvcid": "4420",
00:20:03.604  "trtype": "TCP"
00:20:03.604  },
00:20:03.604  "peer_address": {
00:20:03.604  "adrfam": "IPv4",
00:20:03.604  "traddr": "10.0.0.1",
00:20:03.604  "trsvcid": "33994",
00:20:03.604  "trtype": "TCP"
00:20:03.604  },
00:20:03.604  "qid": 0,
00:20:03.604  "state": "enabled",
00:20:03.604  "thread": "nvmf_tgt_poll_group_000"
00:20:03.604  }
00:20:03.604  ]'
00:20:03.604    19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:03.604   19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:20:03.604    19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:03.604   19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:20:03.604    19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:03.604   19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:03.604   19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:03.604   19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:03.863   19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGE3YjViYjg1ZTlkZDBmOWYzZDNkZjcxY2U3ZmU4YTkwN2FhODA4ZjNjYzZhYzQ26pGyPg==: --dhchap-ctrl-secret DHHC-1:03:YTMxZDZiM2E3ODg3ZjEyZDhlMzIwZDdjMzk1MDM2N2EyMzQ4ZTU3YTIyOGJiYmFjMDczZmM3MWU0YTQzMWFjYtOtav0=:
00:20:03.863   19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid 8f716199-a3ae-4f70-9a3a-0556e5b7497a -l 0 --dhchap-secret DHHC-1:00:ZGE3YjViYjg1ZTlkZDBmOWYzZDNkZjcxY2U3ZmU4YTkwN2FhODA4ZjNjYzZhYzQ26pGyPg==: --dhchap-ctrl-secret DHHC-1:03:YTMxZDZiM2E3ODg3ZjEyZDhlMzIwZDdjMzk1MDM2N2EyMzQ4ZTU3YTIyOGJiYmFjMDczZmM3MWU0YTQzMWFjYtOtav0=:
00:20:04.429   19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:04.687  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:04.687   19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:20:04.687   19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:04.687   19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:04.687   19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:04.687   19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:04.687   19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:20:04.687   19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:20:04.946   19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1
00:20:04.946   19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:04.946   19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:20:04.946   19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:20:04.946   19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:20:04.946   19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:04.946   19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:04.946   19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:04.946   19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:04.946   19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:04.946   19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:04.946   19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:04.946   19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:05.204  
00:20:05.204    19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:05.204    19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:05.204    19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:05.463   19:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:05.463    19:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:05.463    19:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:05.463    19:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:05.463    19:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:05.463   19:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:05.463  {
00:20:05.463  "auth": {
00:20:05.463  "dhgroup": "ffdhe2048",
00:20:05.463  "digest": "sha512",
00:20:05.463  "state": "completed"
00:20:05.463  },
00:20:05.463  "cntlid": 107,
00:20:05.463  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:20:05.463  "listen_address": {
00:20:05.463  "adrfam": "IPv4",
00:20:05.463  "traddr": "10.0.0.3",
00:20:05.463  "trsvcid": "4420",
00:20:05.463  "trtype": "TCP"
00:20:05.463  },
00:20:05.463  "peer_address": {
00:20:05.463  "adrfam": "IPv4",
00:20:05.463  "traddr": "10.0.0.1",
00:20:05.463  "trsvcid": "34004",
00:20:05.463  "trtype": "TCP"
00:20:05.463  },
00:20:05.463  "qid": 0,
00:20:05.463  "state": "enabled",
00:20:05.463  "thread": "nvmf_tgt_poll_group_000"
00:20:05.463  }
00:20:05.463  ]'
00:20:05.463    19:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:05.721   19:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:20:05.721    19:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:05.721   19:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:20:05.721    19:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:05.721   19:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:05.721   19:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:05.721   19:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:05.980   19:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzFjY2Q5MDJmZTQ0MTc2NmVjNmNlNDg4ODFmODJiZDkwdlvM: --dhchap-ctrl-secret DHHC-1:02:OGQ2YzEzMWZmNzkwZWRiNjQ3ODY5ZGVlNWEyY2ZhMjc3NDZiMjllMmQ4NTI4NmEws3c3BA==:
00:20:05.980   19:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid 8f716199-a3ae-4f70-9a3a-0556e5b7497a -l 0 --dhchap-secret DHHC-1:01:MzFjY2Q5MDJmZTQ0MTc2NmVjNmNlNDg4ODFmODJiZDkwdlvM: --dhchap-ctrl-secret DHHC-1:02:OGQ2YzEzMWZmNzkwZWRiNjQ3ODY5ZGVlNWEyY2ZhMjc3NDZiMjllMmQ4NTI4NmEws3c3BA==:
00:20:06.548   19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:06.548  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:06.548   19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:20:06.548   19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:06.548   19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:06.548   19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:06.548   19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:06.548   19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:20:06.548   19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:20:06.824   19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2
00:20:06.824   19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:06.824   19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:20:06.824   19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:20:06.824   19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:20:06.824   19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:06.824   19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:06.824   19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:06.824   19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:06.824   19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:06.824   19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:06.824   19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:06.824   19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:07.109  
00:20:07.109    19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:07.109    19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:07.109    19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:07.367   19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:07.367    19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:07.367    19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:07.367    19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:07.367    19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:07.367   19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:07.367  {
00:20:07.367  "auth": {
00:20:07.367  "dhgroup": "ffdhe2048",
00:20:07.367  "digest": "sha512",
00:20:07.367  "state": "completed"
00:20:07.367  },
00:20:07.367  "cntlid": 109,
00:20:07.367  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:20:07.367  "listen_address": {
00:20:07.367  "adrfam": "IPv4",
00:20:07.367  "traddr": "10.0.0.3",
00:20:07.367  "trsvcid": "4420",
00:20:07.367  "trtype": "TCP"
00:20:07.367  },
00:20:07.367  "peer_address": {
00:20:07.367  "adrfam": "IPv4",
00:20:07.367  "traddr": "10.0.0.1",
00:20:07.367  "trsvcid": "37748",
00:20:07.367  "trtype": "TCP"
00:20:07.367  },
00:20:07.367  "qid": 0,
00:20:07.367  "state": "enabled",
00:20:07.367  "thread": "nvmf_tgt_poll_group_000"
00:20:07.367  }
00:20:07.367  ]'
00:20:07.367    19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:07.367   19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:20:07.367    19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:07.367   19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:20:07.367    19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:07.625   19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:07.625   19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:07.625   19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:07.625   19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzE5ZDMyY2Y1NTY0MGI2Nzk5ZWVjMjBiYmYyODllZWZkNTc1YTkzMGEwZWJkZDQ1YLrcsA==: --dhchap-ctrl-secret DHHC-1:01:N2Q3NGYzMDMzZWMzZDg1ZjhiYjRjMzg5Mzc0MzkyZjHYVRrk:
00:20:07.625   19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid 8f716199-a3ae-4f70-9a3a-0556e5b7497a -l 0 --dhchap-secret DHHC-1:02:MzE5ZDMyY2Y1NTY0MGI2Nzk5ZWVjMjBiYmYyODllZWZkNTc1YTkzMGEwZWJkZDQ1YLrcsA==: --dhchap-ctrl-secret DHHC-1:01:N2Q3NGYzMDMzZWMzZDg1ZjhiYjRjMzg5Mzc0MzkyZjHYVRrk:
00:20:08.192   19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:08.192  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:08.192   19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:20:08.192   19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:08.192   19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:08.192   19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:08.192   19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:08.192   19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:20:08.192   19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:20:08.450   19:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3
00:20:08.450   19:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:08.450   19:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:20:08.450   19:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:20:08.450   19:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:20:08.450   19:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:08.450   19:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key3
00:20:08.450   19:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:08.450   19:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:08.450   19:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:08.450   19:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:20:08.450   19:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:08.450   19:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:08.709  
00:20:08.967    19:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:08.967    19:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:08.967    19:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:08.967   19:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:08.967    19:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:08.967    19:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:08.967    19:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:09.226    19:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:09.226   19:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:09.226  {
00:20:09.226  "auth": {
00:20:09.226  "dhgroup": "ffdhe2048",
00:20:09.226  "digest": "sha512",
00:20:09.226  "state": "completed"
00:20:09.226  },
00:20:09.226  "cntlid": 111,
00:20:09.226  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:20:09.226  "listen_address": {
00:20:09.226  "adrfam": "IPv4",
00:20:09.226  "traddr": "10.0.0.3",
00:20:09.226  "trsvcid": "4420",
00:20:09.226  "trtype": "TCP"
00:20:09.226  },
00:20:09.226  "peer_address": {
00:20:09.226  "adrfam": "IPv4",
00:20:09.226  "traddr": "10.0.0.1",
00:20:09.226  "trsvcid": "37784",
00:20:09.226  "trtype": "TCP"
00:20:09.226  },
00:20:09.226  "qid": 0,
00:20:09.226  "state": "enabled",
00:20:09.226  "thread": "nvmf_tgt_poll_group_000"
00:20:09.226  }
00:20:09.226  ]'
00:20:09.226    19:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:09.226   19:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:20:09.226    19:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:09.226   19:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:20:09.226    19:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:09.226   19:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:09.226   19:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:09.226   19:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:09.485   19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjJkMDI3MTYwMzk0ZjM0Mzg1YTMxYTUxZDM0MmJmM2VkZmM2MWE2NzAyODM5NTY3NDM3YzQ3YWUzYjc2ZjJkOMNOdLE=:
00:20:09.485   19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid 8f716199-a3ae-4f70-9a3a-0556e5b7497a -l 0 --dhchap-secret DHHC-1:03:YjJkMDI3MTYwMzk0ZjM0Mzg1YTMxYTUxZDM0MmJmM2VkZmM2MWE2NzAyODM5NTY3NDM3YzQ3YWUzYjc2ZjJkOMNOdLE=:
00:20:10.052   19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:10.052  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:10.052   19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:20:10.052   19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:10.052   19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:10.052   19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:10.052   19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:20:10.052   19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:10.052   19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:20:10.052   19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:20:10.310   19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0
00:20:10.310   19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:10.310   19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:20:10.311   19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:20:10.311   19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:20:10.311   19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:10.311   19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:10.311   19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:10.311   19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:10.311   19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:10.311   19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:10.311   19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:10.311   19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:10.878  
00:20:10.878    19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:10.878    19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:10.878    19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:11.136   19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:11.136    19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:11.136    19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:11.136    19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:11.136    19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:11.136   19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:11.136  {
00:20:11.136  "auth": {
00:20:11.136  "dhgroup": "ffdhe3072",
00:20:11.136  "digest": "sha512",
00:20:11.136  "state": "completed"
00:20:11.136  },
00:20:11.136  "cntlid": 113,
00:20:11.136  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:20:11.136  "listen_address": {
00:20:11.136  "adrfam": "IPv4",
00:20:11.136  "traddr": "10.0.0.3",
00:20:11.136  "trsvcid": "4420",
00:20:11.136  "trtype": "TCP"
00:20:11.136  },
00:20:11.136  "peer_address": {
00:20:11.136  "adrfam": "IPv4",
00:20:11.136  "traddr": "10.0.0.1",
00:20:11.136  "trsvcid": "37820",
00:20:11.136  "trtype": "TCP"
00:20:11.136  },
00:20:11.136  "qid": 0,
00:20:11.136  "state": "enabled",
00:20:11.136  "thread": "nvmf_tgt_poll_group_000"
00:20:11.136  }
00:20:11.136  ]'
00:20:11.136    19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:11.136   19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:20:11.136    19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:11.136   19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:20:11.136    19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:11.136   19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:11.136   19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:11.136   19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:11.703   19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGE3YjViYjg1ZTlkZDBmOWYzZDNkZjcxY2U3ZmU4YTkwN2FhODA4ZjNjYzZhYzQ26pGyPg==: --dhchap-ctrl-secret DHHC-1:03:YTMxZDZiM2E3ODg3ZjEyZDhlMzIwZDdjMzk1MDM2N2EyMzQ4ZTU3YTIyOGJiYmFjMDczZmM3MWU0YTQzMWFjYtOtav0=:
00:20:11.703   19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid 8f716199-a3ae-4f70-9a3a-0556e5b7497a -l 0 --dhchap-secret DHHC-1:00:ZGE3YjViYjg1ZTlkZDBmOWYzZDNkZjcxY2U3ZmU4YTkwN2FhODA4ZjNjYzZhYzQ26pGyPg==: --dhchap-ctrl-secret DHHC-1:03:YTMxZDZiM2E3ODg3ZjEyZDhlMzIwZDdjMzk1MDM2N2EyMzQ4ZTU3YTIyOGJiYmFjMDczZmM3MWU0YTQzMWFjYtOtav0=:
00:20:12.270   19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:12.270  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:12.270   19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:20:12.270   19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:12.270   19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:12.270   19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:12.270   19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:12.270   19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:20:12.270   19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:20:12.529   19:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1
00:20:12.529   19:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:12.529   19:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:20:12.529   19:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:20:12.529   19:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:20:12.529   19:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:12.529   19:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:12.529   19:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:12.529   19:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:12.529   19:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:12.529   19:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:12.529   19:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:12.529   19:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:13.096  
00:20:13.096    19:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:13.096    19:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:13.096    19:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:13.355   19:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:13.355    19:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:13.355    19:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:13.355    19:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:13.355    19:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:13.355   19:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:13.355  {
00:20:13.355  "auth": {
00:20:13.355  "dhgroup": "ffdhe3072",
00:20:13.355  "digest": "sha512",
00:20:13.355  "state": "completed"
00:20:13.355  },
00:20:13.355  "cntlid": 115,
00:20:13.355  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:20:13.355  "listen_address": {
00:20:13.355  "adrfam": "IPv4",
00:20:13.355  "traddr": "10.0.0.3",
00:20:13.355  "trsvcid": "4420",
00:20:13.355  "trtype": "TCP"
00:20:13.355  },
00:20:13.355  "peer_address": {
00:20:13.355  "adrfam": "IPv4",
00:20:13.355  "traddr": "10.0.0.1",
00:20:13.355  "trsvcid": "37858",
00:20:13.355  "trtype": "TCP"
00:20:13.355  },
00:20:13.355  "qid": 0,
00:20:13.355  "state": "enabled",
00:20:13.355  "thread": "nvmf_tgt_poll_group_000"
00:20:13.355  }
00:20:13.355  ]'
00:20:13.355    19:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:13.355   19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:20:13.355    19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:13.355   19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:20:13.355    19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:13.355   19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:13.355   19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:13.355   19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:13.614   19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzFjY2Q5MDJmZTQ0MTc2NmVjNmNlNDg4ODFmODJiZDkwdlvM: --dhchap-ctrl-secret DHHC-1:02:OGQ2YzEzMWZmNzkwZWRiNjQ3ODY5ZGVlNWEyY2ZhMjc3NDZiMjllMmQ4NTI4NmEws3c3BA==:
00:20:13.614   19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid 8f716199-a3ae-4f70-9a3a-0556e5b7497a -l 0 --dhchap-secret DHHC-1:01:MzFjY2Q5MDJmZTQ0MTc2NmVjNmNlNDg4ODFmODJiZDkwdlvM: --dhchap-ctrl-secret DHHC-1:02:OGQ2YzEzMWZmNzkwZWRiNjQ3ODY5ZGVlNWEyY2ZhMjc3NDZiMjllMmQ4NTI4NmEws3c3BA==:
00:20:14.179   19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:14.179  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:14.179   19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:20:14.180   19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:14.180   19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:14.438   19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:14.438   19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:14.438   19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:20:14.438   19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:20:14.438   19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2
00:20:14.438   19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:14.438   19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:20:14.438   19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:20:14.438   19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:20:14.438   19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:14.438   19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:14.438   19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:14.438   19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:14.438   19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:14.438   19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:14.439   19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:14.439   19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:15.005  
00:20:15.005    19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:15.005    19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:15.005    19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:15.263   19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:15.263    19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:15.263    19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:15.263    19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:15.263    19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:15.263   19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:15.263  {
00:20:15.263  "auth": {
00:20:15.263  "dhgroup": "ffdhe3072",
00:20:15.263  "digest": "sha512",
00:20:15.263  "state": "completed"
00:20:15.263  },
00:20:15.263  "cntlid": 117,
00:20:15.263  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:20:15.263  "listen_address": {
00:20:15.263  "adrfam": "IPv4",
00:20:15.263  "traddr": "10.0.0.3",
00:20:15.263  "trsvcid": "4420",
00:20:15.263  "trtype": "TCP"
00:20:15.263  },
00:20:15.263  "peer_address": {
00:20:15.263  "adrfam": "IPv4",
00:20:15.263  "traddr": "10.0.0.1",
00:20:15.263  "trsvcid": "37892",
00:20:15.263  "trtype": "TCP"
00:20:15.263  },
00:20:15.263  "qid": 0,
00:20:15.263  "state": "enabled",
00:20:15.263  "thread": "nvmf_tgt_poll_group_000"
00:20:15.263  }
00:20:15.263  ]'
00:20:15.263    19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:15.263   19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:20:15.263    19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:15.263   19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:20:15.263    19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:15.263   19:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:15.263   19:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:15.263   19:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:15.522   19:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzE5ZDMyY2Y1NTY0MGI2Nzk5ZWVjMjBiYmYyODllZWZkNTc1YTkzMGEwZWJkZDQ1YLrcsA==: --dhchap-ctrl-secret DHHC-1:01:N2Q3NGYzMDMzZWMzZDg1ZjhiYjRjMzg5Mzc0MzkyZjHYVRrk:
00:20:15.522   19:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid 8f716199-a3ae-4f70-9a3a-0556e5b7497a -l 0 --dhchap-secret DHHC-1:02:MzE5ZDMyY2Y1NTY0MGI2Nzk5ZWVjMjBiYmYyODllZWZkNTc1YTkzMGEwZWJkZDQ1YLrcsA==: --dhchap-ctrl-secret DHHC-1:01:N2Q3NGYzMDMzZWMzZDg1ZjhiYjRjMzg5Mzc0MzkyZjHYVRrk:
00:20:16.455   19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:16.455  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:16.455   19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:20:16.455   19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:16.455   19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:16.455   19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:16.455   19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:16.455   19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:20:16.455   19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:20:16.455   19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3
00:20:16.455   19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:16.455   19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:20:16.455   19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:20:16.455   19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:20:16.455   19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:16.455   19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key3
00:20:16.455   19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:16.455   19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:16.455   19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:16.455   19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:20:16.455   19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:16.455   19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:17.021  
00:20:17.021    19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:17.021    19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:17.021    19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:17.021   19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:17.021    19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:17.021    19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:17.021    19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:17.021    19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:17.021   19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:17.021  {
00:20:17.021  "auth": {
00:20:17.021  "dhgroup": "ffdhe3072",
00:20:17.021  "digest": "sha512",
00:20:17.021  "state": "completed"
00:20:17.021  },
00:20:17.021  "cntlid": 119,
00:20:17.021  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:20:17.021  "listen_address": {
00:20:17.021  "adrfam": "IPv4",
00:20:17.021  "traddr": "10.0.0.3",
00:20:17.021  "trsvcid": "4420",
00:20:17.021  "trtype": "TCP"
00:20:17.021  },
00:20:17.021  "peer_address": {
00:20:17.021  "adrfam": "IPv4",
00:20:17.021  "traddr": "10.0.0.1",
00:20:17.021  "trsvcid": "52836",
00:20:17.021  "trtype": "TCP"
00:20:17.021  },
00:20:17.021  "qid": 0,
00:20:17.021  "state": "enabled",
00:20:17.021  "thread": "nvmf_tgt_poll_group_000"
00:20:17.021  }
00:20:17.021  ]'
00:20:17.021    19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:17.280   19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:20:17.280    19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:17.280   19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:20:17.280    19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:17.280   19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:17.280   19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:17.280   19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:17.538   19:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjJkMDI3MTYwMzk0ZjM0Mzg1YTMxYTUxZDM0MmJmM2VkZmM2MWE2NzAyODM5NTY3NDM3YzQ3YWUzYjc2ZjJkOMNOdLE=:
00:20:17.538   19:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid 8f716199-a3ae-4f70-9a3a-0556e5b7497a -l 0 --dhchap-secret DHHC-1:03:YjJkMDI3MTYwMzk0ZjM0Mzg1YTMxYTUxZDM0MmJmM2VkZmM2MWE2NzAyODM5NTY3NDM3YzQ3YWUzYjc2ZjJkOMNOdLE=:
00:20:18.105   19:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:18.105  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:18.363   19:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:20:18.363   19:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:18.363   19:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:18.363   19:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:18.363   19:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:20:18.363   19:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:18.363   19:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:20:18.363   19:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:20:18.621   19:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0
00:20:18.621   19:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:18.621   19:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:20:18.621   19:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:20:18.621   19:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:20:18.621   19:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:18.621   19:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:18.621   19:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:18.621   19:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:18.621   19:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:18.621   19:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:18.621   19:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:18.621   19:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:18.879  
00:20:18.879    19:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:18.879    19:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:18.879    19:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:19.137   19:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:19.137    19:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:19.137    19:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:19.137    19:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:19.137    19:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:19.137   19:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:19.137  {
00:20:19.137  "auth": {
00:20:19.137  "dhgroup": "ffdhe4096",
00:20:19.137  "digest": "sha512",
00:20:19.137  "state": "completed"
00:20:19.137  },
00:20:19.137  "cntlid": 121,
00:20:19.137  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:20:19.137  "listen_address": {
00:20:19.137  "adrfam": "IPv4",
00:20:19.137  "traddr": "10.0.0.3",
00:20:19.137  "trsvcid": "4420",
00:20:19.137  "trtype": "TCP"
00:20:19.137  },
00:20:19.137  "peer_address": {
00:20:19.137  "adrfam": "IPv4",
00:20:19.137  "traddr": "10.0.0.1",
00:20:19.137  "trsvcid": "52866",
00:20:19.137  "trtype": "TCP"
00:20:19.137  },
00:20:19.137  "qid": 0,
00:20:19.137  "state": "enabled",
00:20:19.137  "thread": "nvmf_tgt_poll_group_000"
00:20:19.137  }
00:20:19.137  ]'
00:20:19.137    19:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:19.137   19:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:20:19.137    19:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:19.394   19:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:20:19.394    19:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:19.394   19:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:19.394   19:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:19.394   19:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:19.652   19:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGE3YjViYjg1ZTlkZDBmOWYzZDNkZjcxY2U3ZmU4YTkwN2FhODA4ZjNjYzZhYzQ26pGyPg==: --dhchap-ctrl-secret DHHC-1:03:YTMxZDZiM2E3ODg3ZjEyZDhlMzIwZDdjMzk1MDM2N2EyMzQ4ZTU3YTIyOGJiYmFjMDczZmM3MWU0YTQzMWFjYtOtav0=:
00:20:19.652   19:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid 8f716199-a3ae-4f70-9a3a-0556e5b7497a -l 0 --dhchap-secret DHHC-1:00:ZGE3YjViYjg1ZTlkZDBmOWYzZDNkZjcxY2U3ZmU4YTkwN2FhODA4ZjNjYzZhYzQ26pGyPg==: --dhchap-ctrl-secret DHHC-1:03:YTMxZDZiM2E3ODg3ZjEyZDhlMzIwZDdjMzk1MDM2N2EyMzQ4ZTU3YTIyOGJiYmFjMDczZmM3MWU0YTQzMWFjYtOtav0=:
00:20:20.217   19:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:20.217  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:20.217   19:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:20:20.217   19:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:20.217   19:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:20.217   19:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:20.217   19:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:20.217   19:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:20:20.217   19:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:20:20.475   19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1
00:20:20.475   19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:20.475   19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:20:20.475   19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:20:20.475   19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:20:20.475   19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:20.475   19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:20.475   19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:20.475   19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:20.475   19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:20.475   19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:20.475   19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:20.475   19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:21.040  
00:20:21.040    19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:21.040    19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:21.040    19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:21.298   19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:21.298    19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:21.298    19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:21.298    19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:21.298    19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:21.298   19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:21.298  {
00:20:21.298  "auth": {
00:20:21.298  "dhgroup": "ffdhe4096",
00:20:21.298  "digest": "sha512",
00:20:21.298  "state": "completed"
00:20:21.298  },
00:20:21.298  "cntlid": 123,
00:20:21.298  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:20:21.298  "listen_address": {
00:20:21.298  "adrfam": "IPv4",
00:20:21.298  "traddr": "10.0.0.3",
00:20:21.298  "trsvcid": "4420",
00:20:21.298  "trtype": "TCP"
00:20:21.298  },
00:20:21.298  "peer_address": {
00:20:21.298  "adrfam": "IPv4",
00:20:21.298  "traddr": "10.0.0.1",
00:20:21.298  "trsvcid": "52894",
00:20:21.298  "trtype": "TCP"
00:20:21.298  },
00:20:21.298  "qid": 0,
00:20:21.298  "state": "enabled",
00:20:21.298  "thread": "nvmf_tgt_poll_group_000"
00:20:21.298  }
00:20:21.298  ]'
00:20:21.298    19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:21.298   19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:20:21.298    19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:21.298   19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:20:21.298    19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:21.298   19:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:21.298   19:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:21.298   19:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:21.557   19:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzFjY2Q5MDJmZTQ0MTc2NmVjNmNlNDg4ODFmODJiZDkwdlvM: --dhchap-ctrl-secret DHHC-1:02:OGQ2YzEzMWZmNzkwZWRiNjQ3ODY5ZGVlNWEyY2ZhMjc3NDZiMjllMmQ4NTI4NmEws3c3BA==:
00:20:21.557   19:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid 8f716199-a3ae-4f70-9a3a-0556e5b7497a -l 0 --dhchap-secret DHHC-1:01:MzFjY2Q5MDJmZTQ0MTc2NmVjNmNlNDg4ODFmODJiZDkwdlvM: --dhchap-ctrl-secret DHHC-1:02:OGQ2YzEzMWZmNzkwZWRiNjQ3ODY5ZGVlNWEyY2ZhMjc3NDZiMjllMmQ4NTI4NmEws3c3BA==:
00:20:22.123   19:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:22.123  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:22.123   19:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:20:22.123   19:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:22.123   19:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:22.123   19:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:22.123   19:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:22.123   19:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:20:22.123   19:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:20:22.382   19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2
00:20:22.382   19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:22.382   19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:20:22.382   19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:20:22.382   19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:20:22.382   19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:22.382   19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:22.382   19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:22.382   19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:22.382   19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:22.382   19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:22.382   19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:22.382   19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:22.641  
00:20:22.641    19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:22.641    19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:22.641    19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:23.208   19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:23.208    19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:23.208    19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:23.208    19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:23.208    19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:23.208   19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:23.208  {
00:20:23.208  "auth": {
00:20:23.208  "dhgroup": "ffdhe4096",
00:20:23.208  "digest": "sha512",
00:20:23.208  "state": "completed"
00:20:23.208  },
00:20:23.208  "cntlid": 125,
00:20:23.208  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:20:23.208  "listen_address": {
00:20:23.208  "adrfam": "IPv4",
00:20:23.208  "traddr": "10.0.0.3",
00:20:23.208  "trsvcid": "4420",
00:20:23.208  "trtype": "TCP"
00:20:23.208  },
00:20:23.208  "peer_address": {
00:20:23.208  "adrfam": "IPv4",
00:20:23.208  "traddr": "10.0.0.1",
00:20:23.208  "trsvcid": "52912",
00:20:23.208  "trtype": "TCP"
00:20:23.208  },
00:20:23.208  "qid": 0,
00:20:23.208  "state": "enabled",
00:20:23.208  "thread": "nvmf_tgt_poll_group_000"
00:20:23.208  }
00:20:23.208  ]'
00:20:23.208    19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:23.208   19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:20:23.208    19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:23.208   19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:20:23.208    19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:23.208   19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:23.208   19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:23.208   19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:23.511   19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzE5ZDMyY2Y1NTY0MGI2Nzk5ZWVjMjBiYmYyODllZWZkNTc1YTkzMGEwZWJkZDQ1YLrcsA==: --dhchap-ctrl-secret DHHC-1:01:N2Q3NGYzMDMzZWMzZDg1ZjhiYjRjMzg5Mzc0MzkyZjHYVRrk:
00:20:23.511   19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid 8f716199-a3ae-4f70-9a3a-0556e5b7497a -l 0 --dhchap-secret DHHC-1:02:MzE5ZDMyY2Y1NTY0MGI2Nzk5ZWVjMjBiYmYyODllZWZkNTc1YTkzMGEwZWJkZDQ1YLrcsA==: --dhchap-ctrl-secret DHHC-1:01:N2Q3NGYzMDMzZWMzZDg1ZjhiYjRjMzg5Mzc0MzkyZjHYVRrk:
00:20:24.111   19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:24.369  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:24.369   19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:20:24.369   19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:24.369   19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:24.369   19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:24.369   19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:24.369   19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:20:24.369   19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:20:24.627   19:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3
00:20:24.627   19:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:24.627   19:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:20:24.627   19:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:20:24.627   19:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:20:24.627   19:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:24.627   19:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key3
00:20:24.627   19:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:24.627   19:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:24.627   19:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:24.627   19:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:20:24.627   19:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:24.627   19:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:24.885  
00:20:24.885    19:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:24.885    19:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:24.885    19:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:25.451   19:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:25.451    19:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:25.451    19:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:25.451    19:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:25.451    19:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:25.451   19:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:25.451  {
00:20:25.451  "auth": {
00:20:25.451  "dhgroup": "ffdhe4096",
00:20:25.451  "digest": "sha512",
00:20:25.451  "state": "completed"
00:20:25.451  },
00:20:25.451  "cntlid": 127,
00:20:25.451  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:20:25.451  "listen_address": {
00:20:25.451  "adrfam": "IPv4",
00:20:25.451  "traddr": "10.0.0.3",
00:20:25.451  "trsvcid": "4420",
00:20:25.451  "trtype": "TCP"
00:20:25.451  },
00:20:25.451  "peer_address": {
00:20:25.451  "adrfam": "IPv4",
00:20:25.451  "traddr": "10.0.0.1",
00:20:25.451  "trsvcid": "52942",
00:20:25.451  "trtype": "TCP"
00:20:25.451  },
00:20:25.451  "qid": 0,
00:20:25.451  "state": "enabled",
00:20:25.451  "thread": "nvmf_tgt_poll_group_000"
00:20:25.451  }
00:20:25.451  ]'
00:20:25.451    19:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:25.451   19:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:20:25.451    19:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:25.451   19:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:20:25.451    19:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:25.451   19:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:25.451   19:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:25.451   19:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:25.709   19:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjJkMDI3MTYwMzk0ZjM0Mzg1YTMxYTUxZDM0MmJmM2VkZmM2MWE2NzAyODM5NTY3NDM3YzQ3YWUzYjc2ZjJkOMNOdLE=:
00:20:25.709   19:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid 8f716199-a3ae-4f70-9a3a-0556e5b7497a -l 0 --dhchap-secret DHHC-1:03:YjJkMDI3MTYwMzk0ZjM0Mzg1YTMxYTUxZDM0MmJmM2VkZmM2MWE2NzAyODM5NTY3NDM3YzQ3YWUzYjc2ZjJkOMNOdLE=:
00:20:26.644   19:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:26.644  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:26.644   19:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:20:26.644   19:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:26.644   19:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:26.644   19:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:26.644   19:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:20:26.644   19:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:26.644   19:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:20:26.644   19:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:20:26.644   19:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0
00:20:26.644   19:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:26.644   19:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:20:26.644   19:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:20:26.644   19:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:20:26.644   19:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:26.644   19:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:26.644   19:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:26.644   19:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:26.644   19:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:26.644   19:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:26.644   19:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:26.644   19:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:27.210  
00:20:27.210    19:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:27.210    19:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:27.210    19:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:27.469   19:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:27.469    19:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:27.469    19:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:27.469    19:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:27.469    19:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:27.469   19:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:27.469  {
00:20:27.469  "auth": {
00:20:27.469  "dhgroup": "ffdhe6144",
00:20:27.469  "digest": "sha512",
00:20:27.469  "state": "completed"
00:20:27.469  },
00:20:27.469  "cntlid": 129,
00:20:27.469  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:20:27.469  "listen_address": {
00:20:27.469  "adrfam": "IPv4",
00:20:27.469  "traddr": "10.0.0.3",
00:20:27.469  "trsvcid": "4420",
00:20:27.469  "trtype": "TCP"
00:20:27.469  },
00:20:27.469  "peer_address": {
00:20:27.469  "adrfam": "IPv4",
00:20:27.469  "traddr": "10.0.0.1",
00:20:27.469  "trsvcid": "39990",
00:20:27.469  "trtype": "TCP"
00:20:27.469  },
00:20:27.469  "qid": 0,
00:20:27.469  "state": "enabled",
00:20:27.469  "thread": "nvmf_tgt_poll_group_000"
00:20:27.469  }
00:20:27.469  ]'
00:20:27.469    19:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:27.469   19:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:20:27.469    19:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:27.469   19:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:20:27.469    19:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:27.728   19:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:27.728   19:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:27.728   19:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:27.728   19:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGE3YjViYjg1ZTlkZDBmOWYzZDNkZjcxY2U3ZmU4YTkwN2FhODA4ZjNjYzZhYzQ26pGyPg==: --dhchap-ctrl-secret DHHC-1:03:YTMxZDZiM2E3ODg3ZjEyZDhlMzIwZDdjMzk1MDM2N2EyMzQ4ZTU3YTIyOGJiYmFjMDczZmM3MWU0YTQzMWFjYtOtav0=:
00:20:27.728   19:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid 8f716199-a3ae-4f70-9a3a-0556e5b7497a -l 0 --dhchap-secret DHHC-1:00:ZGE3YjViYjg1ZTlkZDBmOWYzZDNkZjcxY2U3ZmU4YTkwN2FhODA4ZjNjYzZhYzQ26pGyPg==: --dhchap-ctrl-secret DHHC-1:03:YTMxZDZiM2E3ODg3ZjEyZDhlMzIwZDdjMzk1MDM2N2EyMzQ4ZTU3YTIyOGJiYmFjMDczZmM3MWU0YTQzMWFjYtOtav0=:
00:20:28.662   19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:28.662  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:28.662   19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:20:28.662   19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:28.662   19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:28.662   19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:28.662   19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:28.662   19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:20:28.662   19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:20:28.662   19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1
00:20:28.662   19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:28.662   19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:20:28.662   19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:20:28.662   19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:20:28.662   19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:28.662   19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:28.662   19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:28.662   19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:28.921   19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:28.921   19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:28.921   19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:28.921   19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:29.179  
00:20:29.179    19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:29.179    19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:29.179    19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:29.438   19:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:29.438    19:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:29.438    19:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:29.438    19:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:29.696    19:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:29.696   19:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:29.696  {
00:20:29.696  "auth": {
00:20:29.696  "dhgroup": "ffdhe6144",
00:20:29.696  "digest": "sha512",
00:20:29.696  "state": "completed"
00:20:29.696  },
00:20:29.696  "cntlid": 131,
00:20:29.696  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:20:29.696  "listen_address": {
00:20:29.696  "adrfam": "IPv4",
00:20:29.696  "traddr": "10.0.0.3",
00:20:29.696  "trsvcid": "4420",
00:20:29.696  "trtype": "TCP"
00:20:29.696  },
00:20:29.696  "peer_address": {
00:20:29.696  "adrfam": "IPv4",
00:20:29.696  "traddr": "10.0.0.1",
00:20:29.696  "trsvcid": "40030",
00:20:29.696  "trtype": "TCP"
00:20:29.696  },
00:20:29.696  "qid": 0,
00:20:29.696  "state": "enabled",
00:20:29.696  "thread": "nvmf_tgt_poll_group_000"
00:20:29.696  }
00:20:29.696  ]'
00:20:29.696    19:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:29.696   19:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:20:29.696    19:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:29.696   19:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:20:29.696    19:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:29.696   19:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:29.696   19:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:29.696   19:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:29.955   19:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzFjY2Q5MDJmZTQ0MTc2NmVjNmNlNDg4ODFmODJiZDkwdlvM: --dhchap-ctrl-secret DHHC-1:02:OGQ2YzEzMWZmNzkwZWRiNjQ3ODY5ZGVlNWEyY2ZhMjc3NDZiMjllMmQ4NTI4NmEws3c3BA==:
00:20:29.955   19:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid 8f716199-a3ae-4f70-9a3a-0556e5b7497a -l 0 --dhchap-secret DHHC-1:01:MzFjY2Q5MDJmZTQ0MTc2NmVjNmNlNDg4ODFmODJiZDkwdlvM: --dhchap-ctrl-secret DHHC-1:02:OGQ2YzEzMWZmNzkwZWRiNjQ3ODY5ZGVlNWEyY2ZhMjc3NDZiMjllMmQ4NTI4NmEws3c3BA==:
00:20:30.522   19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:30.522  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:30.522   19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:20:30.522   19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:30.522   19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:30.522   19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:30.522   19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:30.522   19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:20:30.522   19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:20:30.780   19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2
00:20:30.780   19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:30.780   19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:20:30.780   19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:20:30.780   19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:20:30.780   19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:30.780   19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:30.780   19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:30.780   19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:30.780   19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:30.780   19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:30.780   19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:30.780   19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:31.347  
00:20:31.347    19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:31.347    19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:31.347    19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:31.605   19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:31.605    19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:31.605    19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:31.605    19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:31.605    19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:31.605   19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:31.605  {
00:20:31.605  "auth": {
00:20:31.605  "dhgroup": "ffdhe6144",
00:20:31.605  "digest": "sha512",
00:20:31.605  "state": "completed"
00:20:31.605  },
00:20:31.605  "cntlid": 133,
00:20:31.605  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:20:31.605  "listen_address": {
00:20:31.605  "adrfam": "IPv4",
00:20:31.605  "traddr": "10.0.0.3",
00:20:31.605  "trsvcid": "4420",
00:20:31.605  "trtype": "TCP"
00:20:31.605  },
00:20:31.605  "peer_address": {
00:20:31.605  "adrfam": "IPv4",
00:20:31.605  "traddr": "10.0.0.1",
00:20:31.605  "trsvcid": "40060",
00:20:31.605  "trtype": "TCP"
00:20:31.605  },
00:20:31.605  "qid": 0,
00:20:31.605  "state": "enabled",
00:20:31.605  "thread": "nvmf_tgt_poll_group_000"
00:20:31.605  }
00:20:31.605  ]'
00:20:31.605    19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:31.605   19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:20:31.605    19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:31.605   19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:20:31.605    19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:31.605   19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:31.605   19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:31.605   19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:32.171   19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzE5ZDMyY2Y1NTY0MGI2Nzk5ZWVjMjBiYmYyODllZWZkNTc1YTkzMGEwZWJkZDQ1YLrcsA==: --dhchap-ctrl-secret DHHC-1:01:N2Q3NGYzMDMzZWMzZDg1ZjhiYjRjMzg5Mzc0MzkyZjHYVRrk:
00:20:32.171   19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid 8f716199-a3ae-4f70-9a3a-0556e5b7497a -l 0 --dhchap-secret DHHC-1:02:MzE5ZDMyY2Y1NTY0MGI2Nzk5ZWVjMjBiYmYyODllZWZkNTc1YTkzMGEwZWJkZDQ1YLrcsA==: --dhchap-ctrl-secret DHHC-1:01:N2Q3NGYzMDMzZWMzZDg1ZjhiYjRjMzg5Mzc0MzkyZjHYVRrk:
00:20:32.737   19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:32.737  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:32.737   19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:20:32.737   19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:32.737   19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:32.737   19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:32.737   19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:32.737   19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:20:32.737   19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:20:32.996   19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3
00:20:32.996   19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:32.996   19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:20:32.996   19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:20:32.996   19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:20:32.996   19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:32.996   19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key3
00:20:32.996   19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:32.996   19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:32.996   19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:32.996   19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:20:32.996   19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:32.996   19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:33.254  
00:20:33.254    19:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:33.254    19:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:33.254    19:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:33.820   19:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:33.820    19:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:33.820    19:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:33.820    19:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:33.820    19:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:33.820   19:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:33.820  {
00:20:33.820  "auth": {
00:20:33.820  "dhgroup": "ffdhe6144",
00:20:33.820  "digest": "sha512",
00:20:33.820  "state": "completed"
00:20:33.820  },
00:20:33.820  "cntlid": 135,
00:20:33.820  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:20:33.820  "listen_address": {
00:20:33.820  "adrfam": "IPv4",
00:20:33.820  "traddr": "10.0.0.3",
00:20:33.820  "trsvcid": "4420",
00:20:33.820  "trtype": "TCP"
00:20:33.820  },
00:20:33.820  "peer_address": {
00:20:33.820  "adrfam": "IPv4",
00:20:33.820  "traddr": "10.0.0.1",
00:20:33.820  "trsvcid": "40084",
00:20:33.820  "trtype": "TCP"
00:20:33.820  },
00:20:33.820  "qid": 0,
00:20:33.820  "state": "enabled",
00:20:33.820  "thread": "nvmf_tgt_poll_group_000"
00:20:33.820  }
00:20:33.820  ]'
00:20:33.820    19:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:33.820   19:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:20:33.820    19:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:33.820   19:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:20:33.820    19:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:33.820   19:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:33.820   19:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:33.820   19:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:34.078   19:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjJkMDI3MTYwMzk0ZjM0Mzg1YTMxYTUxZDM0MmJmM2VkZmM2MWE2NzAyODM5NTY3NDM3YzQ3YWUzYjc2ZjJkOMNOdLE=:
00:20:34.078   19:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid 8f716199-a3ae-4f70-9a3a-0556e5b7497a -l 0 --dhchap-secret DHHC-1:03:YjJkMDI3MTYwMzk0ZjM0Mzg1YTMxYTUxZDM0MmJmM2VkZmM2MWE2NzAyODM5NTY3NDM3YzQ3YWUzYjc2ZjJkOMNOdLE=:
00:20:34.645   19:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:34.645  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:34.645   19:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:20:34.645   19:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:34.645   19:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:34.645   19:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:34.645   19:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:20:34.645   19:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:34.645   19:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:20:34.645   19:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:20:34.904   19:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0
00:20:34.904   19:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:34.904   19:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:20:34.904   19:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:20:34.904   19:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:20:34.904   19:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:34.904   19:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:34.904   19:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:34.904   19:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:34.904   19:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:34.904   19:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:34.904   19:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:34.904   19:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:35.471  
00:20:35.471    19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:35.471    19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:35.471    19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:35.729   19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:35.729    19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:35.729    19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:35.729    19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:35.729    19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:35.729   19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:35.729  {
00:20:35.729  "auth": {
00:20:35.729  "dhgroup": "ffdhe8192",
00:20:35.729  "digest": "sha512",
00:20:35.729  "state": "completed"
00:20:35.729  },
00:20:35.729  "cntlid": 137,
00:20:35.729  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:20:35.729  "listen_address": {
00:20:35.729  "adrfam": "IPv4",
00:20:35.729  "traddr": "10.0.0.3",
00:20:35.729  "trsvcid": "4420",
00:20:35.729  "trtype": "TCP"
00:20:35.729  },
00:20:35.729  "peer_address": {
00:20:35.729  "adrfam": "IPv4",
00:20:35.729  "traddr": "10.0.0.1",
00:20:35.729  "trsvcid": "40106",
00:20:35.729  "trtype": "TCP"
00:20:35.729  },
00:20:35.729  "qid": 0,
00:20:35.729  "state": "enabled",
00:20:35.729  "thread": "nvmf_tgt_poll_group_000"
00:20:35.729  }
00:20:35.729  ]'
00:20:35.729    19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:35.729   19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:20:35.729    19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:35.729   19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:20:35.729    19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:35.987   19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:35.987   19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:35.987   19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:36.245   19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGE3YjViYjg1ZTlkZDBmOWYzZDNkZjcxY2U3ZmU4YTkwN2FhODA4ZjNjYzZhYzQ26pGyPg==: --dhchap-ctrl-secret DHHC-1:03:YTMxZDZiM2E3ODg3ZjEyZDhlMzIwZDdjMzk1MDM2N2EyMzQ4ZTU3YTIyOGJiYmFjMDczZmM3MWU0YTQzMWFjYtOtav0=:
00:20:36.245   19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid 8f716199-a3ae-4f70-9a3a-0556e5b7497a -l 0 --dhchap-secret DHHC-1:00:ZGE3YjViYjg1ZTlkZDBmOWYzZDNkZjcxY2U3ZmU4YTkwN2FhODA4ZjNjYzZhYzQ26pGyPg==: --dhchap-ctrl-secret DHHC-1:03:YTMxZDZiM2E3ODg3ZjEyZDhlMzIwZDdjMzk1MDM2N2EyMzQ4ZTU3YTIyOGJiYmFjMDczZmM3MWU0YTQzMWFjYtOtav0=:
00:20:36.811   19:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:36.811  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:36.811   19:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:20:36.811   19:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:36.811   19:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:36.811   19:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:36.811   19:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:36.811   19:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:20:36.811   19:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:20:37.069   19:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1
00:20:37.069   19:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:37.069   19:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:20:37.069   19:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:20:37.069   19:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:20:37.069   19:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:37.069   19:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:37.069   19:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:37.069   19:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:37.069   19:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:37.069   19:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:37.069   19:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:37.069   19:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:37.635  
00:20:37.635    19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:37.635    19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:37.635    19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:37.893   19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:37.893    19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:37.893    19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:37.893    19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:37.893    19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:37.893   19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:37.893  {
00:20:37.893  "auth": {
00:20:37.893  "dhgroup": "ffdhe8192",
00:20:37.893  "digest": "sha512",
00:20:37.893  "state": "completed"
00:20:37.893  },
00:20:37.893  "cntlid": 139,
00:20:37.893  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:20:37.893  "listen_address": {
00:20:37.893  "adrfam": "IPv4",
00:20:37.893  "traddr": "10.0.0.3",
00:20:37.893  "trsvcid": "4420",
00:20:37.893  "trtype": "TCP"
00:20:37.893  },
00:20:37.893  "peer_address": {
00:20:37.893  "adrfam": "IPv4",
00:20:37.893  "traddr": "10.0.0.1",
00:20:37.893  "trsvcid": "50550",
00:20:37.893  "trtype": "TCP"
00:20:37.893  },
00:20:37.893  "qid": 0,
00:20:37.893  "state": "enabled",
00:20:37.893  "thread": "nvmf_tgt_poll_group_000"
00:20:37.893  }
00:20:37.893  ]'
00:20:37.893    19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:37.893   19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:20:37.893    19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:37.893   19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:20:37.893    19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:37.893   19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:37.893   19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:37.893   19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:38.151   19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzFjY2Q5MDJmZTQ0MTc2NmVjNmNlNDg4ODFmODJiZDkwdlvM: --dhchap-ctrl-secret DHHC-1:02:OGQ2YzEzMWZmNzkwZWRiNjQ3ODY5ZGVlNWEyY2ZhMjc3NDZiMjllMmQ4NTI4NmEws3c3BA==:
00:20:38.152   19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid 8f716199-a3ae-4f70-9a3a-0556e5b7497a -l 0 --dhchap-secret DHHC-1:01:MzFjY2Q5MDJmZTQ0MTc2NmVjNmNlNDg4ODFmODJiZDkwdlvM: --dhchap-ctrl-secret DHHC-1:02:OGQ2YzEzMWZmNzkwZWRiNjQ3ODY5ZGVlNWEyY2ZhMjc3NDZiMjllMmQ4NTI4NmEws3c3BA==:
00:20:39.122   19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:39.122  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:39.122   19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:20:39.123   19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:39.123   19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:39.123   19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:39.123   19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:39.123   19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:20:39.123   19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:20:39.123   19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2
00:20:39.123   19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:39.123   19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:20:39.123   19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:20:39.123   19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:20:39.123   19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:39.123   19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:39.123   19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:39.123   19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:39.123   19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:39.123   19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:39.123   19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:39.123   19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:39.690  
00:20:39.690    19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:39.690    19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:39.690    19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:39.949   19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:39.949    19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:39.949    19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:39.949    19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:39.949    19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:39.949   19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:39.949  {
00:20:39.949  "auth": {
00:20:39.949  "dhgroup": "ffdhe8192",
00:20:39.949  "digest": "sha512",
00:20:39.949  "state": "completed"
00:20:39.949  },
00:20:39.949  "cntlid": 141,
00:20:39.949  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:20:39.949  "listen_address": {
00:20:39.949  "adrfam": "IPv4",
00:20:39.949  "traddr": "10.0.0.3",
00:20:39.949  "trsvcid": "4420",
00:20:39.949  "trtype": "TCP"
00:20:39.949  },
00:20:39.949  "peer_address": {
00:20:39.949  "adrfam": "IPv4",
00:20:39.949  "traddr": "10.0.0.1",
00:20:39.949  "trsvcid": "50570",
00:20:39.949  "trtype": "TCP"
00:20:39.949  },
00:20:39.949  "qid": 0,
00:20:39.949  "state": "enabled",
00:20:39.949  "thread": "nvmf_tgt_poll_group_000"
00:20:39.949  }
00:20:39.949  ]'
00:20:39.949    19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:40.208   19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:20:40.208    19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:40.208   19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:20:40.208    19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:40.208   19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:40.208   19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:40.208   19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:40.467   19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzE5ZDMyY2Y1NTY0MGI2Nzk5ZWVjMjBiYmYyODllZWZkNTc1YTkzMGEwZWJkZDQ1YLrcsA==: --dhchap-ctrl-secret DHHC-1:01:N2Q3NGYzMDMzZWMzZDg1ZjhiYjRjMzg5Mzc0MzkyZjHYVRrk:
00:20:40.467   19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid 8f716199-a3ae-4f70-9a3a-0556e5b7497a -l 0 --dhchap-secret DHHC-1:02:MzE5ZDMyY2Y1NTY0MGI2Nzk5ZWVjMjBiYmYyODllZWZkNTc1YTkzMGEwZWJkZDQ1YLrcsA==: --dhchap-ctrl-secret DHHC-1:01:N2Q3NGYzMDMzZWMzZDg1ZjhiYjRjMzg5Mzc0MzkyZjHYVRrk:
00:20:41.034   19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:41.034  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:41.034   19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:20:41.034   19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:41.034   19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:41.034   19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:41.034   19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:41.034   19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:20:41.034   19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:20:41.293   19:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3
00:20:41.293   19:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:41.293   19:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:20:41.293   19:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:20:41.293   19:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:20:41.293   19:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:41.293   19:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key3
00:20:41.293   19:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:41.293   19:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:41.293   19:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:41.293   19:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:20:41.293   19:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:41.293   19:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:41.860  
00:20:41.860    19:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:41.860    19:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:41.860    19:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:42.119   19:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:42.119    19:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:42.119    19:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:42.119    19:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:42.119    19:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:42.119   19:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:42.119  {
00:20:42.119  "auth": {
00:20:42.119  "dhgroup": "ffdhe8192",
00:20:42.119  "digest": "sha512",
00:20:42.119  "state": "completed"
00:20:42.119  },
00:20:42.119  "cntlid": 143,
00:20:42.119  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:20:42.119  "listen_address": {
00:20:42.119  "adrfam": "IPv4",
00:20:42.119  "traddr": "10.0.0.3",
00:20:42.119  "trsvcid": "4420",
00:20:42.119  "trtype": "TCP"
00:20:42.119  },
00:20:42.119  "peer_address": {
00:20:42.119  "adrfam": "IPv4",
00:20:42.119  "traddr": "10.0.0.1",
00:20:42.119  "trsvcid": "50596",
00:20:42.119  "trtype": "TCP"
00:20:42.119  },
00:20:42.119  "qid": 0,
00:20:42.119  "state": "enabled",
00:20:42.119  "thread": "nvmf_tgt_poll_group_000"
00:20:42.119  }
00:20:42.119  ]'
00:20:42.119    19:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:42.378   19:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:20:42.379    19:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:42.379   19:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:20:42.379    19:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:42.379   19:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:42.379   19:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:42.379   19:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:42.637   19:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjJkMDI3MTYwMzk0ZjM0Mzg1YTMxYTUxZDM0MmJmM2VkZmM2MWE2NzAyODM5NTY3NDM3YzQ3YWUzYjc2ZjJkOMNOdLE=:
00:20:42.637   19:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid 8f716199-a3ae-4f70-9a3a-0556e5b7497a -l 0 --dhchap-secret DHHC-1:03:YjJkMDI3MTYwMzk0ZjM0Mzg1YTMxYTUxZDM0MmJmM2VkZmM2MWE2NzAyODM5NTY3NDM3YzQ3YWUzYjc2ZjJkOMNOdLE=:
00:20:43.205   19:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:43.205  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:43.205   19:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:20:43.205   19:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:43.205   19:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:43.205   19:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:43.205    19:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=,
00:20:43.205    19:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512
00:20:43.205    19:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=,
00:20:43.205    19:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:20:43.205   19:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:20:43.205   19:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:20:43.472   19:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0
00:20:43.472   19:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:43.472   19:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:20:43.472   19:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:20:43.472   19:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:20:43.472   19:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:43.472   19:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:43.472   19:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:43.472   19:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:43.472   19:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:43.472   19:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:43.472   19:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:43.472   19:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:44.043  
00:20:44.043    19:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:44.043    19:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:44.043    19:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:44.301   19:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:44.301    19:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:44.301    19:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:44.302    19:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:44.302    19:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:44.302   19:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:44.302  {
00:20:44.302  "auth": {
00:20:44.302  "dhgroup": "ffdhe8192",
00:20:44.302  "digest": "sha512",
00:20:44.302  "state": "completed"
00:20:44.302  },
00:20:44.302  "cntlid": 145,
00:20:44.302  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:20:44.302  "listen_address": {
00:20:44.302  "adrfam": "IPv4",
00:20:44.302  "traddr": "10.0.0.3",
00:20:44.302  "trsvcid": "4420",
00:20:44.302  "trtype": "TCP"
00:20:44.302  },
00:20:44.302  "peer_address": {
00:20:44.302  "adrfam": "IPv4",
00:20:44.302  "traddr": "10.0.0.1",
00:20:44.302  "trsvcid": "50614",
00:20:44.302  "trtype": "TCP"
00:20:44.302  },
00:20:44.302  "qid": 0,
00:20:44.302  "state": "enabled",
00:20:44.302  "thread": "nvmf_tgt_poll_group_000"
00:20:44.302  }
00:20:44.302  ]'
00:20:44.302    19:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:44.560   19:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:20:44.560    19:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:44.560   19:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:20:44.560    19:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:44.560   19:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:44.560   19:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:44.560   19:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:44.818   19:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGE3YjViYjg1ZTlkZDBmOWYzZDNkZjcxY2U3ZmU4YTkwN2FhODA4ZjNjYzZhYzQ26pGyPg==: --dhchap-ctrl-secret DHHC-1:03:YTMxZDZiM2E3ODg3ZjEyZDhlMzIwZDdjMzk1MDM2N2EyMzQ4ZTU3YTIyOGJiYmFjMDczZmM3MWU0YTQzMWFjYtOtav0=:
00:20:44.818   19:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid 8f716199-a3ae-4f70-9a3a-0556e5b7497a -l 0 --dhchap-secret DHHC-1:00:ZGE3YjViYjg1ZTlkZDBmOWYzZDNkZjcxY2U3ZmU4YTkwN2FhODA4ZjNjYzZhYzQ26pGyPg==: --dhchap-ctrl-secret DHHC-1:03:YTMxZDZiM2E3ODg3ZjEyZDhlMzIwZDdjMzk1MDM2N2EyMzQ4ZTU3YTIyOGJiYmFjMDczZmM3MWU0YTQzMWFjYtOtav0=:
00:20:45.385   19:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:45.385  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:45.385   19:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:20:45.385   19:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:45.385   19:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:45.385   19:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:45.385   19:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key1
00:20:45.385   19:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:45.385   19:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:45.385   19:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:45.385   19:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2
00:20:45.385   19:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:20:45.385   19:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2
00:20:45.385   19:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:20:45.385   19:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:45.385    19:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:20:45.385   19:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:45.385   19:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2
00:20:45.385   19:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2
00:20:45.385   19:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2
00:20:45.952  2024/12/13 19:05:17 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error
00:20:45.952  request:
00:20:45.952  {
00:20:45.952    "method": "bdev_nvme_attach_controller",
00:20:45.952    "params": {
00:20:45.952      "name": "nvme0",
00:20:45.952      "trtype": "tcp",
00:20:45.952      "traddr": "10.0.0.3",
00:20:45.952      "adrfam": "ipv4",
00:20:45.952      "trsvcid": "4420",
00:20:45.952      "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:20:45.952      "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:20:45.952      "prchk_reftag": false,
00:20:45.952      "prchk_guard": false,
00:20:45.952      "hdgst": false,
00:20:45.952      "ddgst": false,
00:20:45.952      "dhchap_key": "key2",
00:20:45.952      "allow_unrecognized_csi": false
00:20:45.952    }
00:20:45.952  }
00:20:45.952  Got JSON-RPC error response
00:20:45.952  GoRPCClient: error on JSON-RPC call
00:20:45.952   19:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:20:45.952   19:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:20:45.952   19:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:20:45.952   19:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:20:45.952   19:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:20:45.952   19:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:45.952   19:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:45.952   19:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:45.952   19:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:45.952   19:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:45.952   19:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:45.952   19:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:45.952   19:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:20:45.952   19:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:20:45.952   19:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:20:45.952   19:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:20:45.952   19:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:45.952    19:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:20:45.952   19:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:45.952   19:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:20:45.952   19:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:20:45.952   19:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:20:46.519  2024/12/13 19:05:18 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error
00:20:46.519  request:
00:20:46.519  {
00:20:46.520    "method": "bdev_nvme_attach_controller",
00:20:46.520    "params": {
00:20:46.520      "name": "nvme0",
00:20:46.520      "trtype": "tcp",
00:20:46.520      "traddr": "10.0.0.3",
00:20:46.520      "adrfam": "ipv4",
00:20:46.520      "trsvcid": "4420",
00:20:46.520      "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:20:46.520      "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:20:46.520      "prchk_reftag": false,
00:20:46.520      "prchk_guard": false,
00:20:46.520      "hdgst": false,
00:20:46.520      "ddgst": false,
00:20:46.520      "dhchap_key": "key1",
00:20:46.520      "dhchap_ctrlr_key": "ckey2",
00:20:46.520      "allow_unrecognized_csi": false
00:20:46.520    }
00:20:46.520  }
00:20:46.520  Got JSON-RPC error response
00:20:46.520  GoRPCClient: error on JSON-RPC call
00:20:46.520   19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:20:46.520   19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:20:46.520   19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:20:46.520   19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:20:46.520   19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:20:46.520   19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:46.520   19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:46.520   19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:46.520   19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key1
00:20:46.520   19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:46.520   19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:46.520   19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:46.520   19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:46.520   19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:20:46.520   19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:46.520   19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:20:46.520   19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:46.520    19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:20:46.520   19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:46.520   19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:46.520   19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:46.520   19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:47.087  2024/12/13 19:05:18 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey1 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error
00:20:47.087  request:
00:20:47.087  {
00:20:47.087    "method": "bdev_nvme_attach_controller",
00:20:47.087    "params": {
00:20:47.087      "name": "nvme0",
00:20:47.087      "trtype": "tcp",
00:20:47.087      "traddr": "10.0.0.3",
00:20:47.087      "adrfam": "ipv4",
00:20:47.087      "trsvcid": "4420",
00:20:47.087      "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:20:47.087      "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:20:47.087      "prchk_reftag": false,
00:20:47.087      "prchk_guard": false,
00:20:47.087      "hdgst": false,
00:20:47.087      "ddgst": false,
00:20:47.087      "dhchap_key": "key1",
00:20:47.087      "dhchap_ctrlr_key": "ckey1",
00:20:47.087      "allow_unrecognized_csi": false
00:20:47.087    }
00:20:47.087  }
00:20:47.087  Got JSON-RPC error response
00:20:47.087  GoRPCClient: error on JSON-RPC call
00:20:47.087   19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:20:47.087   19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:20:47.087   19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:20:47.088   19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:20:47.088   19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:20:47.088   19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:47.088   19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:47.088   19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:47.088   19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 95609
00:20:47.088   19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 95609 ']'
00:20:47.088   19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 95609
00:20:47.088    19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname
00:20:47.088   19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:47.088    19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95609
00:20:47.088   19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:20:47.088   19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:20:47.088  killing process with pid 95609
00:20:47.088   19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95609'
00:20:47.088   19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 95609
00:20:47.088   19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 95609
00:20:47.347   19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth
00:20:47.347   19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:20:47.347   19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable
00:20:47.347   19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:47.347   19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=100390
00:20:47.347   19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 100390
00:20:47.347   19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 100390 ']'
00:20:47.347   19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:47.347   19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:47.347   19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:47.347   19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:47.347   19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:47.347   19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth
00:20:48.283   19:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:48.283   19:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0
00:20:48.283   19:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:20:48.283   19:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable
00:20:48.283   19:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:48.283   19:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:20:48.283   19:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT
00:20:48.283   19:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 100390
00:20:48.283   19:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 100390 ']'
00:20:48.283   19:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:48.283   19:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:48.283  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:48.283   19:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:48.283   19:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:48.283   19:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:48.543   19:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:48.543   19:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0
00:20:48.543   19:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd
00:20:48.543   19:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:48.543   19:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:48.807  null0
00:20:48.807   19:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:48.807   19:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}"
00:20:48.807   19:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.FVs
00:20:48.807   19:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:48.807   19:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:48.807   19:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:48.807   19:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.R7p ]]
00:20:48.807   19:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.R7p
00:20:48.807   19:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:48.807   19:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:48.808   19:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:48.808   19:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}"
00:20:48.808   19:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.JCe
00:20:48.808   19:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:48.808   19:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:48.808   19:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:48.808   19:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.OAp ]]
00:20:48.808   19:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.OAp
00:20:48.808   19:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:48.808   19:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:48.808   19:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:48.808   19:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}"
00:20:48.808   19:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Pq9
00:20:48.808   19:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:48.808   19:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:48.808   19:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:48.808   19:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.viX ]]
00:20:48.808   19:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.viX
00:20:48.808   19:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:48.808   19:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:48.808   19:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:48.808   19:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}"
00:20:48.808   19:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Zal
00:20:48.808   19:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:48.808   19:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:48.808   19:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:48.808   19:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]]
00:20:48.808   19:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3
00:20:48.808   19:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:48.808   19:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:20:48.808   19:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:20:48.808   19:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:20:48.808   19:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:48.808   19:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key3
00:20:48.808   19:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:48.808   19:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:48.808   19:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:48.808   19:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:20:48.808   19:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:48.808   19:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:49.745  nvme0n1
00:20:49.745    19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:49.745    19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:49.745    19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:50.004   19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:50.004    19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:50.004    19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:50.004    19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:50.004    19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:50.004   19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:50.004  {
00:20:50.004  "auth": {
00:20:50.004  "dhgroup": "ffdhe8192",
00:20:50.004  "digest": "sha512",
00:20:50.004  "state": "completed"
00:20:50.004  },
00:20:50.004  "cntlid": 1,
00:20:50.004  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:20:50.004  "listen_address": {
00:20:50.004  "adrfam": "IPv4",
00:20:50.004  "traddr": "10.0.0.3",
00:20:50.004  "trsvcid": "4420",
00:20:50.004  "trtype": "TCP"
00:20:50.004  },
00:20:50.004  "peer_address": {
00:20:50.004  "adrfam": "IPv4",
00:20:50.004  "traddr": "10.0.0.1",
00:20:50.004  "trsvcid": "40928",
00:20:50.004  "trtype": "TCP"
00:20:50.004  },
00:20:50.004  "qid": 0,
00:20:50.004  "state": "enabled",
00:20:50.004  "thread": "nvmf_tgt_poll_group_000"
00:20:50.004  }
00:20:50.004  ]'
00:20:50.005    19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:50.005   19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:20:50.005    19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:50.005   19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:20:50.005    19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:50.005   19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:50.005   19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:50.005   19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:50.263   19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjJkMDI3MTYwMzk0ZjM0Mzg1YTMxYTUxZDM0MmJmM2VkZmM2MWE2NzAyODM5NTY3NDM3YzQ3YWUzYjc2ZjJkOMNOdLE=:
00:20:50.264   19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid 8f716199-a3ae-4f70-9a3a-0556e5b7497a -l 0 --dhchap-secret DHHC-1:03:YjJkMDI3MTYwMzk0ZjM0Mzg1YTMxYTUxZDM0MmJmM2VkZmM2MWE2NzAyODM5NTY3NDM3YzQ3YWUzYjc2ZjJkOMNOdLE=:
00:20:51.201   19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:51.201  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:51.201   19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:20:51.201   19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:51.201   19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:51.201   19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:51.201   19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key3
00:20:51.201   19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:51.201   19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:51.201   19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:51.201   19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256
00:20:51.201   19:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256
00:20:51.201   19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3
00:20:51.201   19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:20:51.201   19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3
00:20:51.201   19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:20:51.201   19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:51.201    19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:20:51.201   19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:51.201   19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3
00:20:51.201   19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:51.201   19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:51.613  2024/12/13 19:05:23 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error
00:20:51.613  request:
00:20:51.613  {
00:20:51.613    "method": "bdev_nvme_attach_controller",
00:20:51.613    "params": {
00:20:51.613      "name": "nvme0",
00:20:51.613      "trtype": "tcp",
00:20:51.613      "traddr": "10.0.0.3",
00:20:51.613      "adrfam": "ipv4",
00:20:51.613      "trsvcid": "4420",
00:20:51.613      "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:20:51.613      "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:20:51.613      "prchk_reftag": false,
00:20:51.613      "prchk_guard": false,
00:20:51.613      "hdgst": false,
00:20:51.613      "ddgst": false,
00:20:51.613      "dhchap_key": "key3",
00:20:51.613      "allow_unrecognized_csi": false
00:20:51.613    }
00:20:51.613  }
00:20:51.613  Got JSON-RPC error response
00:20:51.613  GoRPCClient: error on JSON-RPC call
00:20:51.613   19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:20:51.613   19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:20:51.613   19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:20:51.613   19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:20:51.613    19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=,
00:20:51.613    19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512
00:20:51.613   19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512
00:20:51.613   19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512
00:20:51.872   19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3
00:20:51.872   19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:20:51.872   19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3
00:20:51.872   19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:20:51.872   19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:51.872    19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:20:51.872   19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:51.872   19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3
00:20:51.872   19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:51.872   19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:52.131  2024/12/13 19:05:23 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error
00:20:52.131  request:
00:20:52.131  {
00:20:52.131    "method": "bdev_nvme_attach_controller",
00:20:52.131    "params": {
00:20:52.131      "name": "nvme0",
00:20:52.131      "trtype": "tcp",
00:20:52.131      "traddr": "10.0.0.3",
00:20:52.131      "adrfam": "ipv4",
00:20:52.131      "trsvcid": "4420",
00:20:52.131      "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:20:52.131      "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:20:52.131      "prchk_reftag": false,
00:20:52.131      "prchk_guard": false,
00:20:52.131      "hdgst": false,
00:20:52.131      "ddgst": false,
00:20:52.131      "dhchap_key": "key3",
00:20:52.131      "allow_unrecognized_csi": false
00:20:52.131    }
00:20:52.131  }
00:20:52.131  Got JSON-RPC error response
00:20:52.131  GoRPCClient: error on JSON-RPC call
00:20:52.131   19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:20:52.131   19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:20:52.131   19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:20:52.131   19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:20:52.131    19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=,
00:20:52.131    19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512
00:20:52.131    19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=,
00:20:52.131    19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:20:52.131   19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:20:52.131   19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:20:52.389   19:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:20:52.389   19:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:52.389   19:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:52.389   19:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:52.389   19:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:20:52.389   19:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:52.389   19:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:52.389   19:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:52.389   19:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:20:52.389   19:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:20:52.389   19:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:20:52.389   19:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:20:52.389   19:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:52.389    19:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:20:52.389   19:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:52.389   19:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:20:52.389   19:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:20:52.389   19:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:20:52.955  2024/12/13 19:05:24 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:key1 dhchap_key:key0 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error
00:20:52.955  request:
00:20:52.955  {
00:20:52.955    "method": "bdev_nvme_attach_controller",
00:20:52.955    "params": {
00:20:52.955      "name": "nvme0",
00:20:52.955      "trtype": "tcp",
00:20:52.955      "traddr": "10.0.0.3",
00:20:52.955      "adrfam": "ipv4",
00:20:52.955      "trsvcid": "4420",
00:20:52.955      "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:20:52.955      "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:20:52.955      "prchk_reftag": false,
00:20:52.955      "prchk_guard": false,
00:20:52.955      "hdgst": false,
00:20:52.955      "ddgst": false,
00:20:52.955      "dhchap_key": "key0",
00:20:52.955      "dhchap_ctrlr_key": "key1",
00:20:52.955      "allow_unrecognized_csi": false
00:20:52.955    }
00:20:52.955  }
00:20:52.955  Got JSON-RPC error response
00:20:52.955  GoRPCClient: error on JSON-RPC call
00:20:52.955   19:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:20:52.955   19:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:20:52.955   19:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:20:52.955   19:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:20:52.955   19:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0
00:20:52.955   19:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0
00:20:52.955   19:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0
00:20:53.212  nvme0n1
00:20:53.212    19:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name'
00:20:53.213    19:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers
00:20:53.213    19:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:53.472   19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:53.472   19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:53.472   19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:53.731   19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key1
00:20:53.731   19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:53.731   19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:53.731   19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:53.731   19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1
00:20:53.731   19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:20:53.731   19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:20:54.666  nvme0n1
00:20:54.666    19:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name'
00:20:54.666    19:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers
00:20:54.666    19:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:54.666   19:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:54.666   19:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key2 --dhchap-ctrlr-key key3
00:20:54.666   19:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:54.666   19:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:54.666   19:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:54.666    19:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers
00:20:54.666    19:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name'
00:20:54.666    19:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:55.233   19:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:55.233   19:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:MzE5ZDMyY2Y1NTY0MGI2Nzk5ZWVjMjBiYmYyODllZWZkNTc1YTkzMGEwZWJkZDQ1YLrcsA==: --dhchap-ctrl-secret DHHC-1:03:YjJkMDI3MTYwMzk0ZjM0Mzg1YTMxYTUxZDM0MmJmM2VkZmM2MWE2NzAyODM5NTY3NDM3YzQ3YWUzYjc2ZjJkOMNOdLE=:
00:20:55.233   19:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid 8f716199-a3ae-4f70-9a3a-0556e5b7497a -l 0 --dhchap-secret DHHC-1:02:MzE5ZDMyY2Y1NTY0MGI2Nzk5ZWVjMjBiYmYyODllZWZkNTc1YTkzMGEwZWJkZDQ1YLrcsA==: --dhchap-ctrl-secret DHHC-1:03:YjJkMDI3MTYwMzk0ZjM0Mzg1YTMxYTUxZDM0MmJmM2VkZmM2MWE2NzAyODM5NTY3NDM3YzQ3YWUzYjc2ZjJkOMNOdLE=:
00:20:55.800    19:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr
00:20:55.800    19:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev
00:20:55.800    19:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme*
00:20:55.800    19:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]]
00:20:55.800    19:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0
00:20:55.800    19:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break
00:20:55.800   19:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0
00:20:55.800   19:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:55.800   19:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:56.059   19:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1
00:20:56.059   19:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:20:56.059   19:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1
00:20:56.059   19:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:20:56.059   19:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:56.059    19:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:20:56.059   19:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:56.059   19:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1
00:20:56.059   19:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:20:56.059   19:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:20:56.626  2024/12/13 19:05:28 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error
00:20:56.626  request:
00:20:56.626  {
00:20:56.626    "method": "bdev_nvme_attach_controller",
00:20:56.626    "params": {
00:20:56.626      "name": "nvme0",
00:20:56.626      "trtype": "tcp",
00:20:56.626      "traddr": "10.0.0.3",
00:20:56.626      "adrfam": "ipv4",
00:20:56.626      "trsvcid": "4420",
00:20:56.626      "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:20:56.626      "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a",
00:20:56.626      "prchk_reftag": false,
00:20:56.626      "prchk_guard": false,
00:20:56.626      "hdgst": false,
00:20:56.626      "ddgst": false,
00:20:56.626      "dhchap_key": "key1",
00:20:56.626      "allow_unrecognized_csi": false
00:20:56.626    }
00:20:56.626  }
00:20:56.626  Got JSON-RPC error response
00:20:56.626  GoRPCClient: error on JSON-RPC call
00:20:56.626   19:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:20:56.626   19:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:20:56.626   19:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:20:56.626   19:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:20:56.626   19:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:20:56.626   19:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:20:56.627   19:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:20:57.563  nvme0n1
00:20:57.563    19:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers
00:20:57.563    19:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:57.563    19:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name'
00:20:57.822   19:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:57.822   19:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:57.822   19:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:58.081   19:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:20:58.081   19:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:58.081   19:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:58.081   19:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:58.082   19:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0
00:20:58.082   19:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0
00:20:58.082   19:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0
00:20:58.340  nvme0n1
00:20:58.340    19:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers
00:20:58.340    19:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:58.340    19:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name'
00:20:58.908   19:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:58.908   19:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:58.908   19:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:58.908   19:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key1 --dhchap-ctrlr-key key3
00:20:58.908   19:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:58.908   19:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:58.908   19:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:58.908   19:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:MzFjY2Q5MDJmZTQ0MTc2NmVjNmNlNDg4ODFmODJiZDkwdlvM: '' 2s
00:20:58.908   19:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout
00:20:58.908   19:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0
00:20:58.908   19:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:MzFjY2Q5MDJmZTQ0MTc2NmVjNmNlNDg4ODFmODJiZDkwdlvM:
00:20:58.908   19:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=
00:20:58.908   19:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s
00:20:58.908   19:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0
00:20:58.908   19:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:MzFjY2Q5MDJmZTQ0MTc2NmVjNmNlNDg4ODFmODJiZDkwdlvM: ]]
00:20:58.908   19:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:MzFjY2Q5MDJmZTQ0MTc2NmVjNmNlNDg4ODFmODJiZDkwdlvM:
00:20:58.908   19:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]]
00:20:58.908   19:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]]
00:20:58.908   19:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s
00:21:01.453   19:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1
00:21:01.453   19:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0
00:21:01.453   19:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME
00:21:01.453   19:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1
00:21:01.453   19:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME
00:21:01.453   19:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1
00:21:01.453   19:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0
00:21:01.453   19:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key1 --dhchap-ctrlr-key key2
00:21:01.453   19:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:01.453   19:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:01.453   19:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:01.453   19:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:MzE5ZDMyY2Y1NTY0MGI2Nzk5ZWVjMjBiYmYyODllZWZkNTc1YTkzMGEwZWJkZDQ1YLrcsA==: 2s
00:21:01.453   19:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout
00:21:01.453   19:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0
00:21:01.454   19:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=
00:21:01.454   19:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:MzE5ZDMyY2Y1NTY0MGI2Nzk5ZWVjMjBiYmYyODllZWZkNTc1YTkzMGEwZWJkZDQ1YLrcsA==:
00:21:01.454   19:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s
00:21:01.454   19:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0
00:21:01.454   19:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]]
00:21:01.454   19:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:MzE5ZDMyY2Y1NTY0MGI2Nzk5ZWVjMjBiYmYyODllZWZkNTc1YTkzMGEwZWJkZDQ1YLrcsA==: ]]
00:21:01.454   19:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:MzE5ZDMyY2Y1NTY0MGI2Nzk5ZWVjMjBiYmYyODllZWZkNTc1YTkzMGEwZWJkZDQ1YLrcsA==:
00:21:01.454   19:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]]
00:21:01.454   19:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s
00:21:03.363   19:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1
00:21:03.363   19:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0
00:21:03.363   19:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME
00:21:03.363   19:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1
00:21:03.363   19:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME
00:21:03.363   19:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1
00:21:03.363   19:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0
00:21:03.363   19:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:03.363  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:03.363   19:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key0 --dhchap-ctrlr-key key1
00:21:03.363   19:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:03.363   19:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:03.363   19:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:03.363   19:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:21:03.363   19:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:21:03.363   19:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:21:03.933  nvme0n1
00:21:03.933   19:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key2 --dhchap-ctrlr-key key3
00:21:03.933   19:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:03.933   19:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:03.933   19:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:03.933   19:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:21:03.933   19:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:21:04.871    19:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers
00:21:04.871    19:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:04.871    19:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name'
00:21:04.871   19:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:04.871   19:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:21:04.871   19:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:04.871   19:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:04.871   19:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:04.871   19:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0
00:21:04.871   19:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0
00:21:05.130    19:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name'
00:21:05.130    19:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers
00:21:05.130    19:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:05.390   19:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:05.390   19:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key2 --dhchap-ctrlr-key key3
00:21:05.390   19:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:05.390   19:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:05.390   19:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:05.390   19:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:21:05.390   19:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:21:05.390   19:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:21:05.390   19:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc
00:21:05.390   19:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:05.390    19:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc
00:21:05.390   19:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:05.390   19:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:21:05.390   19:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:21:06.329  2024/12/13 19:05:37 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:key3 dhchap_key:key1 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied
00:21:06.329  request:
00:21:06.329  {
00:21:06.329    "method": "bdev_nvme_set_keys",
00:21:06.329    "params": {
00:21:06.329      "name": "nvme0",
00:21:06.329      "dhchap_key": "key1",
00:21:06.329      "dhchap_ctrlr_key": "key3"
00:21:06.329    }
00:21:06.329  }
00:21:06.329  Got JSON-RPC error response
00:21:06.329  GoRPCClient: error on JSON-RPC call
00:21:06.329   19:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:21:06.329   19:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:21:06.329   19:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:21:06.329   19:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:21:06.329    19:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers
00:21:06.329    19:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:06.329    19:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length
00:21:06.329   19:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 ))
00:21:06.329   19:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s
00:21:07.267    19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers
00:21:07.267    19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:07.267    19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length
00:21:07.836   19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 ))
00:21:07.836   19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key0 --dhchap-ctrlr-key key1
00:21:07.836   19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:07.836   19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:07.836   19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:07.836   19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:21:07.836   19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:21:07.836   19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:21:08.774  nvme0n1
00:21:08.774   19:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --dhchap-key key2 --dhchap-ctrlr-key key3
00:21:08.774   19:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:08.774   19:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:08.774   19:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:08.774   19:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:21:08.774   19:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:21:08.774   19:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:21:08.774   19:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc
00:21:08.774   19:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:08.774    19:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc
00:21:08.774   19:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:08.774   19:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:21:08.774   19:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:21:09.343  2024/12/13 19:05:40 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:key0 dhchap_key:key2 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied
00:21:09.343  request:
00:21:09.343  {
00:21:09.343    "method": "bdev_nvme_set_keys",
00:21:09.343    "params": {
00:21:09.343      "name": "nvme0",
00:21:09.343      "dhchap_key": "key2",
00:21:09.343      "dhchap_ctrlr_key": "key0"
00:21:09.343    }
00:21:09.343  }
00:21:09.343  Got JSON-RPC error response
00:21:09.343  GoRPCClient: error on JSON-RPC call
00:21:09.343   19:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:21:09.343   19:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:21:09.343   19:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:21:09.343   19:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:21:09.343    19:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers
00:21:09.343    19:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:09.343    19:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length
00:21:09.603   19:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 ))
00:21:09.603   19:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s
00:21:10.541    19:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers
00:21:10.541    19:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:10.541    19:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length
00:21:10.800   19:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 ))
00:21:10.800   19:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT
00:21:10.800   19:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup
00:21:10.800   19:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 95634
00:21:10.800   19:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 95634 ']'
00:21:10.800   19:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 95634
00:21:10.800    19:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname
00:21:10.800   19:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:10.800    19:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95634
00:21:11.059   19:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:21:11.060   19:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:21:11.060  killing process with pid 95634
00:21:11.060   19:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95634'
00:21:11.060   19:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 95634
00:21:11.060   19:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 95634
00:21:11.319   19:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini
00:21:11.319   19:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup
00:21:11.319   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync
00:21:11.319   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:21:11.319   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e
00:21:11.319   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20}
00:21:11.319   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:21:11.319  rmmod nvme_tcp
00:21:11.319  rmmod nvme_fabrics
00:21:11.319  rmmod nvme_keyring
00:21:11.319   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:21:11.319   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e
00:21:11.319   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0
00:21:11.319   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 100390 ']'
00:21:11.319   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 100390
00:21:11.319   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 100390 ']'
00:21:11.319   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 100390
00:21:11.319    19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname
00:21:11.319   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:11.319    19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100390
00:21:11.578   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:21:11.578   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:21:11.578  killing process with pid 100390
00:21:11.578   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100390'
00:21:11.578   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 100390
00:21:11.578   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 100390
00:21:11.578   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:21:11.578   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:21:11.578   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:21:11.578   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr
00:21:11.578   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:21:11.578   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save
00:21:11.578   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore
00:21:11.578   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:21:11.578   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:21:11.578   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:21:11.579   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:21:11.579   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:21:11.838   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:21:11.838   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:21:11.838   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:21:11.838   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:21:11.838   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:21:11.838   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:21:11.838   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:21:11.838   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:21:11.838   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:21:11.838   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:21:11.838   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns
00:21:11.838   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:11.838   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:21:11.838    19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:11.838   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0
00:21:11.838   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.FVs /tmp/spdk.key-sha256.JCe /tmp/spdk.key-sha384.Pq9 /tmp/spdk.key-sha512.Zal /tmp/spdk.key-sha512.R7p /tmp/spdk.key-sha384.OAp /tmp/spdk.key-sha256.viX '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log
00:21:11.838  
00:21:11.838  real	3m2.691s
00:21:11.838  user	7m24.321s
00:21:11.838  sys	0m22.652s
00:21:11.838   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable
00:21:11.838   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:11.838  ************************************
00:21:11.838  END TEST nvmf_auth_target
00:21:11.838  ************************************
00:21:11.838   19:05:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']'
00:21:11.838   19:05:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages
00:21:11.838   19:05:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:21:11.838   19:05:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:21:11.838   19:05:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:21:12.099  ************************************
00:21:12.099  START TEST nvmf_bdevio_no_huge
00:21:12.099  ************************************
00:21:12.099   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages
00:21:12.099  * Looking for test storage...
00:21:12.099  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:21:12.099    19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:21:12.099     19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version
00:21:12.099     19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:21:12.099    19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:21:12.099    19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:21:12.099    19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l
00:21:12.099    19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l
00:21:12.099    19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-:
00:21:12.099    19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1
00:21:12.099    19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-:
00:21:12.099    19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2
00:21:12.099    19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<'
00:21:12.099    19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2
00:21:12.099    19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1
00:21:12.099    19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:21:12.099    19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in
00:21:12.099    19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1
00:21:12.099    19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 ))
00:21:12.099    19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:21:12.099     19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1
00:21:12.099     19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1
00:21:12.099     19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:21:12.099     19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1
00:21:12.099    19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1
00:21:12.099     19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2
00:21:12.099     19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2
00:21:12.099     19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:21:12.099     19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2
00:21:12.099    19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2
00:21:12.099    19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:21:12.099    19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:21:12.099    19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0
00:21:12.099    19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:21:12.099    19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:21:12.099  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:12.099  		--rc genhtml_branch_coverage=1
00:21:12.099  		--rc genhtml_function_coverage=1
00:21:12.099  		--rc genhtml_legend=1
00:21:12.099  		--rc geninfo_all_blocks=1
00:21:12.099  		--rc geninfo_unexecuted_blocks=1
00:21:12.099  		
00:21:12.099  		'
00:21:12.099    19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:21:12.099  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:12.099  		--rc genhtml_branch_coverage=1
00:21:12.099  		--rc genhtml_function_coverage=1
00:21:12.099  		--rc genhtml_legend=1
00:21:12.099  		--rc geninfo_all_blocks=1
00:21:12.099  		--rc geninfo_unexecuted_blocks=1
00:21:12.099  		
00:21:12.099  		'
00:21:12.099    19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:21:12.099  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:12.099  		--rc genhtml_branch_coverage=1
00:21:12.099  		--rc genhtml_function_coverage=1
00:21:12.099  		--rc genhtml_legend=1
00:21:12.100  		--rc geninfo_all_blocks=1
00:21:12.100  		--rc geninfo_unexecuted_blocks=1
00:21:12.100  		
00:21:12.100  		'
00:21:12.100    19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:21:12.100  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:12.100  		--rc genhtml_branch_coverage=1
00:21:12.100  		--rc genhtml_function_coverage=1
00:21:12.100  		--rc genhtml_legend=1
00:21:12.100  		--rc geninfo_all_blocks=1
00:21:12.100  		--rc geninfo_unexecuted_blocks=1
00:21:12.100  		
00:21:12.100  		'
00:21:12.100   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:21:12.100     19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s
00:21:12.100    19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:21:12.100    19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:21:12.100    19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:21:12.100    19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:21:12.100    19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:21:12.100    19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:21:12.100    19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:21:12.100    19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:21:12.100    19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:21:12.100     19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:21:12.100    19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:21:12.100    19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:21:12.100    19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:21:12.100    19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:21:12.100    19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:21:12.100    19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:21:12.100    19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:21:12.100     19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob
00:21:12.100     19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:21:12.100     19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:21:12.100     19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:21:12.100      19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:12.100      19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:12.100      19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:12.100      19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH
00:21:12.100      19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:12.100    19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0
00:21:12.100    19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:21:12.100    19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:21:12.100    19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:21:12.100    19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:21:12.100    19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:21:12.100    19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:21:12.100  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:21:12.100    19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:21:12.100    19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:21:12.100    19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0
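The "[: : integer expression expected" warning a few lines up comes from build_nvmf_app_args testing an empty variable with a numeric operator ('[' '' -eq 1 ']'); the run tolerates it, but the usual guard is to default the value before the test. A minimal sketch with a deliberately generic name (SOME_FLAG is illustrative, not the variable used in common.sh):

    # Empty or unset values fall back to 0, so the numeric test never sees an empty string.
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then
        echo "flag enabled"
    fi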
00:21:12.100   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64
00:21:12.100   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:21:12.100   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit
00:21:12.100   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:21:12.100   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:21:12.100   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs
00:21:12.100   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no
00:21:12.100   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns
00:21:12.100   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:12.100   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:21:12.100    19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:12.100   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:21:12.100   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:21:12.100   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:21:12.100   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:21:12.100   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:21:12.100   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@460 -- # nvmf_veth_init
00:21:12.100   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:21:12.100   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:21:12.100   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:21:12.100   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:21:12.100   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:21:12.100   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:21:12.100   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:21:12.100   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:21:12.100   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:21:12.100   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:21:12.101   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:21:12.101   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:21:12.101   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:21:12.101   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:21:12.101   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:21:12.101   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:21:12.101   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:21:12.101  Cannot find device "nvmf_init_br"
00:21:12.101   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true
00:21:12.101   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:21:12.101  Cannot find device "nvmf_init_br2"
00:21:12.101   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true
00:21:12.101   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:21:12.360  Cannot find device "nvmf_tgt_br"
00:21:12.360   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true
00:21:12.360   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:21:12.360  Cannot find device "nvmf_tgt_br2"
00:21:12.360   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true
00:21:12.360   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:21:12.360  Cannot find device "nvmf_init_br"
00:21:12.360   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true
00:21:12.360   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:21:12.360  Cannot find device "nvmf_init_br2"
00:21:12.360   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true
00:21:12.360   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:21:12.360  Cannot find device "nvmf_tgt_br"
00:21:12.360   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true
00:21:12.360   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:21:12.360  Cannot find device "nvmf_tgt_br2"
00:21:12.360   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true
00:21:12.360   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:21:12.360  Cannot find device "nvmf_br"
00:21:12.360   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true
00:21:12.360   19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:21:12.360  Cannot find device "nvmf_init_if"
00:21:12.360   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true
00:21:12.360   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:21:12.360  Cannot find device "nvmf_init_if2"
00:21:12.360   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true
00:21:12.360   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:21:12.360  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:21:12.360   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true
00:21:12.360   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:21:12.360  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:21:12.360   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true
00:21:12.360   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:21:12.360   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:21:12.360   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:21:12.360   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:21:12.360   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:21:12.360   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:21:12.360   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:21:12.360   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:21:12.360   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:21:12.360   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:21:12.360   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:21:12.361   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:21:12.361   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:21:12.361   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:21:12.361   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:21:12.361   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:21:12.361   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:21:12.361   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:21:12.361   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:21:12.619   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:21:12.620   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:21:12.620   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:21:12.620   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:21:12.620   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:21:12.620   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:21:12.620   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:21:12.620   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:21:12.620   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:21:12.620   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:21:12.620   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:21:12.620   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:21:12.620   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
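The ipts wrapper above tags every rule it adds with an SPDK_NVMF comment, which is what the teardown later greps away. Written out, the three rules from the trace accept NVMe/TCP traffic on port 4420 from both initiator veths and let the test bridge forward between its ports:

    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'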
00:21:12.620   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:21:12.620  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:21:12.620  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms
00:21:12.620  
00:21:12.620  --- 10.0.0.3 ping statistics ---
00:21:12.620  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:12.620  rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms
00:21:12.620   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:21:12.620  PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:21:12.620  64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms
00:21:12.620  
00:21:12.620  --- 10.0.0.4 ping statistics ---
00:21:12.620  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:12.620  rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms
00:21:12.620   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:21:12.620  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:21:12.620  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms
00:21:12.620  
00:21:12.620  --- 10.0.0.1 ping statistics ---
00:21:12.620  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:12.620  rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms
00:21:12.620   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:21:12.620  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:21:12.620  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms
00:21:12.620  
00:21:12.620  --- 10.0.0.2 ping statistics ---
00:21:12.620  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:12.620  rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms
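The pings confirm the topology that nvmf_veth_init just built: a target network namespace, veth pairs whose bridge ends are enslaved to nvmf_br, and 10.0.0.0/24 addresses on both sides. A condensed sketch of the same steps, showing only the first pair on each side (the trace creates a second pair for nvmf_init_if2/nvmf_tgt_if2 the same way):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ping -c 1 10.0.0.3                                             # host reaches the target namespace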
00:21:12.620   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:21:12.620   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@461 -- # return 0
00:21:12.620   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:21:12.620   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:21:12.620   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:21:12.620   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:21:12.620   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:21:12.620   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:21:12.620   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:21:12.620   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78
00:21:12.620   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:21:12.620   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable
00:21:12.620   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:21:12.620   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=101249
00:21:12.620   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78
00:21:12.620   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 101249
00:21:12.620   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 101249 ']'
00:21:12.620   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:12.620  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:12.620   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100
00:21:12.620   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:12.620   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable
00:21:12.620   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:21:12.620  [2024-12-13 19:05:44.378406] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:21:12.620  [2024-12-13 19:05:44.378527] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ]
00:21:12.879  [2024-12-13 19:05:44.542907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:21:12.879  [2024-12-13 19:05:44.607140] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:21:12.879  [2024-12-13 19:05:44.607203] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:21:12.879  [2024-12-13 19:05:44.607231] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:21:12.879  [2024-12-13 19:05:44.607245] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:21:12.879  [2024-12-13 19:05:44.607254] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:21:12.879  [2024-12-13 19:05:44.608268] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4
00:21:12.879  [2024-12-13 19:05:44.608441] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5
00:21:12.879  [2024-12-13 19:05:44.608812] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6
00:21:12.879  [2024-12-13 19:05:44.608993] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
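nvmfappstart launched the target inside the namespace with hugepages disabled and a 1024 MB memory cap, which is exactly what the DPDK EAL banner above reports. The invocation from the trace, runnable by hand:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &    # -m 0x78 pins reactors to cores 3-6
    nvmfpid=$!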
00:21:13.139   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:21:13.139   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0
00:21:13.139   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:21:13.139   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable
00:21:13.139   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:21:13.139   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:21:13.139   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:21:13.139   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:13.139   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:21:13.139  [2024-12-13 19:05:44.826819] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:21:13.139   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:13.139   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:21:13.139   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:13.139   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:21:13.139  Malloc0
00:21:13.139   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:13.139   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:21:13.139   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:13.139   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:21:13.139   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:13.139   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:21:13.139   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:13.139   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:21:13.139   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:13.139   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:21:13.139   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:13.139   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:21:13.139  [2024-12-13 19:05:44.867116] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:21:13.139   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
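Each rpc_cmd in the trace is effectively forwarded to scripts/rpc.py against the target's default /var/tmp/spdk.sock, so the same subsystem can be stood up by hand; a sketch using the repository path from the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0                      # 64 MiB bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420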
00:21:13.139   19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024
00:21:13.139    19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json
00:21:13.139    19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=()
00:21:13.139    19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config
00:21:13.139    19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:21:13.139    19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:21:13.139  {
00:21:13.139    "params": {
00:21:13.139      "name": "Nvme$subsystem",
00:21:13.139      "trtype": "$TEST_TRANSPORT",
00:21:13.139      "traddr": "$NVMF_FIRST_TARGET_IP",
00:21:13.139      "adrfam": "ipv4",
00:21:13.139      "trsvcid": "$NVMF_PORT",
00:21:13.139      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:21:13.139      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:21:13.139      "hdgst": ${hdgst:-false},
00:21:13.139      "ddgst": ${ddgst:-false}
00:21:13.139    },
00:21:13.139    "method": "bdev_nvme_attach_controller"
00:21:13.139  }
00:21:13.139  EOF
00:21:13.139  )")
00:21:13.139     19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat
00:21:13.139    19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq .
00:21:13.139     19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=,
00:21:13.139     19:05:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:21:13.139    "params": {
00:21:13.139      "name": "Nvme1",
00:21:13.139      "trtype": "tcp",
00:21:13.139      "traddr": "10.0.0.3",
00:21:13.139      "adrfam": "ipv4",
00:21:13.139      "trsvcid": "4420",
00:21:13.139      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:21:13.139      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:21:13.139      "hdgst": false,
00:21:13.139      "ddgst": false
00:21:13.139    },
00:21:13.139    "method": "bdev_nvme_attach_controller"
00:21:13.139  }'
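gen_nvmf_target_json above prints the bdev_nvme_attach_controller block that bdevio consumes over --json /dev/fd/62, so the initiator-side NVMe bdev is created straight from config with no RPC round trip; the /dev/fd path is presumably the shell's process substitution. A sketch of the equivalent manual run, under that assumption:

    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio \
        --json <(gen_nvmf_target_json) --no-huge -s 1024    # <(...) shows up as a /dev/fd/NN path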
00:21:13.139  [2024-12-13 19:05:44.934423] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:21:13.139  [2024-12-13 19:05:44.934528] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid101289 ]
00:21:13.400  [2024-12-13 19:05:45.094543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:21:13.400  [2024-12-13 19:05:45.177525] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:21:13.400  [2024-12-13 19:05:45.177659] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:21:13.400  [2024-12-13 19:05:45.177679] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:21:13.659  I/O targets:
00:21:13.659    Nvme1n1: 131072 blocks of 512 bytes (64 MiB)
00:21:13.659  
00:21:13.659  
00:21:13.659       CUnit - A unit testing framework for C - Version 2.1-3
00:21:13.659       http://cunit.sourceforge.net/
00:21:13.659  
00:21:13.659  
00:21:13.659  Suite: bdevio tests on: Nvme1n1
00:21:13.659    Test: blockdev write read block ...passed
00:21:13.919    Test: blockdev write zeroes read block ...passed
00:21:13.919    Test: blockdev write zeroes read no split ...passed
00:21:13.919    Test: blockdev write zeroes read split ...passed
00:21:13.919    Test: blockdev write zeroes read split partial ...passed
00:21:13.919    Test: blockdev reset ...[2024-12-13 19:05:45.553480] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:21:13.919  [2024-12-13 19:05:45.553593] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b78c60 (9): Bad file descriptor
00:21:13.919  [2024-12-13 19:05:45.569714] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful.
00:21:13.919  passed
00:21:13.919    Test: blockdev write read 8 blocks ...passed
00:21:13.919    Test: blockdev write read size > 128k ...passed
00:21:13.919    Test: blockdev write read invalid size ...passed
00:21:13.919    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:21:13.919    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:21:13.919    Test: blockdev write read max offset ...passed
00:21:13.919    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:21:13.919    Test: blockdev writev readv 8 blocks ...passed
00:21:13.919    Test: blockdev writev readv 30 x 1block ...passed
00:21:13.919    Test: blockdev writev readv block ...passed
00:21:13.919    Test: blockdev writev readv size > 128k ...passed
00:21:13.919    Test: blockdev writev readv size > 128k in two iovs ...passed
00:21:13.919    Test: blockdev comparev and writev ...[2024-12-13 19:05:45.740555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:21:13.919  [2024-12-13 19:05:45.740597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:21:13.919  [2024-12-13 19:05:45.740618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:21:13.919  [2024-12-13 19:05:45.740630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:21:13.919  [2024-12-13 19:05:45.741124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:21:13.919  [2024-12-13 19:05:45.741151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:21:13.919  [2024-12-13 19:05:45.741169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:21:13.919  [2024-12-13 19:05:45.741180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:21:14.179  [2024-12-13 19:05:45.741709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:21:14.179  [2024-12-13 19:05:45.741737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:21:14.179  [2024-12-13 19:05:45.741754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:21:14.179  [2024-12-13 19:05:45.741765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:21:14.179  [2024-12-13 19:05:45.742301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:21:14.179  [2024-12-13 19:05:45.742327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:21:14.179  [2024-12-13 19:05:45.742344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:21:14.179  [2024-12-13 19:05:45.742354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:21:14.179  passed
00:21:14.179    Test: blockdev nvme passthru rw ...passed
00:21:14.179    Test: blockdev nvme passthru vendor specific ...[2024-12-13 19:05:45.825523] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:21:14.179  [2024-12-13 19:05:45.825554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:21:14.179  [2024-12-13 19:05:45.825693] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:21:14.179  [2024-12-13 19:05:45.825714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:21:14.179  [2024-12-13 19:05:45.825828] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:21:14.179  [2024-12-13 19:05:45.825848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:21:14.179  [2024-12-13 19:05:45.825961] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:21:14.179  [2024-12-13 19:05:45.825981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:21:14.179  passed
00:21:14.179    Test: blockdev nvme admin passthru ...passed
00:21:14.179    Test: blockdev copy ...passed
00:21:14.179  
00:21:14.179  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:21:14.179                suites      1      1    n/a      0        0
00:21:14.179                 tests     23     23     23      0        0
00:21:14.179               asserts    152    152    152      0      n/a
00:21:14.179  
00:21:14.179  Elapsed time =    0.913 seconds
00:21:14.438   19:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:21:14.438   19:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:14.438   19:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:21:14.438   19:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:14.438   19:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT
00:21:14.438   19:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini
00:21:14.438   19:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup
00:21:14.438   19:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync
00:21:14.438   19:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:21:14.438   19:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e
00:21:14.438   19:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20}
00:21:14.438   19:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:21:14.697  rmmod nvme_tcp
00:21:14.697  rmmod nvme_fabrics
00:21:14.697  rmmod nvme_keyring
00:21:14.697   19:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:21:14.697   19:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e
00:21:14.697   19:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0
00:21:14.697   19:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 101249 ']'
00:21:14.697   19:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 101249
00:21:14.697   19:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 101249 ']'
00:21:14.697   19:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 101249
00:21:14.697    19:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname
00:21:14.697   19:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:14.697    19:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 101249
00:21:14.697  killing process with pid 101249
00:21:14.697   19:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3
00:21:14.697   19:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']'
00:21:14.697   19:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 101249'
00:21:14.697   19:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 101249
00:21:14.697   19:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 101249
00:21:14.957   19:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:21:14.957   19:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:21:14.957   19:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:21:14.957   19:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr
00:21:14.957   19:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save
00:21:14.957   19:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore
00:21:14.957   19:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:21:14.957   19:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:21:14.957   19:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:21:14.957   19:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:21:14.957   19:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:21:14.957   19:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:21:14.957   19:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:21:14.957   19:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:21:14.957   19:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:21:14.957   19:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:21:14.957   19:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:21:14.957   19:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:21:15.216   19:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:21:15.216   19:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:21:15.216   19:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:21:15.216   19:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:21:15.216   19:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns
00:21:15.216   19:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:15.216   19:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:21:15.216    19:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:15.216   19:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0
00:21:15.216  
00:21:15.216  real	0m3.258s
00:21:15.216  user	0m10.283s
00:21:15.216  sys	0m1.464s
00:21:15.216   19:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable
00:21:15.216  ************************************
00:21:15.216  END TEST nvmf_bdevio_no_huge
00:21:15.216   19:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:21:15.216  ************************************
00:21:15.216   19:05:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp
00:21:15.216   19:05:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:21:15.216   19:05:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:21:15.216   19:05:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:21:15.216  ************************************
00:21:15.216  START TEST nvmf_tls
00:21:15.216  ************************************
00:21:15.216   19:05:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp
00:21:15.476  * Looking for test storage...
00:21:15.476  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:21:15.476    19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:21:15.476     19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:21:15.476     19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version
00:21:15.476    19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:21:15.476    19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:21:15.476    19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l
00:21:15.476    19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l
00:21:15.476    19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-:
00:21:15.476    19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1
00:21:15.476    19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-:
00:21:15.476    19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2
00:21:15.476    19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<'
00:21:15.476    19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2
00:21:15.476    19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1
00:21:15.476    19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:21:15.476    19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in
00:21:15.476    19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1
00:21:15.476    19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 ))
00:21:15.476    19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:21:15.476     19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1
00:21:15.476     19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1
00:21:15.476     19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:21:15.476     19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1
00:21:15.476    19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1
00:21:15.476     19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2
00:21:15.476     19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2
00:21:15.476     19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:21:15.476     19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2
00:21:15.476    19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2
00:21:15.476    19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:21:15.476    19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:21:15.476    19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0
00:21:15.476    19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:21:15.476    19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:21:15.476  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:15.477  		--rc genhtml_branch_coverage=1
00:21:15.477  		--rc genhtml_function_coverage=1
00:21:15.477  		--rc genhtml_legend=1
00:21:15.477  		--rc geninfo_all_blocks=1
00:21:15.477  		--rc geninfo_unexecuted_blocks=1
00:21:15.477  		
00:21:15.477  		'
00:21:15.477    19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:21:15.477  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:15.477  		--rc genhtml_branch_coverage=1
00:21:15.477  		--rc genhtml_function_coverage=1
00:21:15.477  		--rc genhtml_legend=1
00:21:15.477  		--rc geninfo_all_blocks=1
00:21:15.477  		--rc geninfo_unexecuted_blocks=1
00:21:15.477  		
00:21:15.477  		'
00:21:15.477    19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:21:15.477  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:15.477  		--rc genhtml_branch_coverage=1
00:21:15.477  		--rc genhtml_function_coverage=1
00:21:15.477  		--rc genhtml_legend=1
00:21:15.477  		--rc geninfo_all_blocks=1
00:21:15.477  		--rc geninfo_unexecuted_blocks=1
00:21:15.477  		
00:21:15.477  		'
00:21:15.477    19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:21:15.477  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:15.477  		--rc genhtml_branch_coverage=1
00:21:15.477  		--rc genhtml_function_coverage=1
00:21:15.477  		--rc genhtml_legend=1
00:21:15.477  		--rc geninfo_all_blocks=1
00:21:15.477  		--rc geninfo_unexecuted_blocks=1
00:21:15.477  		
00:21:15.477  		'
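The block above is autotest_common.sh gating the lcov flags on the installed lcov version; the xtrace walks scripts/common.sh's cmp_versions field by field (1.15 vs 2). A simplified sketch of that comparison, for orientation only — lt_sketch is a stand-in name, not the repo's function, and non-numeric fields are not handled here:

    # Split both versions on "." "-" ":" and compare numerically, field by field.
    lt_sketch() {   # lt_sketch 1.15 2 -> returns 0 (true), since 1 < 2 in the first field
        local -a ver1 ver2
        local v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            if ((${ver1[v]:-0} < ${ver2[v]:-0})); then return 0; fi
            if ((${ver1[v]:-0} > ${ver2[v]:-0})); then return 1; fi
        done
        return 1   # equal is not "less than"
    }
    lt_sketch 1.15 2 && LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'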
00:21:15.477   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:21:15.477     19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s
00:21:15.477    19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:21:15.477    19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:21:15.477    19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:21:15.477    19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:21:15.477    19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:21:15.477    19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:21:15.477    19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:21:15.477    19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:21:15.477    19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:21:15.477     19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:21:15.477    19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:21:15.477    19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:21:15.477    19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:21:15.477    19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:21:15.477    19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:21:15.477    19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:21:15.477    19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:21:15.477     19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob
00:21:15.477     19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:21:15.477     19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:21:15.477     19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:21:15.477      19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:15.477      19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:15.477      19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:15.477      19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH
00:21:15.477      19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:15.477    19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0
00:21:15.477    19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:21:15.477    19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:21:15.477    19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:21:15.477    19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:21:15.477    19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:21:15.477    19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:21:15.477  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:21:15.477    19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:21:15.477    19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:21:15.477    19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0
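The "integer expression expected" message a few lines up is nvmf/common.sh line 33 numerically testing a value that is empty in this environment; the test simply evaluates false and the run continues. A stand-alone illustration of the same behaviour (SOME_FLAG is only a stand-in name, not the variable common.sh actually tests):

    SOME_FLAG=""
    [ "$SOME_FLAG" -eq 1 ] && echo "enabled"        # prints "[: : integer expression expected" to stderr, branch not taken
    [ "${SOME_FLAG:-0}" -eq 1 ] && echo "enabled"   # defaulted expansion: quiet, branch still not taken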
00:21:15.477   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:21:15.477   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit
00:21:15.477   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:21:15.477   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:21:15.477   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs
00:21:15.477   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no
00:21:15.477   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns
00:21:15.477   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:15.477   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:21:15.477    19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:15.477   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:21:15.477   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:21:15.477   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:21:15.477   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:21:15.477   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:21:15.477   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@460 -- # nvmf_veth_init
00:21:15.477   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:21:15.477   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:21:15.477   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:21:15.477   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:21:15.477   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:21:15.477   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:21:15.477   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:21:15.477   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:21:15.477   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:21:15.477   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:21:15.477   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:21:15.477   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:21:15.477   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:21:15.478   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:21:15.478   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:21:15.478   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:21:15.478   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:21:15.478  Cannot find device "nvmf_init_br"
00:21:15.478   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # true
00:21:15.478   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:21:15.478  Cannot find device "nvmf_init_br2"
00:21:15.478   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true
00:21:15.478   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:21:15.478  Cannot find device "nvmf_tgt_br"
00:21:15.478   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true
00:21:15.478   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:21:15.478  Cannot find device "nvmf_tgt_br2"
00:21:15.478   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true
00:21:15.478   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:21:15.478  Cannot find device "nvmf_init_br"
00:21:15.478   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true
00:21:15.478   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:21:15.478  Cannot find device "nvmf_init_br2"
00:21:15.478   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true
00:21:15.478   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:21:15.478  Cannot find device "nvmf_tgt_br"
00:21:15.478   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true
00:21:15.478   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:21:15.478  Cannot find device "nvmf_tgt_br2"
00:21:15.478   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true
00:21:15.478   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:21:15.478  Cannot find device "nvmf_br"
00:21:15.478   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true
00:21:15.478   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:21:15.737  Cannot find device "nvmf_init_if"
00:21:15.737   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true
00:21:15.737   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:21:15.737  Cannot find device "nvmf_init_if2"
00:21:15.737   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true
00:21:15.737   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:21:15.737  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:21:15.737   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true
00:21:15.737   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:21:15.737  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:21:15.737   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true
00:21:15.737   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:21:15.737   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:21:15.737   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:21:15.737   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:21:15.737   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:21:15.737   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:21:15.737   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:21:15.737   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:21:15.737   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:21:15.737   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:21:15.737   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:21:15.737   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:21:15.737   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:21:15.738   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:21:15.738   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:21:15.738   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:21:15.738   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:21:15.738   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:21:15.738   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:21:15.738   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:21:15.738   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:21:15.738   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:21:15.738   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:21:15.738   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:21:15.738   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:21:15.738   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:21:15.738   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:21:15.738   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:21:15.738   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:21:15.738   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:21:15.738   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:21:15.738   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:21:15.738   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:21:15.738  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:21:15.738  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms
00:21:15.738  
00:21:15.738  --- 10.0.0.3 ping statistics ---
00:21:15.738  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:15.738  rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms
00:21:15.738   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:21:15.738  PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:21:15.738  64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms
00:21:15.738  
00:21:15.738  --- 10.0.0.4 ping statistics ---
00:21:15.738  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:15.738  rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms
00:21:15.738   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:21:15.738  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:21:15.738  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms
00:21:15.738  
00:21:15.738  --- 10.0.0.1 ping statistics ---
00:21:15.738  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:15.738  rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms
00:21:15.738   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:21:15.738  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:21:15.738  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms
00:21:15.738  
00:21:15.738  --- 10.0.0.2 ping statistics ---
00:21:15.738  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:15.738  rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms
00:21:15.738   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:21:15.738   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@461 -- # return 0
00:21:15.738   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:21:15.738   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:21:15.738   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:21:15.738   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:21:15.738   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:21:15.738   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:21:15.738   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:21:15.997   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc
00:21:15.997   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:21:15.997   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable
00:21:15.997   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:21:15.997   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=101530
00:21:15.997   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc
00:21:15.997   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 101530
00:21:15.997   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 101530 ']'
00:21:15.997   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:15.997   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:21:15.997  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:15.997   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:15.997   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:21:15.997   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:21:15.997  [2024-12-13 19:05:47.646916] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:21:15.997  [2024-12-13 19:05:47.647002] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:21:15.997  [2024-12-13 19:05:47.802149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:16.256  [2024-12-13 19:05:47.837943] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:21:16.256  [2024-12-13 19:05:47.838002] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:21:16.256  [2024-12-13 19:05:47.838016] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:21:16.256  [2024-12-13 19:05:47.838027] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:21:16.256  [2024-12-13 19:05:47.838036] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:21:16.256  [2024-12-13 19:05:47.838468] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:21:16.256   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:21:16.256   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:21:16.256   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:21:16.256   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable
00:21:16.256   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:21:16.256   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:21:16.256   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']'
00:21:16.256   19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl
00:21:16.515  true
00:21:16.515    19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl
00:21:16.515    19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version
00:21:16.774   19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0
00:21:16.774   19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]]
00:21:16.774   19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13
00:21:17.033    19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl
00:21:17.033    19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version
00:21:17.291   19:05:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13
00:21:17.292   19:05:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]]
00:21:17.292   19:05:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7
00:21:17.551    19:05:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl
00:21:17.551    19:05:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version
00:21:17.810   19:05:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7
00:21:17.810   19:05:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]]
00:21:17.810    19:05:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls
00:21:17.810    19:05:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl
00:21:18.378   19:05:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false
00:21:18.378   19:05:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]]
00:21:18.378   19:05:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls
00:21:18.378    19:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl
00:21:18.378    19:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls
00:21:18.636   19:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true
00:21:18.636   19:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]]
00:21:18.636   19:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls
00:21:18.895    19:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl
00:21:18.895    19:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls
00:21:19.155   19:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false
00:21:19.155   19:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]]
00:21:19.155    19:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1
00:21:19.155    19:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1
00:21:19.155    19:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest
00:21:19.155    19:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1
00:21:19.155    19:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff
00:21:19.155    19:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1
00:21:19.155    19:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python -
00:21:19.155   19:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
00:21:19.414    19:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1
00:21:19.414    19:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1
00:21:19.414    19:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest
00:21:19.414    19:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1
00:21:19.414    19:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100
00:21:19.414    19:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1
00:21:19.414    19:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python -
00:21:19.414   19:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y:
00:21:19.414    19:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp
00:21:19.414   19:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.yda1RV7tUn
00:21:19.414    19:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp
00:21:19.414   19:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.PUcrIFzmCb
00:21:19.414   19:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
00:21:19.414   19:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y:
00:21:19.414   19:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.yda1RV7tUn
00:21:19.414   19:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.PUcrIFzmCb
00:21:19.414   19:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13
00:21:19.673   19:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init
00:21:19.932   19:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.yda1RV7tUn
00:21:19.932   19:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.yda1RV7tUn
00:21:19.932   19:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:21:20.191  [2024-12-13 19:05:51.935491] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:21:20.191   19:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
00:21:20.450   19:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
00:21:20.709  [2024-12-13 19:05:52.375626] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:21:20.709  [2024-12-13 19:05:52.375849] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:21:20.709   19:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
00:21:20.968  malloc0
00:21:20.968   19:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:21:21.227   19:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.yda1RV7tUn
00:21:21.487   19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
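Target-side TLS setup is done here. The RPC sequence traced since 19:05:51, collected into one sketch; the key path /tmp/tmp.yda1RV7tUn and the NQNs are the ones generated by this run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc sock_impl_set_options -i ssl --tls-version 13             # require TLS 1.3 on the ssl sock impl
    $rpc framework_start_init                                      # target was started with --wait-for-rpc
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k   # -k = TLS listener
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc keyring_file_add_key key0 /tmp/tmp.yda1RV7tUn             # PSK file registered as "key0"
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0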
00:21:21.746   19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.yda1RV7tUn
00:21:33.957  Initializing NVMe Controllers
00:21:33.957  Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:21:33.957  Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:21:33.957  Initialization complete. Launching workers.
00:21:33.957  ========================================================
00:21:33.957                                                                                                               Latency(us)
00:21:33.957  Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:21:33.957  TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  0:   10992.24      42.94    5823.73    1882.84    8726.85
00:21:33.957  ========================================================
00:21:33.957  Total                                                                    :   10992.24      42.94    5823.73    1882.84    8726.85
00:21:33.957  
00:21:33.957   19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.yda1RV7tUn
00:21:33.957   19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:21:33.957   19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:21:33.957   19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:21:33.957   19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.yda1RV7tUn
00:21:33.957   19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:21:33.957   19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=101886
00:21:33.957   19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:21:33.957   19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:21:33.958   19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 101886 /var/tmp/bdevperf.sock
00:21:33.958   19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 101886 ']'
00:21:33.958   19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:21:33.958   19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:21:33.958  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:21:33.958   19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:21:33.958   19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:21:33.958   19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:21:33.958  [2024-12-13 19:06:03.654741] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:21:33.958  [2024-12-13 19:06:03.654846] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101886 ]
00:21:33.958  [2024-12-13 19:06:03.810524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:33.958  [2024-12-13 19:06:03.857196] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:21:33.958   19:06:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:21:33.958   19:06:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:21:33.958   19:06:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.yda1RV7tUn
00:21:33.958   19:06:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
00:21:33.958  [2024-12-13 19:06:05.178888] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:21:33.958  TLSTESTn1
00:21:33.958   19:06:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
00:21:33.958  Running I/O for 10 seconds...
00:21:35.833       4714.00 IOPS,    18.41 MiB/s
[2024-12-13T19:06:08.595Z]      4736.00 IOPS,    18.50 MiB/s
[2024-12-13T19:06:09.532Z]      4765.33 IOPS,    18.61 MiB/s
[2024-12-13T19:06:10.530Z]      4768.00 IOPS,    18.62 MiB/s
[2024-12-13T19:06:11.466Z]      4761.60 IOPS,    18.60 MiB/s
[2024-12-13T19:06:12.403Z]      4758.83 IOPS,    18.59 MiB/s
[2024-12-13T19:06:13.782Z]      4767.29 IOPS,    18.62 MiB/s
[2024-12-13T19:06:14.720Z]      4768.00 IOPS,    18.62 MiB/s
[2024-12-13T19:06:15.659Z]      4775.22 IOPS,    18.65 MiB/s
[2024-12-13T19:06:15.659Z]      4774.40 IOPS,    18.65 MiB/s
00:21:43.835                                                                                                  Latency(us)
00:21:43.835  
[2024-12-13T19:06:15.659Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:21:43.835  Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:21:43.835  	 Verification LBA range: start 0x0 length 0x2000
00:21:43.835  	 TLSTESTn1           :      10.02    4775.15      18.65       0.00     0.00   26756.31    6047.19   17635.14
00:21:43.835  
[2024-12-13T19:06:15.659Z]  ===================================================================================================================
00:21:43.835  
[2024-12-13T19:06:15.659Z]  Total                       :               4775.15      18.65       0.00     0.00   26756.31    6047.19   17635.14
00:21:43.835  {
00:21:43.835    "results": [
00:21:43.835      {
00:21:43.835        "job": "TLSTESTn1",
00:21:43.835        "core_mask": "0x4",
00:21:43.835        "workload": "verify",
00:21:43.835        "status": "finished",
00:21:43.835        "verify_range": {
00:21:43.835          "start": 0,
00:21:43.835          "length": 8192
00:21:43.835        },
00:21:43.835        "queue_depth": 128,
00:21:43.835        "io_size": 4096,
00:21:43.835        "runtime": 10.024821,
00:21:43.835        "iops": 4775.147606126832,
00:21:43.835        "mibps": 18.65292033643294,
00:21:43.835        "io_failed": 0,
00:21:43.835        "io_timeout": 0,
00:21:43.835        "avg_latency_us": 26756.310048806426,
00:21:43.835        "min_latency_us": 6047.185454545454,
00:21:43.835        "max_latency_us": 17635.14181818182
00:21:43.835      }
00:21:43.835    ],
00:21:43.835    "core_count": 1
00:21:43.835  }
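That completes the first positive case: roughly 4775 IOPS of 4 KiB verify I/O over the TLS connection for 10 seconds with no failed or timed-out I/O. The initiator side of the run, condensed into a sketch (socket path, key file and NQNs as used above; the test script waits for the RPC socket before issuing calls):

    spdk=/home/vagrant/spdk_repo/spdk
    sock=/var/tmp/bdevperf.sock
    $spdk/build/examples/bdevperf -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 &   # start idle (-z)
    $spdk/scripts/rpc.py -s "$sock" keyring_file_add_key key0 /tmp/tmp.yda1RV7tUn          # same PSK file as the target
    $spdk/scripts/rpc.py -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk key0                                            # creates bdev TLSTESTn1
    $spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s "$sock" perform_tests                # drive the verify workload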
00:21:43.835   19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:21:43.835   19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 101886
00:21:43.835   19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 101886 ']'
00:21:43.835   19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 101886
00:21:43.835    19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:21:43.835   19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:43.835    19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 101886
00:21:43.835   19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:21:43.835   19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:21:43.835  killing process with pid 101886
00:21:43.835   19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 101886'
00:21:43.835   19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 101886
00:21:43.835  Received shutdown signal, test time was about 10.000000 seconds
00:21:43.835  
00:21:43.835                                                                                                  Latency(us)
00:21:43.835  
[2024-12-13T19:06:15.659Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:21:43.835  
[2024-12-13T19:06:15.659Z]  ===================================================================================================================
00:21:43.835  
[2024-12-13T19:06:15.659Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:21:43.835   19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 101886
00:21:43.835   19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.PUcrIFzmCb
00:21:43.835   19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0
00:21:43.835   19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.PUcrIFzmCb
00:21:43.835   19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf
00:21:43.835   19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:43.835    19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf
00:21:43.835   19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:43.835   19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.PUcrIFzmCb
00:21:43.835   19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:21:43.835   19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:21:43.835   19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:21:43.835   19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.PUcrIFzmCb
00:21:43.835   19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:21:43.835   19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=102039
00:21:43.835   19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:21:43.835   19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 102039 /var/tmp/bdevperf.sock
00:21:43.835   19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 102039 ']'
00:21:43.835   19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:21:43.835   19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:21:43.835   19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:21:43.835  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:21:43.835   19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:21:43.835   19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:21:43.835   19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:21:44.095  [2024-12-13 19:06:15.691373] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:21:44.095  [2024-12-13 19:06:15.691478] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102039 ]
00:21:44.095  [2024-12-13 19:06:15.838689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:44.095  [2024-12-13 19:06:15.879402] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:21:44.354   19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:21:44.354   19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:21:44.354   19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.PUcrIFzmCb
00:21:44.613   19:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
00:21:44.872  [2024-12-13 19:06:16.529885] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:21:44.872  [2024-12-13 19:06:16.540018] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:21:44.872  [2024-12-13 19:06:16.540501] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c71fd0 (107): Transport endpoint is not connected
00:21:44.872  [2024-12-13 19:06:16.541493] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c71fd0 (9): Bad file descriptor
00:21:44.872  [2024-12-13 19:06:16.542491] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state
00:21:44.872  [2024-12-13 19:06:16.542515] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3
00:21:44.873  [2024-12-13 19:06:16.542525] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted
00:21:44.873  [2024-12-13 19:06:16.542534] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state.
00:21:44.873  2024/12/13 19:06:16 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error
00:21:44.873  request:
00:21:44.873  {
00:21:44.873    "method": "bdev_nvme_attach_controller",
00:21:44.873    "params": {
00:21:44.873      "name": "TLSTEST",
00:21:44.873      "trtype": "tcp",
00:21:44.873      "traddr": "10.0.0.3",
00:21:44.873      "adrfam": "ipv4",
00:21:44.873      "trsvcid": "4420",
00:21:44.873      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:21:44.873      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:21:44.873      "prchk_reftag": false,
00:21:44.873      "prchk_guard": false,
00:21:44.873      "hdgst": false,
00:21:44.873      "ddgst": false,
00:21:44.873      "psk": "key0",
00:21:44.873      "allow_unrecognized_csi": false
00:21:44.873    }
00:21:44.873  }
00:21:44.873  Got JSON-RPC error response
00:21:44.873  GoRPCClient: error on JSON-RPC call
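For reference, the errno values and the JSON-RPC error code in the failure above are plain Linux error numbers and can be decoded with nothing but the Python standard library; this is only a lookup for readability, not part of the test itself.

    import errno
    import os

    # Error numbers seen in the nvme_tcp / JSON-RPC lines above.
    for num in (107, 9, 5):
        print(num, errno.errorcode[num], "-", os.strerror(num))

    # Expected output on Linux:
    # 107 ENOTCONN - Transport endpoint is not connected
    # 9 EBADF - Bad file descriptor
    # 5 EIO - Input/output error   (the JSON-RPC "Code=-5" is -EIO)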
00:21:44.873   19:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 102039
00:21:44.873   19:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 102039 ']'
00:21:44.873   19:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 102039
00:21:44.873    19:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:21:44.873   19:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:44.873    19:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 102039
00:21:44.873  killing process with pid 102039
00:21:44.873  Received shutdown signal, test time was about 10.000000 seconds
00:21:44.873  
00:21:44.873                                                                                                  Latency(us)
00:21:44.873  
[2024-12-13T19:06:16.697Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:21:44.873  
[2024-12-13T19:06:16.697Z]  ===================================================================================================================
00:21:44.873  
[2024-12-13T19:06:16.697Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00 18446744073709551616.00       0.00
00:21:44.873   19:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:21:44.873   19:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:21:44.873   19:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102039'
00:21:44.873   19:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 102039
00:21:44.873   19:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 102039
00:21:45.132   19:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1
00:21:45.132   19:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1
00:21:45.132   19:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:21:45.132   19:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:21:45.132   19:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:21:45.132   19:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.yda1RV7tUn
00:21:45.132   19:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0
00:21:45.132   19:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.yda1RV7tUn
00:21:45.132   19:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf
00:21:45.132   19:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:45.132    19:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf
00:21:45.132   19:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:45.132   19:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.yda1RV7tUn
00:21:45.132   19:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:21:45.132   19:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:21:45.132   19:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2
00:21:45.133   19:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.yda1RV7tUn
00:21:45.133   19:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:21:45.133   19:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=102075
00:21:45.133   19:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:21:45.133   19:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:21:45.133   19:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 102075 /var/tmp/bdevperf.sock
00:21:45.133   19:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 102075 ']'
00:21:45.133   19:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:21:45.133  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:21:45.133   19:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:21:45.133   19:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:21:45.133   19:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:21:45.133   19:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:21:45.133  [2024-12-13 19:06:16.831707] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:21:45.133  [2024-12-13 19:06:16.831797] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102075 ]
00:21:45.392  [2024-12-13 19:06:16.975070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:45.392  [2024-12-13 19:06:17.014210] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:21:46.329   19:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:21:46.329   19:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:21:46.329   19:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.yda1RV7tUn
00:21:46.329   19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0
00:21:46.588  [2024-12-13 19:06:18.237060] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:21:46.588  [2024-12-13 19:06:18.242138] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1
00:21:46.588  [2024-12-13 19:06:18.242185] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1
00:21:46.588  [2024-12-13 19:06:18.242228] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:21:46.588  [2024-12-13 19:06:18.242835] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x227bfd0 (107): Transport endpoint is not connected
00:21:46.588  [2024-12-13 19:06:18.243819] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x227bfd0 (9): Bad file descriptor
00:21:46.588  [2024-12-13 19:06:18.244815] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state
00:21:46.588  [2024-12-13 19:06:18.244831] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3
00:21:46.588  [2024-12-13 19:06:18.244856] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted
00:21:46.588  [2024-12-13 19:06:18.244865] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state.
00:21:46.588  2024/12/13 19:06:18 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error
00:21:46.588  request:
00:21:46.588  {
00:21:46.588    "method": "bdev_nvme_attach_controller",
00:21:46.588    "params": {
00:21:46.588      "name": "TLSTEST",
00:21:46.588      "trtype": "tcp",
00:21:46.588      "traddr": "10.0.0.3",
00:21:46.588      "adrfam": "ipv4",
00:21:46.588      "trsvcid": "4420",
00:21:46.588      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:21:46.588      "hostnqn": "nqn.2016-06.io.spdk:host2",
00:21:46.588      "prchk_reftag": false,
00:21:46.588      "prchk_guard": false,
00:21:46.588      "hdgst": false,
00:21:46.588      "ddgst": false,
00:21:46.588      "psk": "key0",
00:21:46.588      "allow_unrecognized_csi": false
00:21:46.588    }
00:21:46.588  }
00:21:46.588  Got JSON-RPC error response
00:21:46.588  GoRPCClient: error on JSON-RPC call
00:21:46.588   19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 102075
00:21:46.588   19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 102075 ']'
00:21:46.588   19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 102075
00:21:46.588    19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:21:46.588   19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:46.588    19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 102075
00:21:46.588   19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:21:46.588   19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:21:46.588  killing process with pid 102075
00:21:46.588   19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102075'
00:21:46.588   19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 102075
00:21:46.588  Received shutdown signal, test time was about 10.000000 seconds
00:21:46.588  
00:21:46.589                                                                                                  Latency(us)
00:21:46.589  
[2024-12-13T19:06:18.413Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:21:46.589  
[2024-12-13T19:06:18.413Z]  ===================================================================================================================
00:21:46.589  
[2024-12-13T19:06:18.413Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00 18446744073709551616.00       0.00
00:21:46.589   19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 102075
00:21:46.848   19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1
00:21:46.848   19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1
00:21:46.848   19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:21:46.848   19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:21:46.848   19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 ))
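The "Could not find PSK for identity" errors above show how the target builds the TLS PSK identity it looks up: a fixed "NVMe0R01" prefix followed by the host NQN and the subsystem NQN. Both mismatched pairings tried here (host2 against cnode1 above, host1 against cnode2 just below) fail because the target has no PSK registered for those combinations. A minimal sketch of the identity construction, assuming (per the NVMe/TCP TP 8006 conventions) that "0" marks TLS 1.3, "R" a retained PSK, and "01" the SHA-256 hash variant:

    def psk_identity(hostnqn: str, subnqn: str, hash_suffix: str = "01") -> str:
        # Mirrors the identity string printed by the tcp_sock_get_key error above.
        return f"NVMe0R{hash_suffix} {hostnqn} {subnqn}"

    print(psk_identity("nqn.2016-06.io.spdk:host2", "nqn.2016-06.io.spdk:cnode1"))
    # -> NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1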
00:21:46.848   19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.yda1RV7tUn
00:21:46.848   19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0
00:21:46.848   19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.yda1RV7tUn
00:21:46.848   19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf
00:21:46.848   19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:46.848    19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf
00:21:46.848   19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:46.848   19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.yda1RV7tUn
00:21:46.848   19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:21:46.848   19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2
00:21:46.848   19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:21:46.848   19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.yda1RV7tUn
00:21:46.848   19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:21:46.848   19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=102132
00:21:46.848   19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:21:46.848   19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:21:46.848   19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 102132 /var/tmp/bdevperf.sock
00:21:46.848   19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 102132 ']'
00:21:46.848   19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:21:46.848   19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:21:46.848  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:21:46.848   19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:21:46.848   19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:21:46.848   19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:21:46.848  [2024-12-13 19:06:18.519143] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:21:46.848  [2024-12-13 19:06:18.519248] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102132 ]
00:21:46.848  [2024-12-13 19:06:18.659856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:47.107  [2024-12-13 19:06:18.692207] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:21:47.107   19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:21:47.107   19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:21:47.107   19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.yda1RV7tUn
00:21:47.366   19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0
00:21:47.625  [2024-12-13 19:06:19.311049] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:21:47.625  [2024-12-13 19:06:19.320469] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2
00:21:47.625  [2024-12-13 19:06:19.320506] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2
00:21:47.625  [2024-12-13 19:06:19.320553] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:21:47.625  [2024-12-13 19:06:19.320815] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x162dfd0 (107): Transport endpoint is not connected
00:21:47.625  [2024-12-13 19:06:19.321806] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x162dfd0 (9): Bad file descriptor
00:21:47.625  [2024-12-13 19:06:19.322807] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state
00:21:47.625  [2024-12-13 19:06:19.322823] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3
00:21:47.625  [2024-12-13 19:06:19.322832] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted
00:21:47.625  [2024-12-13 19:06:19.322843] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state.
00:21:47.626  2024/12/13 19:06:19 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error
00:21:47.626  request:
00:21:47.626  {
00:21:47.626    "method": "bdev_nvme_attach_controller",
00:21:47.626    "params": {
00:21:47.626      "name": "TLSTEST",
00:21:47.626      "trtype": "tcp",
00:21:47.626      "traddr": "10.0.0.3",
00:21:47.626      "adrfam": "ipv4",
00:21:47.626      "trsvcid": "4420",
00:21:47.626      "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:21:47.626      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:21:47.626      "prchk_reftag": false,
00:21:47.626      "prchk_guard": false,
00:21:47.626      "hdgst": false,
00:21:47.626      "ddgst": false,
00:21:47.626      "psk": "key0",
00:21:47.626      "allow_unrecognized_csi": false
00:21:47.626    }
00:21:47.626  }
00:21:47.626  Got JSON-RPC error response
00:21:47.626  GoRPCClient: error on JSON-RPC call
00:21:47.626   19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 102132
00:21:47.626   19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 102132 ']'
00:21:47.626   19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 102132
00:21:47.626    19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:21:47.626   19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:47.626    19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 102132
00:21:47.626   19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:21:47.626   19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:21:47.626  killing process with pid 102132
00:21:47.626   19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102132'
00:21:47.626   19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 102132
00:21:47.626  Received shutdown signal, test time was about 10.000000 seconds
00:21:47.626  
00:21:47.626                                                                                                  Latency(us)
00:21:47.626  
[2024-12-13T19:06:19.450Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:21:47.626  
[2024-12-13T19:06:19.450Z]  ===================================================================================================================
00:21:47.626  
[2024-12-13T19:06:19.450Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00 18446744073709551616.00       0.00
00:21:47.626   19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 102132
00:21:47.885   19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1
00:21:47.885   19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1
00:21:47.885   19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:21:47.885   19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:21:47.885   19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:21:47.885   19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ''
00:21:47.885   19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0
00:21:47.885   19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ''
00:21:47.885   19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf
00:21:47.885   19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:47.885    19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf
00:21:47.885   19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:47.885   19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ''
00:21:47.885   19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:21:47.885   19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:21:47.885   19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:21:47.885   19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=
00:21:47.885   19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:21:47.885   19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=102171
00:21:47.885   19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:21:47.885   19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:21:47.885   19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 102171 /var/tmp/bdevperf.sock
00:21:47.885   19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 102171 ']'
00:21:47.885   19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:21:47.885   19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:21:47.885  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:21:47.885   19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:21:47.885   19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:21:47.885   19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:21:47.885  [2024-12-13 19:06:19.609060] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:21:47.885  [2024-12-13 19:06:19.609167] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102171 ]
00:21:48.144  [2024-12-13 19:06:19.757340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:48.144  [2024-12-13 19:06:19.794178] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:21:48.714   19:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:21:48.714   19:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:21:48.714   19:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 ''
00:21:48.973  [2024-12-13 19:06:20.717873] keyring.c:  24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 
00:21:48.973  [2024-12-13 19:06:20.717911] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring
00:21:48.973  2024/12/13 19:06:20 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted
00:21:48.973  request:
00:21:48.973  {
00:21:48.973    "method": "keyring_file_add_key",
00:21:48.973    "params": {
00:21:48.973      "name": "key0",
00:21:48.973      "path": ""
00:21:48.973    }
00:21:48.973  }
00:21:48.973  Got JSON-RPC error response
00:21:48.973  GoRPCClient: error on JSON-RPC call
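The keyring rejects the empty string before anything touches the filesystem: key paths have to be absolute, as the "Non-absolute paths are not allowed" error shows. A sketch of just that path-shape precondition (the real helper also validates file permissions, as a later test in this run shows):

    import os

    def check_key_path(path: str) -> None:
        # Mirrors keyring_file_check_path's complaint in the log above.
        if not os.path.isabs(path):
            raise ValueError(f"Non-absolute paths are not allowed: {path}")

    check_key_path("/tmp/tmp.JAqPPeMAgY")   # an absolute path passes this particular check
    check_key_path("")                      # raises, like the keyring_file_add_key call above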
00:21:48.973   19:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
00:21:49.233  [2024-12-13 19:06:20.946067] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:21:49.233  [2024-12-13 19:06:20.946149] bdev_nvme.c:6754:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0
00:21:49.233  2024/12/13 19:06:20 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-126 Msg=Required key not available
00:21:49.233  request:
00:21:49.233  {
00:21:49.233    "method": "bdev_nvme_attach_controller",
00:21:49.233    "params": {
00:21:49.233      "name": "TLSTEST",
00:21:49.233      "trtype": "tcp",
00:21:49.233      "traddr": "10.0.0.3",
00:21:49.233      "adrfam": "ipv4",
00:21:49.233      "trsvcid": "4420",
00:21:49.233      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:21:49.233      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:21:49.233      "prchk_reftag": false,
00:21:49.233      "prchk_guard": false,
00:21:49.233      "hdgst": false,
00:21:49.233      "ddgst": false,
00:21:49.233      "psk": "key0",
00:21:49.233      "allow_unrecognized_csi": false
00:21:49.233    }
00:21:49.233  }
00:21:49.233  Got JSON-RPC error response
00:21:49.233  GoRPCClient: error on JSON-RPC call
00:21:49.233   19:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 102171
00:21:49.233   19:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 102171 ']'
00:21:49.233   19:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 102171
00:21:49.233    19:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:21:49.233   19:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:49.233    19:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 102171
00:21:49.233   19:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:21:49.233  killing process with pid 102171
00:21:49.233   19:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:21:49.233   19:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102171'
00:21:49.233  Received shutdown signal, test time was about 10.000000 seconds
00:21:49.233  
00:21:49.233                                                                                                  Latency(us)
00:21:49.233  
[2024-12-13T19:06:21.057Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:21:49.233  
[2024-12-13T19:06:21.057Z]  ===================================================================================================================
00:21:49.233  
[2024-12-13T19:06:21.057Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00 18446744073709551616.00       0.00
00:21:49.233   19:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 102171
00:21:49.233   19:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 102171
00:21:49.491   19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1
00:21:49.491   19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1
00:21:49.491   19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:21:49.491   19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:21:49.491   19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:21:49.491   19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 101530
00:21:49.491   19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 101530 ']'
00:21:49.491   19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 101530
00:21:49.491    19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:21:49.491   19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:49.491    19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 101530
00:21:49.491   19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:21:49.491   19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:21:49.491  killing process with pid 101530
00:21:49.491   19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 101530'
00:21:49.491   19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 101530
00:21:49.491   19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 101530
00:21:49.751    19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2
00:21:49.751    19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2
00:21:49.751    19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest
00:21:49.751    19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1
00:21:49.751    19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677
00:21:49.751    19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2
00:21:49.751    19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python -
00:21:49.751   19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:
00:21:49.751    19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp
00:21:49.751   19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.JAqPPeMAgY
00:21:49.751   19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:
00:21:49.751   19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.JAqPPeMAgY
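The NVMeTLSkey-1:02:... string generated above is the TLS PSK in NVMe interchange format. A short sketch of how that value can be reproduced, under the assumption that the format is base64 of the configured key bytes with their CRC-32 appended in little-endian order, and that the "02" field selects SHA-384 for PSK derivation:

    import base64
    import zlib

    # The 48-character hex string is used verbatim as the key bytes here, matching the log.
    key = b"00112233445566778899aabbccddeeff0011223344556677"
    digest = 2  # assumed: 1 = SHA-256, 2 = SHA-384

    crc = zlib.crc32(key).to_bytes(4, byteorder="little")
    print(f"NVMeTLSkey-1:{digest:02x}:{base64.b64encode(key + crc).decode()}:")
    # Should print the same NVMeTLSkey-1:02:... value captured as key_long above.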
00:21:49.751   19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2
00:21:49.751   19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:21:49.751   19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable
00:21:49.751   19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:21:49.751   19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=102234
00:21:49.751   19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 102234
00:21:49.751   19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 102234 ']'
00:21:49.751   19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:21:49.751   19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:49.751   19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:21:49.751  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:49.751   19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:49.751   19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:21:49.751   19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:21:49.751  [2024-12-13 19:06:21.507616] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:21:49.751  [2024-12-13 19:06:21.507732] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:21:50.010  [2024-12-13 19:06:21.650396] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:50.010  [2024-12-13 19:06:21.691525] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:21:50.010  [2024-12-13 19:06:21.691602] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:21:50.010  [2024-12-13 19:06:21.691627] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:21:50.010  [2024-12-13 19:06:21.691635] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:21:50.010  [2024-12-13 19:06:21.691641] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:21:50.010  [2024-12-13 19:06:21.692050] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:21:50.947   19:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:21:50.947   19:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:21:50.947   19:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:21:50.947   19:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable
00:21:50.947   19:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:21:50.947   19:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:21:50.947   19:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.JAqPPeMAgY
00:21:50.947   19:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.JAqPPeMAgY
00:21:50.947   19:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:21:50.947  [2024-12-13 19:06:22.744571] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:21:50.947   19:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
00:21:51.514   19:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
00:21:51.514  [2024-12-13 19:06:23.248670] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:21:51.514  [2024-12-13 19:06:23.248971] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:21:51.514   19:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
00:21:51.773  malloc0
00:21:51.773   19:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:21:52.032   19:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.JAqPPeMAgY
00:21:52.292   19:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
00:21:52.551   19:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JAqPPeMAgY
00:21:52.551   19:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:21:52.551   19:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:21:52.551   19:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:21:52.551   19:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.JAqPPeMAgY
00:21:52.551   19:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:21:52.551   19:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=102349
00:21:52.551   19:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:21:52.551   19:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:21:52.551   19:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 102349 /var/tmp/bdevperf.sock
00:21:52.551   19:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 102349 ']'
00:21:52.551   19:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:21:52.551   19:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:21:52.551  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:21:52.551   19:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:21:52.551   19:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:21:52.551   19:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:21:52.551  [2024-12-13 19:06:24.261451] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:21:52.551  [2024-12-13 19:06:24.262034] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102349 ]
00:21:52.810  [2024-12-13 19:06:24.408755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:52.810  [2024-12-13 19:06:24.446735] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:21:52.810   19:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:21:52.810   19:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:21:52.810   19:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.JAqPPeMAgY
00:21:53.083   19:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
00:21:53.381  [2024-12-13 19:06:25.134218] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:21:53.651  TLSTESTn1
00:21:53.651   19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
00:21:53.651  Running I/O for 10 seconds...
00:21:55.526       4992.00 IOPS,    19.50 MiB/s
[2024-12-13T19:06:28.729Z]      4992.00 IOPS,    19.50 MiB/s
[2024-12-13T19:06:29.666Z]      4997.33 IOPS,    19.52 MiB/s
[2024-12-13T19:06:30.605Z]      5007.75 IOPS,    19.56 MiB/s
[2024-12-13T19:06:31.542Z]      5004.40 IOPS,    19.55 MiB/s
[2024-12-13T19:06:32.480Z]      5011.17 IOPS,    19.57 MiB/s
[2024-12-13T19:06:33.416Z]      5029.29 IOPS,    19.65 MiB/s
[2024-12-13T19:06:34.352Z]      5032.50 IOPS,    19.66 MiB/s
[2024-12-13T19:06:35.729Z]      5033.78 IOPS,    19.66 MiB/s
[2024-12-13T19:06:35.729Z]      5038.30 IOPS,    19.68 MiB/s
00:22:03.905                                                                                                  Latency(us)
00:22:03.905  
[2024-12-13T19:06:35.729Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:22:03.905  Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:22:03.905  	 Verification LBA range: start 0x0 length 0x2000
00:22:03.905  	 TLSTESTn1           :      10.01    5044.27      19.70       0.00     0.00   25332.25    4408.79   19660.80
00:22:03.905  
[2024-12-13T19:06:35.729Z]  ===================================================================================================================
00:22:03.905  
[2024-12-13T19:06:35.729Z]  Total                       :               5044.27      19.70       0.00     0.00   25332.25    4408.79   19660.80
00:22:03.905  {
00:22:03.905    "results": [
00:22:03.905      {
00:22:03.905        "job": "TLSTESTn1",
00:22:03.905        "core_mask": "0x4",
00:22:03.905        "workload": "verify",
00:22:03.905        "status": "finished",
00:22:03.905        "verify_range": {
00:22:03.905          "start": 0,
00:22:03.905          "length": 8192
00:22:03.905        },
00:22:03.905        "queue_depth": 128,
00:22:03.905        "io_size": 4096,
00:22:03.905        "runtime": 10.013336,
00:22:03.905        "iops": 5044.272957583767,
00:22:03.905        "mibps": 19.704191240561588,
00:22:03.905        "io_failed": 0,
00:22:03.905        "io_timeout": 0,
00:22:03.905        "avg_latency_us": 25332.251599215277,
00:22:03.905        "min_latency_us": 4408.785454545455,
00:22:03.905        "max_latency_us": 19660.8
00:22:03.905      }
00:22:03.905    ],
00:22:03.905    "core_count": 1
00:22:03.905  }
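The summary numbers in the results block above are internally consistent: with 4 KiB I/Os, the MiB/s column is just IOPS scaled by the I/O size, and with 128 I/Os kept in flight the average latency follows from Little's law to within a fraction of a percent.

    # Consistency check on the TLSTESTn1 results above (values copied from the JSON block).
    iops = 5044.272957583767
    io_size = 4096        # -o 4096
    queue_depth = 128     # -q 128

    mibps = iops * io_size / (1024 * 1024)
    avg_latency_us = queue_depth / iops * 1e6   # Little's law: concurrency = rate * latency

    print(f"{mibps:.2f} MiB/s")        # ~19.70, matching the table
    print(f"{avg_latency_us:.0f} us")  # ~25375 us, close to the reported 25332 us average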
00:22:03.905   19:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:22:03.905   19:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 102349
00:22:03.905   19:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 102349 ']'
00:22:03.905   19:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 102349
00:22:03.905    19:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:22:03.905   19:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:03.905    19:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 102349
00:22:03.905   19:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:22:03.905   19:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:22:03.905  killing process with pid 102349
00:22:03.905   19:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102349'
00:22:03.905  Received shutdown signal, test time was about 10.000000 seconds
00:22:03.905  
00:22:03.905                                                                                                  Latency(us)
00:22:03.905  
[2024-12-13T19:06:35.729Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:22:03.905  
[2024-12-13T19:06:35.729Z]  ===================================================================================================================
00:22:03.905  
[2024-12-13T19:06:35.729Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:22:03.905   19:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 102349
00:22:03.905   19:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 102349
00:22:03.905   19:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.JAqPPeMAgY
00:22:03.905   19:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JAqPPeMAgY
00:22:03.905   19:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0
00:22:03.905   19:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JAqPPeMAgY
00:22:03.905   19:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf
00:22:03.905   19:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:03.905    19:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf
00:22:03.905   19:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:03.905   19:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JAqPPeMAgY
00:22:03.905   19:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:22:03.905   19:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:22:03.905   19:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:22:03.905   19:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.JAqPPeMAgY
00:22:03.905   19:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:22:03.905   19:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=102490
00:22:03.905   19:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:22:03.905   19:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:22:03.905   19:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 102490 /var/tmp/bdevperf.sock
00:22:03.905   19:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 102490 ']'
00:22:03.905   19:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:22:03.905   19:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:03.905  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:22:03.905   19:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:22:03.905   19:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:22:03.905   19:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:22:03.905  [2024-12-13 19:06:35.645332] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:22:03.905  [2024-12-13 19:06:35.645457] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102490 ]
00:22:04.163  [2024-12-13 19:06:35.793770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:04.163  [2024-12-13 19:06:35.828504] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:22:04.731   19:06:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:22:04.731   19:06:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:22:04.731   19:06:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.JAqPPeMAgY
00:22:04.990  [2024-12-13 19:06:36.792077] keyring.c:  36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.JAqPPeMAgY': 0100666
00:22:04.990  [2024-12-13 19:06:36.792129] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring
00:22:04.990  2024/12/13 19:06:36 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.JAqPPeMAgY], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted
00:22:04.990  request:
00:22:04.990  {
00:22:04.990    "method": "keyring_file_add_key",
00:22:04.990    "params": {
00:22:04.990      "name": "key0",
00:22:04.990      "path": "/tmp/tmp.JAqPPeMAgY"
00:22:04.990    }
00:22:04.990  }
00:22:04.990  Got JSON-RPC error response
00:22:04.990  GoRPCClient: error on JSON-RPC call
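This last negative case flips the key file from 0600 to 0666 and exercises the other half of the keyring's path validation: a key file readable by group or others is refused outright. A sketch of that kind of permission gate (assumption: any group/other access bit is enough to reject the file, which is consistent with 0600 being accepted earlier and 0666 being rejected here):

    import os
    import stat

    def check_key_permissions(path: str) -> None:
        mode = os.stat(path).st_mode
        if mode & (stat.S_IRWXG | stat.S_IRWXO):
            raise PermissionError(
                f"Invalid permissions for key file '{path}': 0{mode:o}"
            )

    # chmod 0600 passes this gate; chmod 0666 fails it, as in the run above.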
00:22:04.990   19:06:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
00:22:05.558  [2024-12-13 19:06:37.080213] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:22:05.558  [2024-12-13 19:06:37.080309] bdev_nvme.c:6754:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0
00:22:05.558  2024/12/13 19:06:37 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-126 Msg=Required key not available
00:22:05.558  request:
00:22:05.558  {
00:22:05.558    "method": "bdev_nvme_attach_controller",
00:22:05.558    "params": {
00:22:05.558      "name": "TLSTEST",
00:22:05.558      "trtype": "tcp",
00:22:05.558      "traddr": "10.0.0.3",
00:22:05.558      "adrfam": "ipv4",
00:22:05.558      "trsvcid": "4420",
00:22:05.558      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:22:05.558      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:22:05.558      "prchk_reftag": false,
00:22:05.558      "prchk_guard": false,
00:22:05.558      "hdgst": false,
00:22:05.558      "ddgst": false,
00:22:05.558      "psk": "key0",
00:22:05.558      "allow_unrecognized_csi": false
00:22:05.558    }
00:22:05.558  }
00:22:05.558  Got JSON-RPC error response
00:22:05.558  GoRPCClient: error on JSON-RPC call
00:22:05.558   19:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 102490
00:22:05.559   19:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 102490 ']'
00:22:05.559   19:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 102490
00:22:05.559    19:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:22:05.559   19:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:05.559    19:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 102490
00:22:05.559   19:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:22:05.559   19:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:22:05.559  killing process with pid 102490
00:22:05.559   19:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102490'
00:22:05.559   19:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 102490
00:22:05.559  Received shutdown signal, test time was about 10.000000 seconds
00:22:05.559                                                                                                  Latency(us)
[2024-12-13T19:06:37.383Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
[2024-12-13T19:06:37.383Z]  ===================================================================================================================
[2024-12-13T19:06:37.383Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00 18446744073709551616.00       0.00
00:22:05.559   19:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 102490
00:22:05.559   19:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1
00:22:05.559   19:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1
00:22:05.559   19:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:22:05.559   19:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:22:05.559   19:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:22:05.559   19:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 102234
00:22:05.559   19:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 102234 ']'
00:22:05.559   19:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 102234
00:22:05.559    19:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:22:05.559   19:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:05.559    19:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 102234
00:22:05.559   19:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:22:05.559   19:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:22:05.559  killing process with pid 102234
00:22:05.559   19:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102234'
00:22:05.559   19:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 102234
00:22:05.559   19:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 102234
00:22:05.818   19:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2
00:22:05.818   19:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:22:05.818   19:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable
00:22:05.818   19:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:22:05.818   19:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=102553
00:22:05.818   19:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:22:05.818   19:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 102553
00:22:05.818   19:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 102553 ']'
00:22:05.818   19:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:05.818   19:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:05.818  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:22:05.818   19:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:22:05.818   19:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:22:05.818   19:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:22:05.818  [2024-12-13 19:06:37.587707] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:22:05.818  [2024-12-13 19:06:37.587811] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:22:06.077  [2024-12-13 19:06:37.722618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:06.077  [2024-12-13 19:06:37.753107] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:22:06.077  [2024-12-13 19:06:37.753184] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:22:06.077  [2024-12-13 19:06:37.753195] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:22:06.077  [2024-12-13 19:06:37.753203] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:22:06.077  [2024-12-13 19:06:37.753209] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:22:06.077  [2024-12-13 19:06:37.753545] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:22:06.077   19:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:22:06.077   19:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:22:06.077   19:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:22:06.077   19:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable
00:22:06.077   19:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:22:06.336   19:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:22:06.336   19:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.JAqPPeMAgY
00:22:06.336   19:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0
00:22:06.336   19:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.JAqPPeMAgY
00:22:06.336   19:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt
00:22:06.336   19:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:06.336    19:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt
00:22:06.336   19:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:06.336   19:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.JAqPPeMAgY
00:22:06.336   19:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.JAqPPeMAgY
00:22:06.336   19:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:22:06.595  [2024-12-13 19:06:38.193662] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:22:06.595   19:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
00:22:06.854   19:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
00:22:07.113  [2024-12-13 19:06:38.689743] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:22:07.113  [2024-12-13 19:06:38.689962] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:22:07.113   19:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
00:22:07.113  malloc0
00:22:07.113   19:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:22:07.372   19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.JAqPPeMAgY
00:22:07.630  [2024-12-13 19:06:39.384848] keyring.c:  36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.JAqPPeMAgY': 0100666
00:22:07.630  [2024-12-13 19:06:39.384886] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring
00:22:07.630  2024/12/13 19:06:39 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.JAqPPeMAgY], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted
00:22:07.630  request:
00:22:07.630  {
00:22:07.630    "method": "keyring_file_add_key",
00:22:07.630    "params": {
00:22:07.630      "name": "key0",
00:22:07.631      "path": "/tmp/tmp.JAqPPeMAgY"
00:22:07.631    }
00:22:07.631  }
00:22:07.631  Got JSON-RPC error response
00:22:07.631  GoRPCClient: error on JSON-RPC call
00:22:07.631   19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
00:22:07.889  [2024-12-13 19:06:39.616916] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist
00:22:07.889  [2024-12-13 19:06:39.616986] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport
00:22:07.889  2024/12/13 19:06:39 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:key0], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error
00:22:07.889  request:
00:22:07.889  {
00:22:07.890    "method": "nvmf_subsystem_add_host",
00:22:07.890    "params": {
00:22:07.890      "nqn": "nqn.2016-06.io.spdk:cnode1",
00:22:07.890      "host": "nqn.2016-06.io.spdk:host1",
00:22:07.890      "psk": "key0"
00:22:07.890    }
00:22:07.890  }
00:22:07.890  Got JSON-RPC error response
00:22:07.890  GoRPCClient: error on JSON-RPC call
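(Annotation: target/tls.sh@178 wraps setup_nvmf_tgt in NOT, i.e. it expects the helper to fail while the key file still has mode 0666. The helper's sequence, reconstructed from the rpc.py calls logged above (rpc.py = /home/vagrant/spdk_repo/spdk/scripts/rpc.py, default socket /var/tmp/spdk.sock):

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k   # -k requests the TLS-secured listener (triggers the "TLS support is considered experimental" notice; secure_channel=true in the saved config below)
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py keyring_file_add_key key0 /tmp/tmp.JAqPPeMAgY        # rejected: key file mode is still 0666
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0   # fails: "Key 'key0' does not exist"

Only the last two steps fail, which makes setup_nvmf_tgt return non-zero and satisfies the NOT assertion.)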
00:22:07.890   19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1
00:22:07.890   19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:22:07.890   19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:22:07.890   19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:22:07.890   19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 102553
00:22:07.890   19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 102553 ']'
00:22:07.890   19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 102553
00:22:07.890    19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:22:07.890   19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:07.890    19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 102553
00:22:07.890   19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:22:07.890   19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:22:07.890  killing process with pid 102553
00:22:07.890   19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102553'
00:22:07.890   19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 102553
00:22:07.890   19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 102553
00:22:08.149   19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.JAqPPeMAgY
00:22:08.149   19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2
00:22:08.149   19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:22:08.149   19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable
00:22:08.149   19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:22:08.149   19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=102658
00:22:08.149   19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:22:08.149   19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 102658
00:22:08.149   19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 102658 ']'
00:22:08.149   19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:08.149   19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:08.149   19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:22:08.149  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:22:08.149   19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:22:08.149   19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:22:08.149  [2024-12-13 19:06:39.934583] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:22:08.149  [2024-12-13 19:06:39.934697] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:22:08.408  [2024-12-13 19:06:40.081957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:08.408  [2024-12-13 19:06:40.113736] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:22:08.408  [2024-12-13 19:06:40.113813] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:22:08.408  [2024-12-13 19:06:40.113840] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:22:08.408  [2024-12-13 19:06:40.113847] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:22:08.408  [2024-12-13 19:06:40.113854] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:22:08.408  [2024-12-13 19:06:40.114296] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:22:08.666   19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:22:08.667   19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:22:08.667   19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:22:08.667   19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable
00:22:08.667   19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:22:08.667   19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:22:08.667   19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.JAqPPeMAgY
00:22:08.667   19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.JAqPPeMAgY
00:22:08.667   19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:22:08.925  [2024-12-13 19:06:40.554735] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:22:08.925   19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
00:22:09.184   19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
00:22:09.443  [2024-12-13 19:06:41.050847] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:22:09.443  [2024-12-13 19:06:41.051095] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:22:09.443   19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
00:22:09.701  malloc0
00:22:09.701   19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:22:09.960   19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.JAqPPeMAgY
00:22:10.218   19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
00:22:10.478   19:06:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=102753
00:22:10.478   19:06:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:22:10.478   19:06:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:22:10.478   19:06:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 102753 /var/tmp/bdevperf.sock
00:22:10.478   19:06:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 102753 ']'
00:22:10.478   19:06:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:22:10.478   19:06:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:10.478  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:22:10.478   19:06:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:22:10.478   19:06:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:22:10.478   19:06:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:22:10.478  [2024-12-13 19:06:42.215659] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:22:10.478  [2024-12-13 19:06:42.215776] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102753 ]
00:22:10.752  [2024-12-13 19:06:42.360645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:10.752  [2024-12-13 19:06:42.393193] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:22:10.752   19:06:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:22:10.752   19:06:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:22:10.752   19:06:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.JAqPPeMAgY
00:22:11.027   19:06:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
00:22:11.286  [2024-12-13 19:06:43.006929] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:22:11.286  TLSTESTn1
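(Annotation: with the key file now chmod'ed to 0600 and key0 registered on both sides, bdev_nvme_attach_controller succeeds and creates the NVMe bdev TLSTESTn1 over the TLS-secured TCP connection; the save_config calls that follow capture the resulting state of the target and of bdevperf. A verification sketch, shown purely as an illustration and not part of this run (bdev_get_bdevs is the standard rpc.py bdev listing method):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b TLSTESTn1
)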
00:22:11.286    19:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config
00:22:11.855   19:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{
00:22:11.855    "subsystems": [
00:22:11.855      {
00:22:11.855        "subsystem": "keyring",
00:22:11.855        "config": [
00:22:11.855          {
00:22:11.855            "method": "keyring_file_add_key",
00:22:11.855            "params": {
00:22:11.855              "name": "key0",
00:22:11.855              "path": "/tmp/tmp.JAqPPeMAgY"
00:22:11.855            }
00:22:11.855          }
00:22:11.855        ]
00:22:11.855      },
00:22:11.855      {
00:22:11.855        "subsystem": "iobuf",
00:22:11.855        "config": [
00:22:11.855          {
00:22:11.855            "method": "iobuf_set_options",
00:22:11.855            "params": {
00:22:11.855              "enable_numa": false,
00:22:11.855              "large_bufsize": 135168,
00:22:11.855              "large_pool_count": 1024,
00:22:11.855              "small_bufsize": 8192,
00:22:11.855              "small_pool_count": 8192
00:22:11.855            }
00:22:11.855          }
00:22:11.855        ]
00:22:11.855      },
00:22:11.855      {
00:22:11.855        "subsystem": "sock",
00:22:11.855        "config": [
00:22:11.855          {
00:22:11.855            "method": "sock_set_default_impl",
00:22:11.855            "params": {
00:22:11.855              "impl_name": "posix"
00:22:11.855            }
00:22:11.855          },
00:22:11.855          {
00:22:11.855            "method": "sock_impl_set_options",
00:22:11.855            "params": {
00:22:11.855              "enable_ktls": false,
00:22:11.855              "enable_placement_id": 0,
00:22:11.855              "enable_quickack": false,
00:22:11.855              "enable_recv_pipe": true,
00:22:11.855              "enable_zerocopy_send_client": false,
00:22:11.855              "enable_zerocopy_send_server": true,
00:22:11.855              "impl_name": "ssl",
00:22:11.855              "recv_buf_size": 4096,
00:22:11.855              "send_buf_size": 4096,
00:22:11.855              "tls_version": 0,
00:22:11.855              "zerocopy_threshold": 0
00:22:11.855            }
00:22:11.855          },
00:22:11.855          {
00:22:11.855            "method": "sock_impl_set_options",
00:22:11.855            "params": {
00:22:11.855              "enable_ktls": false,
00:22:11.855              "enable_placement_id": 0,
00:22:11.855              "enable_quickack": false,
00:22:11.855              "enable_recv_pipe": true,
00:22:11.855              "enable_zerocopy_send_client": false,
00:22:11.855              "enable_zerocopy_send_server": true,
00:22:11.855              "impl_name": "posix",
00:22:11.855              "recv_buf_size": 2097152,
00:22:11.855              "send_buf_size": 2097152,
00:22:11.855              "tls_version": 0,
00:22:11.855              "zerocopy_threshold": 0
00:22:11.855            }
00:22:11.855          }
00:22:11.855        ]
00:22:11.855      },
00:22:11.855      {
00:22:11.855        "subsystem": "vmd",
00:22:11.855        "config": []
00:22:11.855      },
00:22:11.855      {
00:22:11.855        "subsystem": "accel",
00:22:11.855        "config": [
00:22:11.855          {
00:22:11.855            "method": "accel_set_options",
00:22:11.855            "params": {
00:22:11.855              "buf_count": 2048,
00:22:11.855              "large_cache_size": 16,
00:22:11.855              "sequence_count": 2048,
00:22:11.855              "small_cache_size": 128,
00:22:11.855              "task_count": 2048
00:22:11.855            }
00:22:11.855          }
00:22:11.855        ]
00:22:11.855      },
00:22:11.855      {
00:22:11.855        "subsystem": "bdev",
00:22:11.855        "config": [
00:22:11.855          {
00:22:11.855            "method": "bdev_set_options",
00:22:11.855            "params": {
00:22:11.855              "bdev_auto_examine": true,
00:22:11.855              "bdev_io_cache_size": 256,
00:22:11.855              "bdev_io_pool_size": 65535,
00:22:11.855              "iobuf_large_cache_size": 16,
00:22:11.855              "iobuf_small_cache_size": 128
00:22:11.855            }
00:22:11.855          },
00:22:11.855          {
00:22:11.855            "method": "bdev_raid_set_options",
00:22:11.855            "params": {
00:22:11.855              "process_max_bandwidth_mb_sec": 0,
00:22:11.855              "process_window_size_kb": 1024
00:22:11.856            }
00:22:11.856          },
00:22:11.856          {
00:22:11.856            "method": "bdev_iscsi_set_options",
00:22:11.856            "params": {
00:22:11.856              "timeout_sec": 30
00:22:11.856            }
00:22:11.856          },
00:22:11.856          {
00:22:11.856            "method": "bdev_nvme_set_options",
00:22:11.856            "params": {
00:22:11.856              "action_on_timeout": "none",
00:22:11.856              "allow_accel_sequence": false,
00:22:11.856              "arbitration_burst": 0,
00:22:11.856              "bdev_retry_count": 3,
00:22:11.856              "ctrlr_loss_timeout_sec": 0,
00:22:11.856              "delay_cmd_submit": true,
00:22:11.856              "dhchap_dhgroups": [
00:22:11.856                "null",
00:22:11.856                "ffdhe2048",
00:22:11.856                "ffdhe3072",
00:22:11.856                "ffdhe4096",
00:22:11.856                "ffdhe6144",
00:22:11.856                "ffdhe8192"
00:22:11.856              ],
00:22:11.856              "dhchap_digests": [
00:22:11.856                "sha256",
00:22:11.856                "sha384",
00:22:11.856                "sha512"
00:22:11.856              ],
00:22:11.856              "disable_auto_failback": false,
00:22:11.856              "fast_io_fail_timeout_sec": 0,
00:22:11.856              "generate_uuids": false,
00:22:11.856              "high_priority_weight": 0,
00:22:11.856              "io_path_stat": false,
00:22:11.856              "io_queue_requests": 0,
00:22:11.856              "keep_alive_timeout_ms": 10000,
00:22:11.856              "low_priority_weight": 0,
00:22:11.856              "medium_priority_weight": 0,
00:22:11.856              "nvme_adminq_poll_period_us": 10000,
00:22:11.856              "nvme_error_stat": false,
00:22:11.856              "nvme_ioq_poll_period_us": 0,
00:22:11.856              "rdma_cm_event_timeout_ms": 0,
00:22:11.856              "rdma_max_cq_size": 0,
00:22:11.856              "rdma_srq_size": 0,
00:22:11.856              "rdma_umr_per_io": false,
00:22:11.856              "reconnect_delay_sec": 0,
00:22:11.856              "timeout_admin_us": 0,
00:22:11.856              "timeout_us": 0,
00:22:11.856              "transport_ack_timeout": 0,
00:22:11.856              "transport_retry_count": 4,
00:22:11.856              "transport_tos": 0
00:22:11.856            }
00:22:11.856          },
00:22:11.856          {
00:22:11.856            "method": "bdev_nvme_set_hotplug",
00:22:11.856            "params": {
00:22:11.856              "enable": false,
00:22:11.856              "period_us": 100000
00:22:11.856            }
00:22:11.856          },
00:22:11.856          {
00:22:11.856            "method": "bdev_malloc_create",
00:22:11.856            "params": {
00:22:11.856              "block_size": 4096,
00:22:11.856              "dif_is_head_of_md": false,
00:22:11.856              "dif_pi_format": 0,
00:22:11.856              "dif_type": 0,
00:22:11.856              "md_size": 0,
00:22:11.856              "name": "malloc0",
00:22:11.856              "num_blocks": 8192,
00:22:11.856              "optimal_io_boundary": 0,
00:22:11.856              "physical_block_size": 4096,
00:22:11.856              "uuid": "0d29430e-9993-4a4d-9250-2f440b6fff04"
00:22:11.856            }
00:22:11.856          },
00:22:11.856          {
00:22:11.856            "method": "bdev_wait_for_examine"
00:22:11.856          }
00:22:11.856        ]
00:22:11.856      },
00:22:11.856      {
00:22:11.856        "subsystem": "nbd",
00:22:11.856        "config": []
00:22:11.856      },
00:22:11.856      {
00:22:11.856        "subsystem": "scheduler",
00:22:11.856        "config": [
00:22:11.856          {
00:22:11.856            "method": "framework_set_scheduler",
00:22:11.856            "params": {
00:22:11.856              "name": "static"
00:22:11.856            }
00:22:11.856          }
00:22:11.856        ]
00:22:11.856      },
00:22:11.856      {
00:22:11.856        "subsystem": "nvmf",
00:22:11.856        "config": [
00:22:11.856          {
00:22:11.856            "method": "nvmf_set_config",
00:22:11.856            "params": {
00:22:11.856              "admin_cmd_passthru": {
00:22:11.856                "identify_ctrlr": false
00:22:11.856              },
00:22:11.856              "dhchap_dhgroups": [
00:22:11.856                "null",
00:22:11.856                "ffdhe2048",
00:22:11.856                "ffdhe3072",
00:22:11.856                "ffdhe4096",
00:22:11.856                "ffdhe6144",
00:22:11.856                "ffdhe8192"
00:22:11.856              ],
00:22:11.856              "dhchap_digests": [
00:22:11.856                "sha256",
00:22:11.856                "sha384",
00:22:11.856                "sha512"
00:22:11.856              ],
00:22:11.856              "discovery_filter": "match_any"
00:22:11.856            }
00:22:11.856          },
00:22:11.856          {
00:22:11.856            "method": "nvmf_set_max_subsystems",
00:22:11.856            "params": {
00:22:11.856              "max_subsystems": 1024
00:22:11.856            }
00:22:11.856          },
00:22:11.856          {
00:22:11.856            "method": "nvmf_set_crdt",
00:22:11.856            "params": {
00:22:11.856              "crdt1": 0,
00:22:11.856              "crdt2": 0,
00:22:11.856              "crdt3": 0
00:22:11.856            }
00:22:11.856          },
00:22:11.856          {
00:22:11.856            "method": "nvmf_create_transport",
00:22:11.856            "params": {
00:22:11.856              "abort_timeout_sec": 1,
00:22:11.856              "ack_timeout": 0,
00:22:11.856              "buf_cache_size": 4294967295,
00:22:11.856              "c2h_success": false,
00:22:11.856              "data_wr_pool_size": 0,
00:22:11.856              "dif_insert_or_strip": false,
00:22:11.856              "in_capsule_data_size": 4096,
00:22:11.856              "io_unit_size": 131072,
00:22:11.856              "max_aq_depth": 128,
00:22:11.856              "max_io_qpairs_per_ctrlr": 127,
00:22:11.856              "max_io_size": 131072,
00:22:11.856              "max_queue_depth": 128,
00:22:11.856              "num_shared_buffers": 511,
00:22:11.856              "sock_priority": 0,
00:22:11.856              "trtype": "TCP",
00:22:11.856              "zcopy": false
00:22:11.856            }
00:22:11.856          },
00:22:11.856          {
00:22:11.856            "method": "nvmf_create_subsystem",
00:22:11.856            "params": {
00:22:11.856              "allow_any_host": false,
00:22:11.856              "ana_reporting": false,
00:22:11.856              "max_cntlid": 65519,
00:22:11.856              "max_namespaces": 10,
00:22:11.856              "min_cntlid": 1,
00:22:11.856              "model_number": "SPDK bdev Controller",
00:22:11.856              "nqn": "nqn.2016-06.io.spdk:cnode1",
00:22:11.856              "serial_number": "SPDK00000000000001"
00:22:11.856            }
00:22:11.856          },
00:22:11.856          {
00:22:11.856            "method": "nvmf_subsystem_add_host",
00:22:11.856            "params": {
00:22:11.856              "host": "nqn.2016-06.io.spdk:host1",
00:22:11.856              "nqn": "nqn.2016-06.io.spdk:cnode1",
00:22:11.856              "psk": "key0"
00:22:11.856            }
00:22:11.856          },
00:22:11.856          {
00:22:11.856            "method": "nvmf_subsystem_add_ns",
00:22:11.856            "params": {
00:22:11.856              "namespace": {
00:22:11.856                "bdev_name": "malloc0",
00:22:11.856                "nguid": "0D29430E99934A4D92502F440B6FFF04",
00:22:11.856                "no_auto_visible": false,
00:22:11.856                "nsid": 1,
00:22:11.856                "uuid": "0d29430e-9993-4a4d-9250-2f440b6fff04"
00:22:11.856              },
00:22:11.856              "nqn": "nqn.2016-06.io.spdk:cnode1"
00:22:11.856            }
00:22:11.856          },
00:22:11.856          {
00:22:11.856            "method": "nvmf_subsystem_add_listener",
00:22:11.856            "params": {
00:22:11.856              "listen_address": {
00:22:11.856                "adrfam": "IPv4",
00:22:11.856                "traddr": "10.0.0.3",
00:22:11.856                "trsvcid": "4420",
00:22:11.856                "trtype": "TCP"
00:22:11.856              },
00:22:11.856              "nqn": "nqn.2016-06.io.spdk:cnode1",
00:22:11.856              "secure_channel": true
00:22:11.857            }
00:22:11.857          }
00:22:11.857        ]
00:22:11.857      }
00:22:11.857    ]
00:22:11.857  }'
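(Annotation: the tgtconf blob above is the target-side snapshot from save_config: the keyring entry for key0, the ssl/posix sock options, the malloc0 bdev, and the nvmf subsystem with its TLS listener (secure_channel: true) and the host bound to psk key0. Further down, the test replays it by starting a fresh nvmf target with -c /dev/fd/62; a rough sketch of that mechanism, assuming bash process substitution is what produces the /dev/fd path:

    nvmfappstart -m 0x2 -c <(echo "$tgtconf")
)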
00:22:11.857    19:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config
00:22:12.117   19:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{
00:22:12.117    "subsystems": [
00:22:12.117      {
00:22:12.117        "subsystem": "keyring",
00:22:12.117        "config": [
00:22:12.117          {
00:22:12.117            "method": "keyring_file_add_key",
00:22:12.117            "params": {
00:22:12.117              "name": "key0",
00:22:12.117              "path": "/tmp/tmp.JAqPPeMAgY"
00:22:12.117            }
00:22:12.117          }
00:22:12.117        ]
00:22:12.117      },
00:22:12.117      {
00:22:12.117        "subsystem": "iobuf",
00:22:12.117        "config": [
00:22:12.117          {
00:22:12.117            "method": "iobuf_set_options",
00:22:12.117            "params": {
00:22:12.117              "enable_numa": false,
00:22:12.117              "large_bufsize": 135168,
00:22:12.117              "large_pool_count": 1024,
00:22:12.117              "small_bufsize": 8192,
00:22:12.117              "small_pool_count": 8192
00:22:12.117            }
00:22:12.117          }
00:22:12.117        ]
00:22:12.117      },
00:22:12.117      {
00:22:12.117        "subsystem": "sock",
00:22:12.117        "config": [
00:22:12.117          {
00:22:12.117            "method": "sock_set_default_impl",
00:22:12.117            "params": {
00:22:12.117              "impl_name": "posix"
00:22:12.117            }
00:22:12.117          },
00:22:12.117          {
00:22:12.117            "method": "sock_impl_set_options",
00:22:12.117            "params": {
00:22:12.117              "enable_ktls": false,
00:22:12.117              "enable_placement_id": 0,
00:22:12.117              "enable_quickack": false,
00:22:12.117              "enable_recv_pipe": true,
00:22:12.117              "enable_zerocopy_send_client": false,
00:22:12.117              "enable_zerocopy_send_server": true,
00:22:12.117              "impl_name": "ssl",
00:22:12.117              "recv_buf_size": 4096,
00:22:12.117              "send_buf_size": 4096,
00:22:12.117              "tls_version": 0,
00:22:12.117              "zerocopy_threshold": 0
00:22:12.117            }
00:22:12.117          },
00:22:12.117          {
00:22:12.117            "method": "sock_impl_set_options",
00:22:12.117            "params": {
00:22:12.117              "enable_ktls": false,
00:22:12.117              "enable_placement_id": 0,
00:22:12.117              "enable_quickack": false,
00:22:12.117              "enable_recv_pipe": true,
00:22:12.117              "enable_zerocopy_send_client": false,
00:22:12.117              "enable_zerocopy_send_server": true,
00:22:12.117              "impl_name": "posix",
00:22:12.117              "recv_buf_size": 2097152,
00:22:12.117              "send_buf_size": 2097152,
00:22:12.117              "tls_version": 0,
00:22:12.117              "zerocopy_threshold": 0
00:22:12.117            }
00:22:12.117          }
00:22:12.117        ]
00:22:12.117      },
00:22:12.117      {
00:22:12.117        "subsystem": "vmd",
00:22:12.117        "config": []
00:22:12.117      },
00:22:12.117      {
00:22:12.117        "subsystem": "accel",
00:22:12.117        "config": [
00:22:12.117          {
00:22:12.117            "method": "accel_set_options",
00:22:12.117            "params": {
00:22:12.117              "buf_count": 2048,
00:22:12.117              "large_cache_size": 16,
00:22:12.117              "sequence_count": 2048,
00:22:12.117              "small_cache_size": 128,
00:22:12.117              "task_count": 2048
00:22:12.117            }
00:22:12.117          }
00:22:12.117        ]
00:22:12.117      },
00:22:12.117      {
00:22:12.117        "subsystem": "bdev",
00:22:12.117        "config": [
00:22:12.117          {
00:22:12.117            "method": "bdev_set_options",
00:22:12.117            "params": {
00:22:12.117              "bdev_auto_examine": true,
00:22:12.117              "bdev_io_cache_size": 256,
00:22:12.117              "bdev_io_pool_size": 65535,
00:22:12.117              "iobuf_large_cache_size": 16,
00:22:12.117              "iobuf_small_cache_size": 128
00:22:12.117            }
00:22:12.117          },
00:22:12.117          {
00:22:12.117            "method": "bdev_raid_set_options",
00:22:12.117            "params": {
00:22:12.117              "process_max_bandwidth_mb_sec": 0,
00:22:12.117              "process_window_size_kb": 1024
00:22:12.117            }
00:22:12.117          },
00:22:12.117          {
00:22:12.117            "method": "bdev_iscsi_set_options",
00:22:12.117            "params": {
00:22:12.117              "timeout_sec": 30
00:22:12.117            }
00:22:12.117          },
00:22:12.117          {
00:22:12.117            "method": "bdev_nvme_set_options",
00:22:12.117            "params": {
00:22:12.117              "action_on_timeout": "none",
00:22:12.117              "allow_accel_sequence": false,
00:22:12.117              "arbitration_burst": 0,
00:22:12.117              "bdev_retry_count": 3,
00:22:12.117              "ctrlr_loss_timeout_sec": 0,
00:22:12.117              "delay_cmd_submit": true,
00:22:12.117              "dhchap_dhgroups": [
00:22:12.117                "null",
00:22:12.117                "ffdhe2048",
00:22:12.117                "ffdhe3072",
00:22:12.117                "ffdhe4096",
00:22:12.117                "ffdhe6144",
00:22:12.117                "ffdhe8192"
00:22:12.117              ],
00:22:12.117              "dhchap_digests": [
00:22:12.117                "sha256",
00:22:12.117                "sha384",
00:22:12.117                "sha512"
00:22:12.117              ],
00:22:12.117              "disable_auto_failback": false,
00:22:12.117              "fast_io_fail_timeout_sec": 0,
00:22:12.117              "generate_uuids": false,
00:22:12.117              "high_priority_weight": 0,
00:22:12.117              "io_path_stat": false,
00:22:12.117              "io_queue_requests": 512,
00:22:12.117              "keep_alive_timeout_ms": 10000,
00:22:12.117              "low_priority_weight": 0,
00:22:12.117              "medium_priority_weight": 0,
00:22:12.117              "nvme_adminq_poll_period_us": 10000,
00:22:12.117              "nvme_error_stat": false,
00:22:12.117              "nvme_ioq_poll_period_us": 0,
00:22:12.117              "rdma_cm_event_timeout_ms": 0,
00:22:12.117              "rdma_max_cq_size": 0,
00:22:12.117              "rdma_srq_size": 0,
00:22:12.117              "rdma_umr_per_io": false,
00:22:12.117              "reconnect_delay_sec": 0,
00:22:12.117              "timeout_admin_us": 0,
00:22:12.117              "timeout_us": 0,
00:22:12.117              "transport_ack_timeout": 0,
00:22:12.117              "transport_retry_count": 4,
00:22:12.117              "transport_tos": 0
00:22:12.117            }
00:22:12.117          },
00:22:12.117          {
00:22:12.117            "method": "bdev_nvme_attach_controller",
00:22:12.117            "params": {
00:22:12.117              "adrfam": "IPv4",
00:22:12.117              "ctrlr_loss_timeout_sec": 0,
00:22:12.117              "ddgst": false,
00:22:12.117              "fast_io_fail_timeout_sec": 0,
00:22:12.117              "hdgst": false,
00:22:12.117              "hostnqn": "nqn.2016-06.io.spdk:host1",
00:22:12.117              "multipath": "multipath",
00:22:12.117              "name": "TLSTEST",
00:22:12.117              "prchk_guard": false,
00:22:12.117              "prchk_reftag": false,
00:22:12.117              "psk": "key0",
00:22:12.117              "reconnect_delay_sec": 0,
00:22:12.117              "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:22:12.117              "traddr": "10.0.0.3",
00:22:12.117              "trsvcid": "4420",
00:22:12.117              "trtype": "TCP"
00:22:12.118            }
00:22:12.118          },
00:22:12.118          {
00:22:12.118            "method": "bdev_nvme_set_hotplug",
00:22:12.118            "params": {
00:22:12.118              "enable": false,
00:22:12.118              "period_us": 100000
00:22:12.118            }
00:22:12.118          },
00:22:12.118          {
00:22:12.118            "method": "bdev_wait_for_examine"
00:22:12.118          }
00:22:12.118        ]
00:22:12.118      },
00:22:12.118      {
00:22:12.118        "subsystem": "nbd",
00:22:12.118        "config": []
00:22:12.118      }
00:22:12.118    ]
00:22:12.118  }'
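(Annotation: the bdevperfconf blob is the initiator-side counterpart: no nvmf/scheduler subsystems, but the same keyring entry plus a recorded bdev_nvme_attach_controller with psk key0, hostnqn host1 and traddr 10.0.0.3:4420, i.e. everything needed to recreate TLSTESTn1. A hypothetical replay, which this run does not perform, would feed it back to bdevperf as a JSON config instead of configuring over the RPC socket, roughly:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -q 128 -o 4096 -w verify -t 10 --json <(echo "$bdevperfconf")

--json here is the generic SPDK app option for a JSON config file; treat the exact invocation as illustrative.)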
00:22:12.118   19:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 102753
00:22:12.118   19:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 102753 ']'
00:22:12.118   19:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 102753
00:22:12.118    19:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:22:12.118   19:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:12.118    19:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 102753
00:22:12.118   19:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:22:12.118   19:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:22:12.118  killing process with pid 102753
00:22:12.118   19:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102753'
00:22:12.118   19:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 102753
00:22:12.118  Received shutdown signal, test time was about 10.000000 seconds
00:22:12.118                                                                                                  Latency(us)
[2024-12-13T19:06:43.942Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
[2024-12-13T19:06:43.942Z]  ===================================================================================================================
[2024-12-13T19:06:43.942Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00 18446744073709551616.00       0.00
00:22:12.118   19:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 102753
00:22:12.377   19:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 102658
00:22:12.377   19:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 102658 ']'
00:22:12.377   19:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 102658
00:22:12.377    19:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:22:12.377   19:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:12.377    19:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 102658
00:22:12.377   19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:22:12.377   19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:22:12.377  killing process with pid 102658
00:22:12.377   19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102658'
00:22:12.377   19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 102658
00:22:12.377   19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 102658
00:22:12.377   19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62
00:22:12.377    19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{
00:22:12.377    "subsystems": [
00:22:12.377      {
00:22:12.377        "subsystem": "keyring",
00:22:12.377        "config": [
00:22:12.377          {
00:22:12.377            "method": "keyring_file_add_key",
00:22:12.377            "params": {
00:22:12.377              "name": "key0",
00:22:12.377              "path": "/tmp/tmp.JAqPPeMAgY"
00:22:12.377            }
00:22:12.377          }
00:22:12.377        ]
00:22:12.377      },
00:22:12.377      {
00:22:12.377        "subsystem": "iobuf",
00:22:12.377        "config": [
00:22:12.377          {
00:22:12.377            "method": "iobuf_set_options",
00:22:12.377            "params": {
00:22:12.377              "enable_numa": false,
00:22:12.377              "large_bufsize": 135168,
00:22:12.377              "large_pool_count": 1024,
00:22:12.377              "small_bufsize": 8192,
00:22:12.377              "small_pool_count": 8192
00:22:12.377            }
00:22:12.377          }
00:22:12.377        ]
00:22:12.377      },
00:22:12.377      {
00:22:12.377        "subsystem": "sock",
00:22:12.377        "config": [
00:22:12.377          {
00:22:12.377            "method": "sock_set_default_impl",
00:22:12.377            "params": {
00:22:12.377              "impl_name": "posix"
00:22:12.377            }
00:22:12.377          },
00:22:12.378          {
00:22:12.378            "method": "sock_impl_set_options",
00:22:12.378            "params": {
00:22:12.378              "enable_ktls": false,
00:22:12.378              "enable_placement_id": 0,
00:22:12.378              "enable_quickack": false,
00:22:12.378              "enable_recv_pipe": true,
00:22:12.378              "enable_zerocopy_send_client": false,
00:22:12.378              "enable_zerocopy_send_server": true,
00:22:12.378              "impl_name": "ssl",
00:22:12.378              "recv_buf_size": 4096,
00:22:12.378              "send_buf_size": 4096,
00:22:12.378              "tls_version": 0,
00:22:12.378              "zerocopy_threshold": 0
00:22:12.378            }
00:22:12.378          },
00:22:12.378          {
00:22:12.378            "method": "sock_impl_set_options",
00:22:12.378            "params": {
00:22:12.378              "enable_ktls": false,
00:22:12.378              "enable_placement_id": 0,
00:22:12.378              "enable_quickack": false,
00:22:12.378              "enable_recv_pipe": true,
00:22:12.378              "enable_zerocopy_send_client": false,
00:22:12.378              "enable_zerocopy_send_server": true,
00:22:12.378              "impl_name": "posix",
00:22:12.378              "recv_buf_size": 2097152,
00:22:12.378              "send_buf_size": 2097152,
00:22:12.378              "tls_version": 0,
00:22:12.378              "zerocopy_threshold": 0
00:22:12.378            }
00:22:12.378          }
00:22:12.378        ]
00:22:12.378      },
00:22:12.378      {
00:22:12.378        "subsystem": "vmd",
00:22:12.378        "config": []
00:22:12.378      },
00:22:12.378      {
00:22:12.378        "subsystem": "accel",
00:22:12.378        "config": [
00:22:12.378          {
00:22:12.378            "method": "accel_set_options",
00:22:12.378            "params": {
00:22:12.378              "buf_count": 2048,
00:22:12.378              "large_cache_size": 16,
00:22:12.378              "sequence_count": 2048,
00:22:12.378              "small_cache_size": 128,
00:22:12.378              "task_count": 2048
00:22:12.378            }
00:22:12.378          }
00:22:12.378        ]
00:22:12.378      },
00:22:12.378      {
00:22:12.378        "subsystem": "bdev",
00:22:12.378        "config": [
00:22:12.378          {
00:22:12.378            "method": "bdev_set_options",
00:22:12.378            "params": {
00:22:12.378              "bdev_auto_examine": true,
00:22:12.378              "bdev_io_cache_size": 256,
00:22:12.378              "bdev_io_pool_size": 65535,
00:22:12.378              "iobuf_large_cache_size": 16,
00:22:12.378              "iobuf_small_cache_size": 128
00:22:12.378            }
00:22:12.378          },
00:22:12.378          {
00:22:12.378            "method": "bdev_raid_set_options",
00:22:12.378            "params": {
00:22:12.378              "process_max_bandwidth_mb_sec": 0,
00:22:12.378              "process_window_size_kb": 1024
00:22:12.378            }
00:22:12.378          },
00:22:12.378          {
00:22:12.378            "method": "bdev_iscsi_set_options",
00:22:12.378            "params": {
00:22:12.378              "timeout_sec": 30
00:22:12.378            }
00:22:12.378          },
00:22:12.378          {
00:22:12.378            "method": "bdev_nvme_set_options",
00:22:12.378            "params": {
00:22:12.378              "action_on_timeout": "none",
00:22:12.378              "allow_accel_sequence": false,
00:22:12.378              "arbitration_burst": 0,
00:22:12.378              "bdev_retry_count": 3,
00:22:12.378              "ctrlr_loss_timeout_sec": 0,
00:22:12.378              "delay_cmd_submit": true,
00:22:12.378              "dhchap_dhgroups": [
00:22:12.378                "null",
00:22:12.378                "ffdhe2048",
00:22:12.378                "ffdhe3072",
00:22:12.378                "ffdhe4096",
00:22:12.378                "ffdhe6144",
00:22:12.378                "ffdhe8192"
00:22:12.378              ],
00:22:12.378              "dhchap_digests": [
00:22:12.378                "sha256",
00:22:12.378                "sha384",
00:22:12.378                "sha512"
00:22:12.378              ],
00:22:12.378              "disable_auto_failback": false,
00:22:12.378              "fast_io_fail_timeout_sec": 0,
00:22:12.378              "generate_uuids": false,
00:22:12.378              "high_priority_weight": 0,
00:22:12.378              "io_path_stat": false,
00:22:12.378              "io_queue_requests": 0,
00:22:12.378              "keep_alive_timeout_ms": 10000,
00:22:12.378              "low_priority_weight": 0,
00:22:12.378              "medium_priority_weight": 0,
00:22:12.378              "nvme_adminq_poll_period_us": 10000,
00:22:12.378              "nvme_error_stat": false,
00:22:12.378              "nvme_ioq_poll_period_us": 0,
00:22:12.378              "rdma_cm_event_timeout_ms": 0,
00:22:12.378              "rdma_max_cq_size": 0,
00:22:12.378              "rdma_srq_size": 0,
00:22:12.378              "rdma_umr_per_io": false,
00:22:12.378              "reconnect_delay_sec": 0,
00:22:12.378              "timeout_admin_us": 0,
00:22:12.378              "timeout_us": 0,
00:22:12.378              "transport_ack_timeout": 0,
00:22:12.378              "transport_retry_count": 4,
00:22:12.378              "transport_tos": 0
00:22:12.378            }
00:22:12.378          },
00:22:12.378          {
00:22:12.378            "method": "bdev_nvme_set_hotplug",
00:22:12.378            "params": {
00:22:12.378              "enable": false,
00:22:12.378              "period_us": 100000
00:22:12.378            }
00:22:12.378          },
00:22:12.378          {
00:22:12.378            "method": "bdev_malloc_create",
00:22:12.378            "params": {
00:22:12.378              "block_size": 4096,
00:22:12.378              "dif_is_head_of_md": false,
00:22:12.378              "dif_pi_format": 0,
00:22:12.378              "dif_type": 0,
00:22:12.378              "md_size": 0,
00:22:12.378              "name": "malloc0",
00:22:12.378              "num_blocks": 8192,
00:22:12.378              "optimal_io_boundary": 0,
00:22:12.378              "physical_block_size": 4096,
00:22:12.378              "uuid": "0d29430e-9993-4a4d-9250-2f440b6fff04"
00:22:12.378            }
00:22:12.378          },
00:22:12.378          {
00:22:12.378            "method": "bdev_wait_for_examine"
00:22:12.378          }
00:22:12.378        ]
00:22:12.378      },
00:22:12.378      {
00:22:12.378        "subsystem": "nbd",
00:22:12.378        "config": []
00:22:12.378      },
00:22:12.378      {
00:22:12.378        "subsystem": "scheduler",
00:22:12.378        "config": [
00:22:12.378          {
00:22:12.378            "method": "framework_set_scheduler",
00:22:12.378            "params": {
00:22:12.378              "name": "static"
00:22:12.378            }
00:22:12.378          }
00:22:12.378        ]
00:22:12.378      },
00:22:12.378      {
00:22:12.378        "subsystem": "nvmf",
00:22:12.378        "config": [
00:22:12.378          {
00:22:12.378            "method": "nvmf_set_config",
00:22:12.378            "params": {
00:22:12.378              "admin_cmd_passthru": {
00:22:12.378                "identify_ctrlr": false
00:22:12.378              },
00:22:12.378              "dhchap_dhgroups": [
00:22:12.378                "null",
00:22:12.378                "ffdhe2048",
00:22:12.378                "ffdhe3072",
00:22:12.378                "ffdhe4096",
00:22:12.378                "ffdhe6144",
00:22:12.378                "ffdhe8192"
00:22:12.378              ],
00:22:12.378              "dhchap_digests": [
00:22:12.378                "sha256",
00:22:12.378                "sha384",
00:22:12.378                "sha512"
00:22:12.378              ],
00:22:12.378              "discovery_filter": "match_any"
00:22:12.378            }
00:22:12.378          },
00:22:12.378          {
00:22:12.378            "method": "nvmf_set_max_subsystems",
00:22:12.378            "params": {
00:22:12.378              "max_subsystems": 1024
00:22:12.378            }
00:22:12.378          },
00:22:12.378          {
00:22:12.378            "method": "nvmf_set_crdt",
00:22:12.379            "params": {
00:22:12.379              "crdt1": 0,
00:22:12.379              "crdt2": 0,
00:22:12.379              "crdt3": 0
00:22:12.379            }
00:22:12.379          },
00:22:12.379          {
00:22:12.379            "method": "nvmf_create_transport",
00:22:12.379            "params": {
00:22:12.379              "abort_timeout_sec": 1,
00:22:12.379              "ack_timeout": 0,
00:22:12.379              "buf_cache_size": 4294967295,
00:22:12.379              "c2h_success": false,
00:22:12.379              "data_wr_pool_size": 0,
00:22:12.379              "dif_insert_or_strip": false,
00:22:12.379              "in_capsule_data_size": 4096,
00:22:12.379              "io_unit_size": 131072,
00:22:12.379              "max_aq_depth": 128,
00:22:12.379              "max_io_qpairs_per_ctrlr": 127,
00:22:12.379              "max_io_size": 131072,
00:22:12.379              "max_queue_depth": 128,
00:22:12.379              "num_shared_buffers": 511,
00:22:12.379              "sock_priority": 0,
00:22:12.379              "trtype": "TCP",
00:22:12.379              "zcopy": false
00:22:12.379            }
00:22:12.379          },
00:22:12.379          {
00:22:12.379            "method": "nvmf_create_subsystem",
00:22:12.379            "params": {
00:22:12.379              "allow_any_host": false,
00:22:12.379              "ana_reporting": false,
00:22:12.379              "max_cntlid": 65519,
00:22:12.379              "max_namespaces": 10,
00:22:12.379              "min_cntlid": 1,
00:22:12.379              "model_number": "SPDK bdev Controller",
00:22:12.379              "nqn": "nqn.2016-06.io.spdk:cnode1",
00:22:12.379              "serial_number": "SPDK00000000000001"
00:22:12.379            }
00:22:12.379          },
00:22:12.379          {
00:22:12.379            "method": "nvmf_subsystem_add_host",
00:22:12.379            "params": {
00:22:12.379              "host": "nqn.2016-06.io.spdk:host1",
00:22:12.379              "nqn": "nqn.2016-06.io.spdk:cnode1",
00:22:12.379              "psk": "key0"
00:22:12.379            }
00:22:12.379          },
00:22:12.379          {
00:22:12.379            "method": "nvmf_subsystem_add_ns",
00:22:12.379            "params": {
00:22:12.379              "namespace": {
00:22:12.379                "bdev_name": "malloc0",
00:22:12.379                "nguid": "0D29430E99934A4D92502F440B6FFF04",
00:22:12.379                "no_auto_visible": false,
00:22:12.379                "nsid": 1,
00:22:12.379                "uuid": "0d29430e-9993-4a4d-9250-2f440b6fff04"
00:22:12.379              },
00:22:12.379              "nqn": "nqn.2016-06.io.spdk:cnode1"
00:22:12.379            }
00:22:12.379          },
00:22:12.379          {
00:22:12.379            "method": "nvmf_subsystem_add_listener",
00:22:12.379            "params": {
00:22:12.379              "listen_address": {
00:22:12.379                "adrfam": "IPv4",
00:22:12.379                "traddr": "10.0.0.3",
00:22:12.379                "trsvcid": "4420",
00:22:12.379                "trtype": "TCP"
00:22:12.379              },
00:22:12.379              "nqn": "nqn.2016-06.io.spdk:cnode1",
00:22:12.379              "secure_channel": true
00:22:12.379            }
00:22:12.379          }
00:22:12.379        ]
00:22:12.379      }
00:22:12.379    ]
00:22:12.379  }'
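Note for anyone replaying this step outside the harness: the JSON block above is not read from a file on disk — the script echoes it and hands it to nvmf_tgt on a file descriptor (the -c /dev/fd/62 argument in the invocation a few lines below). A minimal sketch of the same pattern, reusing the binary path and flags from this run; the use of process substitution here is an assumption, the harness wires up the descriptor itself, but the effect is the same:

  CONFIG='{ "subsystems": [ ... ] }'   # the JSON printed above
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 \
      -c <(printf '%s' "$CONFIG")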
00:22:12.379   19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:22:12.379   19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable
00:22:12.379   19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:22:12.379   19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=102821
00:22:12.379   19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62
00:22:12.379   19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 102821
00:22:12.379   19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 102821 ']'
00:22:12.379   19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:12.379   19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:12.379  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:22:12.379   19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:22:12.379   19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:22:12.379   19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:22:12.638  [2024-12-13 19:06:44.257911] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:22:12.638  [2024-12-13 19:06:44.258012] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:22:12.638  [2024-12-13 19:06:44.405994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:12.638  [2024-12-13 19:06:44.446620] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:22:12.638  [2024-12-13 19:06:44.446696] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:22:12.638  [2024-12-13 19:06:44.446724] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:22:12.638  [2024-12-13 19:06:44.446732] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:22:12.638  [2024-12-13 19:06:44.446739] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:22:12.638  [2024-12-13 19:06:44.447185] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:22:12.898  [2024-12-13 19:06:44.682363] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:22:12.898  [2024-12-13 19:06:44.714308] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:22:12.898  [2024-12-13 19:06:44.714557] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:22:13.468   19:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:22:13.468   19:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:22:13.468   19:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:22:13.468   19:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable
00:22:13.468   19:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:22:13.468   19:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:22:13.468   19:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=102865
00:22:13.468   19:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 102865 /var/tmp/bdevperf.sock
00:22:13.468   19:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 102865 ']'
00:22:13.468   19:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:22:13.468   19:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:13.469  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:22:13.469   19:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:22:13.469   19:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:22:13.469   19:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:22:13.469   19:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63
00:22:13.469    19:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{
00:22:13.469    "subsystems": [
00:22:13.469      {
00:22:13.469        "subsystem": "keyring",
00:22:13.469        "config": [
00:22:13.469          {
00:22:13.469            "method": "keyring_file_add_key",
00:22:13.469            "params": {
00:22:13.469              "name": "key0",
00:22:13.469              "path": "/tmp/tmp.JAqPPeMAgY"
00:22:13.469            }
00:22:13.469          }
00:22:13.469        ]
00:22:13.469      },
00:22:13.469      {
00:22:13.469        "subsystem": "iobuf",
00:22:13.469        "config": [
00:22:13.469          {
00:22:13.469            "method": "iobuf_set_options",
00:22:13.469            "params": {
00:22:13.469              "enable_numa": false,
00:22:13.469              "large_bufsize": 135168,
00:22:13.469              "large_pool_count": 1024,
00:22:13.469              "small_bufsize": 8192,
00:22:13.469              "small_pool_count": 8192
00:22:13.469            }
00:22:13.469          }
00:22:13.469        ]
00:22:13.469      },
00:22:13.469      {
00:22:13.469        "subsystem": "sock",
00:22:13.469        "config": [
00:22:13.469          {
00:22:13.469            "method": "sock_set_default_impl",
00:22:13.469            "params": {
00:22:13.469              "impl_name": "posix"
00:22:13.469            }
00:22:13.469          },
00:22:13.469          {
00:22:13.469            "method": "sock_impl_set_options",
00:22:13.469            "params": {
00:22:13.469              "enable_ktls": false,
00:22:13.469              "enable_placement_id": 0,
00:22:13.469              "enable_quickack": false,
00:22:13.469              "enable_recv_pipe": true,
00:22:13.469              "enable_zerocopy_send_client": false,
00:22:13.469              "enable_zerocopy_send_server": true,
00:22:13.469              "impl_name": "ssl",
00:22:13.469              "recv_buf_size": 4096,
00:22:13.469              "send_buf_size": 4096,
00:22:13.469              "tls_version": 0,
00:22:13.469              "zerocopy_threshold": 0
00:22:13.469            }
00:22:13.469          },
00:22:13.469          {
00:22:13.469            "method": "sock_impl_set_options",
00:22:13.469            "params": {
00:22:13.469              "enable_ktls": false,
00:22:13.469              "enable_placement_id": 0,
00:22:13.469              "enable_quickack": false,
00:22:13.469              "enable_recv_pipe": true,
00:22:13.469              "enable_zerocopy_send_client": false,
00:22:13.469              "enable_zerocopy_send_server": true,
00:22:13.469              "impl_name": "posix",
00:22:13.469              "recv_buf_size": 2097152,
00:22:13.469              "send_buf_size": 2097152,
00:22:13.469              "tls_version": 0,
00:22:13.469              "zerocopy_threshold": 0
00:22:13.469            }
00:22:13.469          }
00:22:13.469        ]
00:22:13.469      },
00:22:13.469      {
00:22:13.469        "subsystem": "vmd",
00:22:13.470        "config": []
00:22:13.470      },
00:22:13.470      {
00:22:13.470        "subsystem": "accel",
00:22:13.470        "config": [
00:22:13.470          {
00:22:13.470            "method": "accel_set_options",
00:22:13.470            "params": {
00:22:13.470              "buf_count": 2048,
00:22:13.470              "large_cache_size": 16,
00:22:13.470              "sequence_count": 2048,
00:22:13.470              "small_cache_size": 128,
00:22:13.470              "task_count": 2048
00:22:13.470            }
00:22:13.470          }
00:22:13.470        ]
00:22:13.470      },
00:22:13.470      {
00:22:13.470        "subsystem": "bdev",
00:22:13.470        "config": [
00:22:13.470          {
00:22:13.470            "method": "bdev_set_options",
00:22:13.470            "params": {
00:22:13.470              "bdev_auto_examine": true,
00:22:13.470              "bdev_io_cache_size": 256,
00:22:13.470              "bdev_io_pool_size": 65535,
00:22:13.470              "iobuf_large_cache_size": 16,
00:22:13.470              "iobuf_small_cache_size": 128
00:22:13.470            }
00:22:13.470          },
00:22:13.470          {
00:22:13.470            "method": "bdev_raid_set_options",
00:22:13.470            "params": {
00:22:13.470              "process_max_bandwidth_mb_sec": 0,
00:22:13.470              "process_window_size_kb": 1024
00:22:13.470            }
00:22:13.470          },
00:22:13.470          {
00:22:13.470            "method": "bdev_iscsi_set_options",
00:22:13.470            "params": {
00:22:13.470              "timeout_sec": 30
00:22:13.470            }
00:22:13.470          },
00:22:13.470          {
00:22:13.470            "method": "bdev_nvme_set_options",
00:22:13.470            "params": {
00:22:13.470              "action_on_timeout": "none",
00:22:13.470              "allow_accel_sequence": false,
00:22:13.470              "arbitration_burst": 0,
00:22:13.470              "bdev_retry_count": 3,
00:22:13.470              "ctrlr_loss_timeout_sec": 0,
00:22:13.470              "delay_cmd_submit": true,
00:22:13.470              "dhchap_dhgroups": [
00:22:13.470                "null",
00:22:13.470                "ffdhe2048",
00:22:13.470                "ffdhe3072",
00:22:13.470                "ffdhe4096",
00:22:13.470                "ffdhe6144",
00:22:13.470                "ffdhe8192"
00:22:13.470              ],
00:22:13.470              "dhchap_digests": [
00:22:13.470                "sha256",
00:22:13.470                "sha384",
00:22:13.470                "sha512"
00:22:13.470              ],
00:22:13.470              "disable_auto_failback": false,
00:22:13.470              "fast_io_fail_timeout_sec": 0,
00:22:13.470              "generate_uuids": false,
00:22:13.470              "high_priority_weight": 0,
00:22:13.470              "io_path_stat": false,
00:22:13.470              "io_queue_requests": 512,
00:22:13.470              "keep_alive_timeout_ms": 10000,
00:22:13.470              "low_priority_weight": 0,
00:22:13.470              "medium_priority_weight": 0,
00:22:13.470              "nvme_adminq_poll_period_us": 10000,
00:22:13.470              "nvme_error_stat": false,
00:22:13.470              "nvme_ioq_poll_period_us": 0,
00:22:13.470              "rdma_cm_event_timeout_ms": 0,
00:22:13.471              "rdma_max_cq_size": 0,
00:22:13.471              "rdma_srq_size": 0,
00:22:13.471              "rdma_umr_per_io": false,
00:22:13.471              "reconnect_delay_sec": 0,
00:22:13.471              "timeout_admin_us": 0,
00:22:13.471              "timeout_us": 0,
00:22:13.471              "transport_ack_timeout": 0,
00:22:13.471              "transport_retry_count": 4,
00:22:13.471              "transport_tos": 0
00:22:13.471            }
00:22:13.471          },
00:22:13.471          {
00:22:13.471            "method": "bdev_nvme_attach_controller",
00:22:13.471            "params": {
00:22:13.471              "adrfam": "IPv4",
00:22:13.471              "ctrlr_loss_timeout_sec": 0,
00:22:13.471              "ddgst": false,
00:22:13.471              "fast_io_fail_timeout_sec": 0,
00:22:13.471              "hdgst": false,
00:22:13.471              "hostnqn": "nqn.2016-06.io.spdk:host1",
00:22:13.471              "multipath": "multipath",
00:22:13.471              "name": "TLSTEST",
00:22:13.471              "prchk_guard": false,
00:22:13.471              "prchk_reftag": false,
00:22:13.471              "psk": "key0",
00:22:13.471              "reconnect_delay_sec": 0,
00:22:13.471              "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:22:13.471              "traddr": "10.0.0.3",
00:22:13.471              "trsvcid": "4420",
00:22:13.471              "trtype": "TCP"
00:22:13.471            }
00:22:13.471          },
00:22:13.471          {
00:22:13.471            "method": "bdev_nvme_set_hotplug",
00:22:13.471            "params": {
00:22:13.471              "enable": false,
00:22:13.471              "period_us": 100000
00:22:13.471            }
00:22:13.471          },
00:22:13.471          {
00:22:13.471            "method": "bdev_wait_for_examine"
00:22:13.471          }
00:22:13.471        ]
00:22:13.471      },
00:22:13.471      {
00:22:13.471        "subsystem": "nbd",
00:22:13.471        "config": []
00:22:13.471      }
00:22:13.471    ]
00:22:13.471  }'
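On the initiator side, the bdevperf config above carries the TLS pieces in two places: the keyring subsystem loads key0 from /tmp/tmp.JAqPPeMAgY, and bdev_nvme_attach_controller references it through "psk": "key0" while the sock section selects the ssl implementation. The same attach can also be issued against an already-running bdevperf over its RPC socket; this is the standalone form used later in this run, with every argument copied from that invocation:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1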
00:22:13.733  [2024-12-13 19:06:45.339045] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:22:13.733  [2024-12-13 19:06:45.339180] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102865 ]
00:22:13.733  [2024-12-13 19:06:45.492736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:13.733  [2024-12-13 19:06:45.531087] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:22:13.992  [2024-12-13 19:06:45.710393] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:22:14.560   19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:22:14.560   19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:22:14.560   19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
00:22:14.819  Running I/O for 10 seconds...
00:22:16.693       4798.00 IOPS,    18.74 MiB/s
[2024-12-13T19:06:49.454Z]      4856.50 IOPS,    18.97 MiB/s
[2024-12-13T19:06:50.830Z]      4870.00 IOPS,    19.02 MiB/s
[2024-12-13T19:06:51.767Z]      4883.75 IOPS,    19.08 MiB/s
[2024-12-13T19:06:52.703Z]      4902.00 IOPS,    19.15 MiB/s
[2024-12-13T19:06:53.639Z]      4904.33 IOPS,    19.16 MiB/s
[2024-12-13T19:06:54.575Z]      4907.43 IOPS,    19.17 MiB/s
[2024-12-13T19:06:55.510Z]      4910.38 IOPS,    19.18 MiB/s
[2024-12-13T19:06:56.885Z]      4912.44 IOPS,    19.19 MiB/s
[2024-12-13T19:06:56.885Z]      4916.20 IOPS,    19.20 MiB/s
00:22:25.061                                                                                                  Latency(us)
00:22:25.061  
[2024-12-13T19:06:56.885Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:22:25.061  Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:22:25.061  	 Verification LBA range: start 0x0 length 0x2000
00:22:25.061  	 TLSTESTn1           :      10.01    4921.78      19.23       0.00     0.00   25961.12    5213.09   21090.68
00:22:25.061  
[2024-12-13T19:06:56.885Z]  ===================================================================================================================
00:22:25.061  
[2024-12-13T19:06:56.885Z]  Total                       :               4921.78      19.23       0.00     0.00   25961.12    5213.09   21090.68
00:22:25.061  {
00:22:25.061    "results": [
00:22:25.061      {
00:22:25.061        "job": "TLSTESTn1",
00:22:25.061        "core_mask": "0x4",
00:22:25.061        "workload": "verify",
00:22:25.061        "status": "finished",
00:22:25.061        "verify_range": {
00:22:25.061          "start": 0,
00:22:25.061          "length": 8192
00:22:25.061        },
00:22:25.061        "queue_depth": 128,
00:22:25.061        "io_size": 4096,
00:22:25.061        "runtime": 10.014257,
00:22:25.061        "iops": 4921.7830139570015,
00:22:25.061        "mibps": 19.225714898269537,
00:22:25.061        "io_failed": 0,
00:22:25.061        "io_timeout": 0,
00:22:25.061        "avg_latency_us": 25961.121107258266,
00:22:25.061        "min_latency_us": 5213.090909090909,
00:22:25.061        "max_latency_us": 21090.676363636365
00:22:25.061      }
00:22:25.061    ],
00:22:25.061    "core_count": 1
00:22:25.061  }
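As a quick sanity check on the table and JSON above, the MiB/s column is simply the IOPS figure scaled by the 4096-byte I/O size: 4921.78 IOPS x 4096 B ≈ 20.16 MB/s, and 4921.78 x 4096 / 1048576 ≈ 19.23 MiB/s, which matches the reported mibps of 19.2257.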
00:22:25.061   19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:22:25.061   19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 102865
00:22:25.061   19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 102865 ']'
00:22:25.061   19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 102865
00:22:25.061    19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:22:25.061   19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:25.061    19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 102865
00:22:25.061   19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:22:25.061   19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:22:25.061  killing process with pid 102865
00:22:25.061   19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102865'
00:22:25.061   19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 102865
00:22:25.061  Received shutdown signal, test time was about 10.000000 seconds
00:22:25.061  
00:22:25.061                                                                                                  Latency(us)
00:22:25.061  
[2024-12-13T19:06:56.885Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:22:25.061  
[2024-12-13T19:06:56.885Z]  ===================================================================================================================
00:22:25.061  
[2024-12-13T19:06:56.885Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:22:25.061   19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 102865
00:22:25.061   19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 102821
00:22:25.061   19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 102821 ']'
00:22:25.061   19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 102821
00:22:25.061    19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:22:25.061   19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:25.061    19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 102821
00:22:25.061   19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:22:25.061   19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:22:25.061  killing process with pid 102821
00:22:25.061   19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102821'
00:22:25.061   19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 102821
00:22:25.061   19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 102821
00:22:25.319   19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart
00:22:25.319   19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:22:25.320   19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable
00:22:25.320   19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:22:25.320   19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=103018
00:22:25.320   19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:22:25.320   19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 103018
00:22:25.320   19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 103018 ']'
00:22:25.320   19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:25.320   19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:25.320  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:22:25.320   19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:22:25.320   19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:22:25.320   19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:22:25.320  [2024-12-13 19:06:56.988423] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:22:25.320  [2024-12-13 19:06:56.988541] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:22:25.578  [2024-12-13 19:06:57.142977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:25.578  [2024-12-13 19:06:57.178329] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:22:25.578  [2024-12-13 19:06:57.178409] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:22:25.578  [2024-12-13 19:06:57.178423] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:22:25.578  [2024-12-13 19:06:57.178434] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:22:25.578  [2024-12-13 19:06:57.178443] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:22:25.578  [2024-12-13 19:06:57.178882] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:22:26.145   19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:22:26.145   19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:22:26.145   19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:22:26.145   19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable
00:22:26.145   19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:22:26.403   19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:22:26.404   19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.JAqPPeMAgY
00:22:26.404   19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.JAqPPeMAgY
00:22:26.404   19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:22:26.662  [2024-12-13 19:06:58.248879] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:22:26.662   19:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
00:22:26.920   19:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
00:22:26.920  [2024-12-13 19:06:58.692919] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:22:26.920  [2024-12-13 19:06:58.693157] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:22:26.920   19:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
00:22:27.179  malloc0
00:22:27.437   19:06:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:22:27.696   19:06:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.JAqPPeMAgY
00:22:27.696   19:06:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
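The target-side TLS setup that just completed can be replayed by hand with the same RPCs; the sequence below is condensed verbatim from the xtrace lines above (/tmp/tmp.JAqPPeMAgY is the PSK interchange file this run used — substitute your own):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
  $rpc bdev_malloc_create 32 4096 -b malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $rpc keyring_file_add_key key0 /tmp/tmp.JAqPPeMAgY
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0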
00:22:27.954   19:06:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=103122
00:22:27.954   19:06:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1
00:22:27.954   19:06:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:22:27.954   19:06:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 103122 /var/tmp/bdevperf.sock
00:22:27.954   19:06:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 103122 ']'
00:22:27.954   19:06:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:22:27.954   19:06:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:27.954   19:06:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:22:27.954  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:22:27.954   19:06:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:22:27.954   19:06:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:22:27.954  [2024-12-13 19:06:59.762606] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:22:27.954  [2024-12-13 19:06:59.762722] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103122 ]
00:22:28.213  [2024-12-13 19:06:59.905981] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:28.213  [2024-12-13 19:06:59.946042] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:22:28.471   19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:22:28.471   19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:22:28.471   19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.JAqPPeMAgY
00:22:28.729   19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
00:22:28.729  [2024-12-13 19:07:00.518722] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:22:28.988  nvme0n1
00:22:28.988   19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:22:28.988  Running I/O for 1 seconds...
00:22:29.924       4744.00 IOPS,    18.53 MiB/s
00:22:29.924                                                                                                  Latency(us)
00:22:29.924  
[2024-12-13T19:07:01.748Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:22:29.924  Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:22:29.924  	 Verification LBA range: start 0x0 length 0x2000
00:22:29.924  	 nvme0n1             :       1.01    4803.94      18.77       0.00     0.00   26384.42     741.00   16801.05
00:22:29.924  
[2024-12-13T19:07:01.748Z]  ===================================================================================================================
00:22:29.924  
[2024-12-13T19:07:01.748Z]  Total                       :               4803.94      18.77       0.00     0.00   26384.42     741.00   16801.05
00:22:29.924  {
00:22:29.924    "results": [
00:22:29.924      {
00:22:29.924        "job": "nvme0n1",
00:22:29.924        "core_mask": "0x2",
00:22:29.924        "workload": "verify",
00:22:29.924        "status": "finished",
00:22:29.924        "verify_range": {
00:22:29.924          "start": 0,
00:22:29.924          "length": 8192
00:22:29.924        },
00:22:29.924        "queue_depth": 128,
00:22:29.924        "io_size": 4096,
00:22:29.924        "runtime": 1.014376,
00:22:29.924        "iops": 4803.938578988462,
00:22:29.924        "mibps": 18.76538507417368,
00:22:29.924        "io_failed": 0,
00:22:29.924        "io_timeout": 0,
00:22:29.924        "avg_latency_us": 26384.424537432606,
00:22:29.924        "min_latency_us": 741.0036363636364,
00:22:29.924        "max_latency_us": 16801.04727272727
00:22:29.924      }
00:22:29.924    ],
00:22:29.924    "core_count": 1
00:22:29.924  }
00:22:29.924   19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 103122
00:22:29.924   19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 103122 ']'
00:22:29.924   19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 103122
00:22:29.924    19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:22:29.924   19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:29.924    19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 103122
00:22:30.182   19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:22:30.183   19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:22:30.183  killing process with pid 103122
00:22:30.183   19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 103122'
00:22:30.183  Received shutdown signal, test time was about 1.000000 seconds
00:22:30.183  
00:22:30.183                                                                                                  Latency(us)
00:22:30.183  
[2024-12-13T19:07:02.007Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:22:30.183  
[2024-12-13T19:07:02.007Z]  ===================================================================================================================
00:22:30.183  
[2024-12-13T19:07:02.007Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:22:30.183   19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 103122
00:22:30.183   19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 103122
00:22:30.183   19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 103018
00:22:30.183   19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 103018 ']'
00:22:30.183   19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 103018
00:22:30.183    19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:22:30.183   19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:30.183    19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 103018
00:22:30.183   19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:22:30.183  killing process with pid 103018
00:22:30.183   19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:22:30.183   19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 103018'
00:22:30.183   19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 103018
00:22:30.183   19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 103018
00:22:30.441   19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart
00:22:30.441   19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:22:30.441   19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable
00:22:30.441   19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:22:30.441   19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=103184
00:22:30.441   19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:22:30.441   19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 103184
00:22:30.441   19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 103184 ']'
00:22:30.441   19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:30.441   19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:30.441  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:22:30.441   19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:22:30.442   19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:22:30.442   19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:22:30.442  [2024-12-13 19:07:02.244969] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:22:30.442  [2024-12-13 19:07:02.245072] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:22:30.700  [2024-12-13 19:07:02.396982] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:30.700  [2024-12-13 19:07:02.431646] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:22:30.700  [2024-12-13 19:07:02.431713] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:22:30.700  [2024-12-13 19:07:02.431723] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:22:30.700  [2024-12-13 19:07:02.431731] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:22:30.700  [2024-12-13 19:07:02.431738] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:22:30.700  [2024-12-13 19:07:02.432133] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:22:30.959   19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:22:30.959   19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:22:30.959   19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:22:30.959   19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable
00:22:30.959   19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:22:30.959   19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:22:30.959   19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd
00:22:30.959   19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:30.959   19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:22:30.959  [2024-12-13 19:07:02.609829] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:22:30.959  malloc0
00:22:30.959  [2024-12-13 19:07:02.641106] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:22:30.959  [2024-12-13 19:07:02.641368] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:22:30.959   19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:30.959   19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=103220
00:22:30.959   19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1
00:22:30.959   19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 103220 /var/tmp/bdevperf.sock
00:22:30.959   19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 103220 ']'
00:22:30.959   19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:22:30.959   19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:30.959  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:22:30.959   19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:22:30.959   19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:22:30.959   19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:22:30.959  [2024-12-13 19:07:02.737281] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:22:30.959  [2024-12-13 19:07:02.737379] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103220 ]
00:22:31.217  [2024-12-13 19:07:02.886134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:31.217  [2024-12-13 19:07:02.918489] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:22:31.475   19:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:22:31.475   19:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:22:31.475   19:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.JAqPPeMAgY
00:22:31.475   19:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
00:22:32.041  [2024-12-13 19:07:03.571298] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:22:32.041  nvme0n1
00:22:32.041   19:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:22:32.041  Running I/O for 1 seconds...
00:22:33.235       4864.00 IOPS,    19.00 MiB/s
00:22:33.235                                                                                                  Latency(us)
00:22:33.235  
[2024-12-13T19:07:05.059Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:22:33.235  Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:22:33.235  	 Verification LBA range: start 0x0 length 0x2000
00:22:33.235  	 nvme0n1             :       1.02    4871.05      19.03       0.00     0.00   26031.15    8519.68   19660.80
00:22:33.235  
[2024-12-13T19:07:05.059Z]  ===================================================================================================================
00:22:33.235  
[2024-12-13T19:07:05.059Z]  Total                       :               4871.05      19.03       0.00     0.00   26031.15    8519.68   19660.80
00:22:33.235  {
00:22:33.235    "results": [
00:22:33.235      {
00:22:33.235        "job": "nvme0n1",
00:22:33.235        "core_mask": "0x2",
00:22:33.235        "workload": "verify",
00:22:33.235        "status": "finished",
00:22:33.235        "verify_range": {
00:22:33.235          "start": 0,
00:22:33.235          "length": 8192
00:22:33.235        },
00:22:33.235        "queue_depth": 128,
00:22:33.235        "io_size": 4096,
00:22:33.235        "runtime": 1.024831,
00:22:33.235        "iops": 4871.0470311690415,
00:22:33.235        "mibps": 19.02752746550407,
00:22:33.235        "io_failed": 0,
00:22:33.235        "io_timeout": 0,
00:22:33.235        "avg_latency_us": 26031.14815850816,
00:22:33.235        "min_latency_us": 8519.68,
00:22:33.235        "max_latency_us": 19660.8
00:22:33.235      }
00:22:33.235    ],
00:22:33.235    "core_count": 1
00:22:33.235  }
00:22:33.235    19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config
00:22:33.235    19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:33.235    19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:22:33.235    19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:33.235   19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{
00:22:33.235  "subsystems": [
00:22:33.235  {
00:22:33.235  "subsystem": "keyring",
00:22:33.235  "config": [
00:22:33.235  {
00:22:33.235  "method": "keyring_file_add_key",
00:22:33.235  "params": {
00:22:33.235  "name": "key0",
00:22:33.235  "path": "/tmp/tmp.JAqPPeMAgY"
00:22:33.235  }
00:22:33.235  }
00:22:33.235  ]
00:22:33.235  },
00:22:33.235  {
00:22:33.235  "subsystem": "iobuf",
00:22:33.235  "config": [
00:22:33.235  {
00:22:33.235  "method": "iobuf_set_options",
00:22:33.235  "params": {
00:22:33.235  "enable_numa": false,
00:22:33.235  "large_bufsize": 135168,
00:22:33.235  "large_pool_count": 1024,
00:22:33.235  "small_bufsize": 8192,
00:22:33.235  "small_pool_count": 8192
00:22:33.235  }
00:22:33.235  }
00:22:33.235  ]
00:22:33.235  },
00:22:33.235  {
00:22:33.235  "subsystem": "sock",
00:22:33.235  "config": [
00:22:33.235  {
00:22:33.235  "method": "sock_set_default_impl",
00:22:33.235  "params": {
00:22:33.235  "impl_name": "posix"
00:22:33.235  }
00:22:33.235  },
00:22:33.235  {
00:22:33.235  "method": "sock_impl_set_options",
00:22:33.235  "params": {
00:22:33.235  "enable_ktls": false,
00:22:33.235  "enable_placement_id": 0,
00:22:33.235  "enable_quickack": false,
00:22:33.235  "enable_recv_pipe": true,
00:22:33.235  "enable_zerocopy_send_client": false,
00:22:33.235  "enable_zerocopy_send_server": true,
00:22:33.235  "impl_name": "ssl",
00:22:33.235  "recv_buf_size": 4096,
00:22:33.235  "send_buf_size": 4096,
00:22:33.235  "tls_version": 0,
00:22:33.235  "zerocopy_threshold": 0
00:22:33.235  }
00:22:33.235  },
00:22:33.235  {
00:22:33.235  "method": "sock_impl_set_options",
00:22:33.235  "params": {
00:22:33.235  "enable_ktls": false,
00:22:33.235  "enable_placement_id": 0,
00:22:33.235  "enable_quickack": false,
00:22:33.235  "enable_recv_pipe": true,
00:22:33.235  "enable_zerocopy_send_client": false,
00:22:33.235  "enable_zerocopy_send_server": true,
00:22:33.235  "impl_name": "posix",
00:22:33.235  "recv_buf_size": 2097152,
00:22:33.235  "send_buf_size": 2097152,
00:22:33.235  "tls_version": 0,
00:22:33.235  "zerocopy_threshold": 0
00:22:33.235  }
00:22:33.235  }
00:22:33.235  ]
00:22:33.235  },
00:22:33.235  {
00:22:33.235  "subsystem": "vmd",
00:22:33.235  "config": []
00:22:33.235  },
00:22:33.235  {
00:22:33.235  "subsystem": "accel",
00:22:33.235  "config": [
00:22:33.235  {
00:22:33.235  "method": "accel_set_options",
00:22:33.235  "params": {
00:22:33.235  "buf_count": 2048,
00:22:33.235  "large_cache_size": 16,
00:22:33.235  "sequence_count": 2048,
00:22:33.235  "small_cache_size": 128,
00:22:33.235  "task_count": 2048
00:22:33.235  }
00:22:33.235  }
00:22:33.235  ]
00:22:33.235  },
00:22:33.235  {
00:22:33.235  "subsystem": "bdev",
00:22:33.235  "config": [
00:22:33.235  {
00:22:33.235  "method": "bdev_set_options",
00:22:33.235  "params": {
00:22:33.235  "bdev_auto_examine": true,
00:22:33.235  "bdev_io_cache_size": 256,
00:22:33.235  "bdev_io_pool_size": 65535,
00:22:33.235  "iobuf_large_cache_size": 16,
00:22:33.235  "iobuf_small_cache_size": 128
00:22:33.235  }
00:22:33.235  },
00:22:33.235  {
00:22:33.235  "method": "bdev_raid_set_options",
00:22:33.235  "params": {
00:22:33.235  "process_max_bandwidth_mb_sec": 0,
00:22:33.235  "process_window_size_kb": 1024
00:22:33.235  }
00:22:33.235  },
00:22:33.235  {
00:22:33.235  "method": "bdev_iscsi_set_options",
00:22:33.235  "params": {
00:22:33.235  "timeout_sec": 30
00:22:33.235  }
00:22:33.235  },
00:22:33.235  {
00:22:33.235  "method": "bdev_nvme_set_options",
00:22:33.235  "params": {
00:22:33.235  "action_on_timeout": "none",
00:22:33.235  "allow_accel_sequence": false,
00:22:33.235  "arbitration_burst": 0,
00:22:33.235  "bdev_retry_count": 3,
00:22:33.235  "ctrlr_loss_timeout_sec": 0,
00:22:33.235  "delay_cmd_submit": true,
00:22:33.235  "dhchap_dhgroups": [
00:22:33.235  "null",
00:22:33.235  "ffdhe2048",
00:22:33.235  "ffdhe3072",
00:22:33.235  "ffdhe4096",
00:22:33.235  "ffdhe6144",
00:22:33.236  "ffdhe8192"
00:22:33.236  ],
00:22:33.236  "dhchap_digests": [
00:22:33.236  "sha256",
00:22:33.236  "sha384",
00:22:33.236  "sha512"
00:22:33.236  ],
00:22:33.236  "disable_auto_failback": false,
00:22:33.236  "fast_io_fail_timeout_sec": 0,
00:22:33.236  "generate_uuids": false,
00:22:33.236  "high_priority_weight": 0,
00:22:33.236  "io_path_stat": false,
00:22:33.236  "io_queue_requests": 0,
00:22:33.236  "keep_alive_timeout_ms": 10000,
00:22:33.236  "low_priority_weight": 0,
00:22:33.236  "medium_priority_weight": 0,
00:22:33.236  "nvme_adminq_poll_period_us": 10000,
00:22:33.236  "nvme_error_stat": false,
00:22:33.236  "nvme_ioq_poll_period_us": 0,
00:22:33.236  "rdma_cm_event_timeout_ms": 0,
00:22:33.236  "rdma_max_cq_size": 0,
00:22:33.236  "rdma_srq_size": 0,
00:22:33.236  "rdma_umr_per_io": false,
00:22:33.236  "reconnect_delay_sec": 0,
00:22:33.236  "timeout_admin_us": 0,
00:22:33.236  "timeout_us": 0,
00:22:33.236  "transport_ack_timeout": 0,
00:22:33.236  "transport_retry_count": 4,
00:22:33.236  "transport_tos": 0
00:22:33.236  }
00:22:33.236  },
00:22:33.236  {
00:22:33.236  "method": "bdev_nvme_set_hotplug",
00:22:33.236  "params": {
00:22:33.236  "enable": false,
00:22:33.236  "period_us": 100000
00:22:33.236  }
00:22:33.236  },
00:22:33.236  {
00:22:33.236  "method": "bdev_malloc_create",
00:22:33.236  "params": {
00:22:33.236  "block_size": 4096,
00:22:33.236  "dif_is_head_of_md": false,
00:22:33.236  "dif_pi_format": 0,
00:22:33.236  "dif_type": 0,
00:22:33.236  "md_size": 0,
00:22:33.236  "name": "malloc0",
00:22:33.236  "num_blocks": 8192,
00:22:33.236  "optimal_io_boundary": 0,
00:22:33.236  "physical_block_size": 4096,
00:22:33.236  "uuid": "4ca38e85-371a-4c5a-a161-d32bd0778c05"
00:22:33.236  }
00:22:33.236  },
00:22:33.236  {
00:22:33.236  "method": "bdev_wait_for_examine"
00:22:33.236  }
00:22:33.236  ]
00:22:33.236  },
00:22:33.236  {
00:22:33.236  "subsystem": "nbd",
00:22:33.236  "config": []
00:22:33.236  },
00:22:33.236  {
00:22:33.236  "subsystem": "scheduler",
00:22:33.236  "config": [
00:22:33.236  {
00:22:33.236  "method": "framework_set_scheduler",
00:22:33.236  "params": {
00:22:33.236  "name": "static"
00:22:33.236  }
00:22:33.236  }
00:22:33.236  ]
00:22:33.236  },
00:22:33.236  {
00:22:33.236  "subsystem": "nvmf",
00:22:33.236  "config": [
00:22:33.236  {
00:22:33.236  "method": "nvmf_set_config",
00:22:33.236  "params": {
00:22:33.236  "admin_cmd_passthru": {
00:22:33.236  "identify_ctrlr": false
00:22:33.236  },
00:22:33.236  "dhchap_dhgroups": [
00:22:33.236  "null",
00:22:33.236  "ffdhe2048",
00:22:33.236  "ffdhe3072",
00:22:33.236  "ffdhe4096",
00:22:33.236  "ffdhe6144",
00:22:33.236  "ffdhe8192"
00:22:33.236  ],
00:22:33.236  "dhchap_digests": [
00:22:33.236  "sha256",
00:22:33.236  "sha384",
00:22:33.236  "sha512"
00:22:33.236  ],
00:22:33.236  "discovery_filter": "match_any"
00:22:33.236  }
00:22:33.236  },
00:22:33.236  {
00:22:33.236  "method": "nvmf_set_max_subsystems",
00:22:33.236  "params": {
00:22:33.236  "max_subsystems": 1024
00:22:33.236  }
00:22:33.236  },
00:22:33.236  {
00:22:33.236  "method": "nvmf_set_crdt",
00:22:33.236  "params": {
00:22:33.236  "crdt1": 0,
00:22:33.236  "crdt2": 0,
00:22:33.236  "crdt3": 0
00:22:33.236  }
00:22:33.236  },
00:22:33.236  {
00:22:33.236  "method": "nvmf_create_transport",
00:22:33.236  "params": {
00:22:33.236  "abort_timeout_sec": 1,
00:22:33.236  "ack_timeout": 0,
00:22:33.236  "buf_cache_size": 4294967295,
00:22:33.236  "c2h_success": false,
00:22:33.236  "data_wr_pool_size": 0,
00:22:33.236  "dif_insert_or_strip": false,
00:22:33.236  "in_capsule_data_size": 4096,
00:22:33.236  "io_unit_size": 131072,
00:22:33.236  "max_aq_depth": 128,
00:22:33.236  "max_io_qpairs_per_ctrlr": 127,
00:22:33.236  "max_io_size": 131072,
00:22:33.236  "max_queue_depth": 128,
00:22:33.236  "num_shared_buffers": 511,
00:22:33.236  "sock_priority": 0,
00:22:33.236  "trtype": "TCP",
00:22:33.236  "zcopy": false
00:22:33.236  }
00:22:33.236  },
00:22:33.236  {
00:22:33.236  "method": "nvmf_create_subsystem",
00:22:33.236  "params": {
00:22:33.236  "allow_any_host": false,
00:22:33.236  "ana_reporting": false,
00:22:33.236  "max_cntlid": 65519,
00:22:33.236  "max_namespaces": 32,
00:22:33.236  "min_cntlid": 1,
00:22:33.236  "model_number": "SPDK bdev Controller",
00:22:33.236  "nqn": "nqn.2016-06.io.spdk:cnode1",
00:22:33.236  "serial_number": "00000000000000000000"
00:22:33.236  }
00:22:33.236  },
00:22:33.236  {
00:22:33.236  "method": "nvmf_subsystem_add_host",
00:22:33.236  "params": {
00:22:33.236  "host": "nqn.2016-06.io.spdk:host1",
00:22:33.236  "nqn": "nqn.2016-06.io.spdk:cnode1",
00:22:33.236  "psk": "key0"
00:22:33.236  }
00:22:33.236  },
00:22:33.236  {
00:22:33.236  "method": "nvmf_subsystem_add_ns",
00:22:33.236  "params": {
00:22:33.236  "namespace": {
00:22:33.236  "bdev_name": "malloc0",
00:22:33.236  "nguid": "4CA38E85371A4C5AA161D32BD0778C05",
00:22:33.236  "no_auto_visible": false,
00:22:33.236  "nsid": 1,
00:22:33.236  "uuid": "4ca38e85-371a-4c5a-a161-d32bd0778c05"
00:22:33.236  },
00:22:33.236  "nqn": "nqn.2016-06.io.spdk:cnode1"
00:22:33.236  }
00:22:33.236  },
00:22:33.236  {
00:22:33.236  "method": "nvmf_subsystem_add_listener",
00:22:33.236  "params": {
00:22:33.236  "listen_address": {
00:22:33.236  "adrfam": "IPv4",
00:22:33.236  "traddr": "10.0.0.3",
00:22:33.236  "trsvcid": "4420",
00:22:33.236  "trtype": "TCP"
00:22:33.236  },
00:22:33.236  "nqn": "nqn.2016-06.io.spdk:cnode1",
00:22:33.236  "secure_channel": false,
00:22:33.236  "sock_impl": "ssl"
00:22:33.236  }
00:22:33.236  }
00:22:33.236  ]
00:22:33.236  }
00:22:33.236  ]
00:22:33.236  }'
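The JSON blob closed above is a target-side configuration: the TLS-relevant pieces visible in it are the host added with "psk": "key0" and the listener created with "sock_impl": "ssl" and "secure_channel": false. A minimal sketch of inspecting such a blob with jq before it is handed to the target; the variable name $nvmfcfg is hypothetical and stands in for whatever string the test script built:

    # Assumes the config JSON above is held in $nvmfcfg (hypothetical name).
    # Pull out the nvmf methods that carry TLS material: the PSK-backed host and the ssl listener.
    echo "$nvmfcfg" | jq '.subsystems[]
      | select(.subsystem == "nvmf").config[]
      | select(.method == "nvmf_subsystem_add_host" or .method == "nvmf_subsystem_add_listener")'

    # And, if the blob also carries a keyring section, the entry that should back "key0".
    echo "$nvmfcfg" | jq '.subsystems[] | select(.subsystem == "keyring").config[].params'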
00:22:33.236    19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config
00:22:33.494   19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{
00:22:33.494    "subsystems": [
00:22:33.494      {
00:22:33.494        "subsystem": "keyring",
00:22:33.494        "config": [
00:22:33.494          {
00:22:33.494            "method": "keyring_file_add_key",
00:22:33.494            "params": {
00:22:33.494              "name": "key0",
00:22:33.494              "path": "/tmp/tmp.JAqPPeMAgY"
00:22:33.494            }
00:22:33.494          }
00:22:33.494        ]
00:22:33.494      },
00:22:33.494      {
00:22:33.494        "subsystem": "iobuf",
00:22:33.494        "config": [
00:22:33.494          {
00:22:33.494            "method": "iobuf_set_options",
00:22:33.494            "params": {
00:22:33.494              "enable_numa": false,
00:22:33.494              "large_bufsize": 135168,
00:22:33.494              "large_pool_count": 1024,
00:22:33.494              "small_bufsize": 8192,
00:22:33.494              "small_pool_count": 8192
00:22:33.494            }
00:22:33.494          }
00:22:33.494        ]
00:22:33.494      },
00:22:33.494      {
00:22:33.494        "subsystem": "sock",
00:22:33.494        "config": [
00:22:33.494          {
00:22:33.494            "method": "sock_set_default_impl",
00:22:33.494            "params": {
00:22:33.494              "impl_name": "posix"
00:22:33.494            }
00:22:33.494          },
00:22:33.494          {
00:22:33.494            "method": "sock_impl_set_options",
00:22:33.494            "params": {
00:22:33.494              "enable_ktls": false,
00:22:33.494              "enable_placement_id": 0,
00:22:33.494              "enable_quickack": false,
00:22:33.494              "enable_recv_pipe": true,
00:22:33.494              "enable_zerocopy_send_client": false,
00:22:33.494              "enable_zerocopy_send_server": true,
00:22:33.494              "impl_name": "ssl",
00:22:33.494              "recv_buf_size": 4096,
00:22:33.494              "send_buf_size": 4096,
00:22:33.494              "tls_version": 0,
00:22:33.494              "zerocopy_threshold": 0
00:22:33.494            }
00:22:33.494          },
00:22:33.494          {
00:22:33.494            "method": "sock_impl_set_options",
00:22:33.494            "params": {
00:22:33.494              "enable_ktls": false,
00:22:33.494              "enable_placement_id": 0,
00:22:33.494              "enable_quickack": false,
00:22:33.494              "enable_recv_pipe": true,
00:22:33.494              "enable_zerocopy_send_client": false,
00:22:33.494              "enable_zerocopy_send_server": true,
00:22:33.494              "impl_name": "posix",
00:22:33.494              "recv_buf_size": 2097152,
00:22:33.494              "send_buf_size": 2097152,
00:22:33.494              "tls_version": 0,
00:22:33.494              "zerocopy_threshold": 0
00:22:33.494            }
00:22:33.494          }
00:22:33.494        ]
00:22:33.494      },
00:22:33.494      {
00:22:33.494        "subsystem": "vmd",
00:22:33.494        "config": []
00:22:33.494      },
00:22:33.494      {
00:22:33.494        "subsystem": "accel",
00:22:33.494        "config": [
00:22:33.494          {
00:22:33.494            "method": "accel_set_options",
00:22:33.494            "params": {
00:22:33.494              "buf_count": 2048,
00:22:33.494              "large_cache_size": 16,
00:22:33.494              "sequence_count": 2048,
00:22:33.494              "small_cache_size": 128,
00:22:33.494              "task_count": 2048
00:22:33.495            }
00:22:33.495          }
00:22:33.495        ]
00:22:33.495      },
00:22:33.495      {
00:22:33.495        "subsystem": "bdev",
00:22:33.495        "config": [
00:22:33.495          {
00:22:33.495            "method": "bdev_set_options",
00:22:33.495            "params": {
00:22:33.495              "bdev_auto_examine": true,
00:22:33.495              "bdev_io_cache_size": 256,
00:22:33.495              "bdev_io_pool_size": 65535,
00:22:33.495              "iobuf_large_cache_size": 16,
00:22:33.495              "iobuf_small_cache_size": 128
00:22:33.495            }
00:22:33.495          },
00:22:33.495          {
00:22:33.495            "method": "bdev_raid_set_options",
00:22:33.495            "params": {
00:22:33.495              "process_max_bandwidth_mb_sec": 0,
00:22:33.495              "process_window_size_kb": 1024
00:22:33.495            }
00:22:33.495          },
00:22:33.495          {
00:22:33.495            "method": "bdev_iscsi_set_options",
00:22:33.495            "params": {
00:22:33.495              "timeout_sec": 30
00:22:33.495            }
00:22:33.495          },
00:22:33.495          {
00:22:33.495            "method": "bdev_nvme_set_options",
00:22:33.495            "params": {
00:22:33.495              "action_on_timeout": "none",
00:22:33.495              "allow_accel_sequence": false,
00:22:33.495              "arbitration_burst": 0,
00:22:33.495              "bdev_retry_count": 3,
00:22:33.495              "ctrlr_loss_timeout_sec": 0,
00:22:33.495              "delay_cmd_submit": true,
00:22:33.495              "dhchap_dhgroups": [
00:22:33.495                "null",
00:22:33.495                "ffdhe2048",
00:22:33.495                "ffdhe3072",
00:22:33.495                "ffdhe4096",
00:22:33.495                "ffdhe6144",
00:22:33.495                "ffdhe8192"
00:22:33.495              ],
00:22:33.495              "dhchap_digests": [
00:22:33.495                "sha256",
00:22:33.495                "sha384",
00:22:33.495                "sha512"
00:22:33.495              ],
00:22:33.495              "disable_auto_failback": false,
00:22:33.495              "fast_io_fail_timeout_sec": 0,
00:22:33.495              "generate_uuids": false,
00:22:33.495              "high_priority_weight": 0,
00:22:33.495              "io_path_stat": false,
00:22:33.495              "io_queue_requests": 512,
00:22:33.495              "keep_alive_timeout_ms": 10000,
00:22:33.495              "low_priority_weight": 0,
00:22:33.495              "medium_priority_weight": 0,
00:22:33.495              "nvme_adminq_poll_period_us": 10000,
00:22:33.495              "nvme_error_stat": false,
00:22:33.495              "nvme_ioq_poll_period_us": 0,
00:22:33.495              "rdma_cm_event_timeout_ms": 0,
00:22:33.495              "rdma_max_cq_size": 0,
00:22:33.495              "rdma_srq_size": 0,
00:22:33.495              "rdma_umr_per_io": false,
00:22:33.495              "reconnect_delay_sec": 0,
00:22:33.495              "timeout_admin_us": 0,
00:22:33.495              "timeout_us": 0,
00:22:33.495              "transport_ack_timeout": 0,
00:22:33.495              "transport_retry_count": 4,
00:22:33.495              "transport_tos": 0
00:22:33.495            }
00:22:33.495          },
00:22:33.495          {
00:22:33.495            "method": "bdev_nvme_attach_controller",
00:22:33.495            "params": {
00:22:33.495              "adrfam": "IPv4",
00:22:33.495              "ctrlr_loss_timeout_sec": 0,
00:22:33.495              "ddgst": false,
00:22:33.495              "fast_io_fail_timeout_sec": 0,
00:22:33.495              "hdgst": false,
00:22:33.495              "hostnqn": "nqn.2016-06.io.spdk:host1",
00:22:33.495              "multipath": "multipath",
00:22:33.495              "name": "nvme0",
00:22:33.495              "prchk_guard": false,
00:22:33.495              "prchk_reftag": false,
00:22:33.495              "psk": "key0",
00:22:33.495              "reconnect_delay_sec": 0,
00:22:33.495              "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:22:33.495              "traddr": "10.0.0.3",
00:22:33.495              "trsvcid": "4420",
00:22:33.495              "trtype": "TCP"
00:22:33.495            }
00:22:33.495          },
00:22:33.495          {
00:22:33.495            "method": "bdev_nvme_set_hotplug",
00:22:33.495            "params": {
00:22:33.495              "enable": false,
00:22:33.495              "period_us": 100000
00:22:33.495            }
00:22:33.495          },
00:22:33.495          {
00:22:33.495            "method": "bdev_enable_histogram",
00:22:33.495            "params": {
00:22:33.495              "enable": true,
00:22:33.495              "name": "nvme0n1"
00:22:33.495            }
00:22:33.495          },
00:22:33.495          {
00:22:33.495            "method": "bdev_wait_for_examine"
00:22:33.495          }
00:22:33.495        ]
00:22:33.495      },
00:22:33.495      {
00:22:33.495        "subsystem": "nbd",
00:22:33.495        "config": []
00:22:33.495      }
00:22:33.495    ]
00:22:33.495  }'
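The bperfcfg variable above is not hand-written: it is the running bdevperf configuration dumped over the application's private RPC socket with save_config (the command on the preceding trace line). A short sketch of the same pattern, reusable against any SPDK application that exposes an RPC socket; the jq filter is only an optional convenience:

    # Dump the live configuration of the bdevperf instance listening on /var/tmp/bdevperf.sock.
    cfg=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config)

    # Optionally keep only the subsystems of interest (plain jq filtering, no SPDK involvement).
    echo "$cfg" | jq '{subsystems: [.subsystems[] | select(.subsystem != "nbd")]}'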
00:22:33.495   19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 103220
00:22:33.495   19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 103220 ']'
00:22:33.495   19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 103220
00:22:33.753    19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:22:33.753   19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:33.753    19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 103220
00:22:33.753   19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:22:33.753   19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:22:33.753  killing process with pid 103220
00:22:33.753   19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 103220'
00:22:33.753   19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 103220
00:22:33.753  Received shutdown signal, test time was about 1.000000 seconds
00:22:33.753  
00:22:33.753                                                                                                  Latency(us)
00:22:33.753  
[2024-12-13T19:07:05.577Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:22:33.753  
[2024-12-13T19:07:05.577Z]  ===================================================================================================================
00:22:33.753  
[2024-12-13T19:07:05.577Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:22:33.753   19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 103220
00:22:33.753   19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 103184
00:22:33.753   19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 103184 ']'
00:22:33.753   19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 103184
00:22:33.753    19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:22:33.753   19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:33.753    19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 103184
00:22:33.753   19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:22:33.753   19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:22:33.753  killing process with pid 103184
00:22:33.753   19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 103184'
00:22:33.753   19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 103184
00:22:33.753   19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 103184
00:22:34.012   19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62
00:22:34.012   19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:22:34.012    19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{
00:22:34.012  "subsystems": [
00:22:34.012  {
00:22:34.012  "subsystem": "keyring",
00:22:34.012  "config": [
00:22:34.012  {
00:22:34.012  "method": "keyring_file_add_key",
00:22:34.012  "params": {
00:22:34.012  "name": "key0",
00:22:34.012  "path": "/tmp/tmp.JAqPPeMAgY"
00:22:34.012  }
00:22:34.012  }
00:22:34.012  ]
00:22:34.012  },
00:22:34.012  {
00:22:34.012  "subsystem": "iobuf",
00:22:34.012  "config": [
00:22:34.012  {
00:22:34.012  "method": "iobuf_set_options",
00:22:34.012  "params": {
00:22:34.012  "enable_numa": false,
00:22:34.012  "large_bufsize": 135168,
00:22:34.012  "large_pool_count": 1024,
00:22:34.012  "small_bufsize": 8192,
00:22:34.012  "small_pool_count": 8192
00:22:34.012  }
00:22:34.012  }
00:22:34.012  ]
00:22:34.012  },
00:22:34.012  {
00:22:34.012  "subsystem": "sock",
00:22:34.012  "config": [
00:22:34.012  {
00:22:34.012  "method": "sock_set_default_impl",
00:22:34.012  "params": {
00:22:34.012  "impl_name": "posix"
00:22:34.012  }
00:22:34.012  },
00:22:34.012  {
00:22:34.012  "method": "sock_impl_set_options",
00:22:34.012  "params": {
00:22:34.012  "enable_ktls": false,
00:22:34.012  "enable_placement_id": 0,
00:22:34.012  "enable_quickack": false,
00:22:34.012  "enable_recv_pipe": true,
00:22:34.012  "enable_zerocopy_send_client": false,
00:22:34.012  "enable_zerocopy_send_server": true,
00:22:34.012  "impl_name": "ssl",
00:22:34.012  "recv_buf_size": 4096,
00:22:34.012  "send_buf_size": 4096,
00:22:34.012  "tls_version": 0,
00:22:34.012  "zerocopy_threshold": 0
00:22:34.012  }
00:22:34.012  },
00:22:34.012  {
00:22:34.012  "method": "sock_impl_set_options",
00:22:34.012  "params": {
00:22:34.012  "enable_ktls": false,
00:22:34.012  "enable_placement_id": 0,
00:22:34.012  "enable_quickack": false,
00:22:34.012  "enable_recv_pipe": true,
00:22:34.012  "enable_zerocopy_send_client": false,
00:22:34.012  "enable_zerocopy_send_server": true,
00:22:34.012  "impl_name": "posix",
00:22:34.012  "recv_buf_size": 2097152,
00:22:34.012  "send_buf_size": 2097152,
00:22:34.012  "tls_version": 0,
00:22:34.012  "zerocopy_threshold": 0
00:22:34.012  }
00:22:34.012  }
00:22:34.012  ]
00:22:34.012  },
00:22:34.012  {
00:22:34.012  "subsystem": "vmd",
00:22:34.012  "config": []
00:22:34.012  },
00:22:34.012  {
00:22:34.012  "subsystem": "accel",
00:22:34.012  "config": [
00:22:34.012  {
00:22:34.012  "method": "accel_set_options",
00:22:34.012  "params": {
00:22:34.012  "buf_count": 2048,
00:22:34.012  "large_cache_size": 16,
00:22:34.012  "sequence_count": 2048,
00:22:34.012  "small_cache_size": 128,
00:22:34.012  "task_count": 2048
00:22:34.012  }
00:22:34.012  }
00:22:34.012  ]
00:22:34.012  },
00:22:34.012  {
00:22:34.012  "subsystem": "bdev",
00:22:34.012  "config": [
00:22:34.012  {
00:22:34.012  "method": "bdev_set_options",
00:22:34.012  "params": {
00:22:34.012  "bdev_auto_examine": true,
00:22:34.012  "bdev_io_cache_size": 256,
00:22:34.012  "bdev_io_pool_size": 65535,
00:22:34.012  "iobuf_large_cache_size": 16,
00:22:34.012  "iobuf_small_cache_size": 128
00:22:34.012  }
00:22:34.012  },
00:22:34.012  {
00:22:34.012  "method": "bdev_raid_set_options",
00:22:34.012  "params": {
00:22:34.012  "process_max_bandwidth_mb_sec": 0,
00:22:34.012  "process_window_size_kb": 1024
00:22:34.012  }
00:22:34.012  },
00:22:34.012  {
00:22:34.012  "method": "bdev_iscsi_set_options",
00:22:34.012  "params": {
00:22:34.012  "timeout_sec": 30
00:22:34.012  }
00:22:34.012  },
00:22:34.012  {
00:22:34.012  "method": "bdev_nvme_set_options",
00:22:34.012  "params": {
00:22:34.012  "action_on_timeout": "none",
00:22:34.012  "allow_accel_sequence": false,
00:22:34.012  "arbitration_burst": 0,
00:22:34.012  "bdev_retry_count": 3,
00:22:34.012  "ctrlr_loss_timeout_sec": 0,
00:22:34.012  "delay_cmd_submit": true,
00:22:34.013  "dhchap_dhgroups": [
00:22:34.013  "null",
00:22:34.013  "ffdhe2048",
00:22:34.013  "ffdhe3072",
00:22:34.013  "ffdhe4096",
00:22:34.013  "ffdhe6144",
00:22:34.013  "ffdhe8192"
00:22:34.013  ],
00:22:34.013  "dhchap_digests": [
00:22:34.013  "sha256",
00:22:34.013  "sha384",
00:22:34.013  "sha512"
00:22:34.013  ],
00:22:34.013  "disable_auto_failback": false,
00:22:34.013  "fast_io_fail_timeout_sec": 0,
00:22:34.013  "generate_uuids": false,
00:22:34.013  "high_priority_weight": 0,
00:22:34.013  "io_path_stat": false,
00:22:34.013  "io_queue_requests": 0,
00:22:34.013  "keep_alive_timeout_ms": 10000,
00:22:34.013  "low_priority_weight": 0,
00:22:34.013  "medium_priority_weight": 0,
00:22:34.013  "nvme_adminq_poll_period_us": 10000,
00:22:34.013  "nvme_error_stat": false,
00:22:34.013  "nvme_ioq_poll_period_us": 0,
00:22:34.013  "rdma_cm_event_timeout_ms": 0,
00:22:34.013  "rdma_max_cq_size": 0,
00:22:34.013  "rdma_srq_size": 0,
00:22:34.013  "rdma_umr_per_io": false,
00:22:34.013  "reconnect_delay_sec": 0,
00:22:34.013  "timeout_admin_us": 0,
00:22:34.013  "timeout_us": 0,
00:22:34.013  "transport_ack_timeout": 0,
00:22:34.013  "transport_retry_count": 4,
00:22:34.013  "transport_tos": 0
00:22:34.013  }
00:22:34.013  },
00:22:34.013  {
00:22:34.013  "method": "bdev_nvme_set_hotplug",
00:22:34.013  "params": {
00:22:34.013  "enable": false,
00:22:34.013  "period_us": 100000
00:22:34.013  }
00:22:34.013  },
00:22:34.013  {
00:22:34.013  "method": "bdev_malloc_create",
00:22:34.013  "params": {
00:22:34.013  "block_size": 4096,
00:22:34.013  "dif_is_head_of_md": false,
00:22:34.013  "dif_pi_format": 0,
00:22:34.013  "dif_type": 0,
00:22:34.013  "md_size": 0,
00:22:34.013  "name": "malloc0",
00:22:34.013  "num_blocks": 8192,
00:22:34.013  "optimal_io_boundary": 0,
00:22:34.013  "physical_block_size": 4096,
00:22:34.013  "uuid": "4ca38e85-371a-4c5a-a161-d32bd0778c05"
00:22:34.013  }
00:22:34.013  },
00:22:34.013  {
00:22:34.013  "method": "bdev_wait_for_examine"
00:22:34.013  }
00:22:34.013  ]
00:22:34.013  },
00:22:34.013  {
00:22:34.013  "subsystem": "nbd",
00:22:34.013  "config": []
00:22:34.013  },
00:22:34.013  {
00:22:34.013  "subsystem": "scheduler",
00:22:34.013  "config": [
00:22:34.013  {
00:22:34.013  "method": "framework_set_scheduler",
00:22:34.013  "params": {
00:22:34.013  "name": "static"
00:22:34.013  }
00:22:34.013  }
00:22:34.013  ]
00:22:34.013  },
00:22:34.013  {
00:22:34.013  "subsystem": "nvmf",
00:22:34.013  "config": [
00:22:34.013  {
00:22:34.013  "method": "nvmf_set_config",
00:22:34.013  "params": {
00:22:34.013  "admin_cmd_passthru": {
00:22:34.013  "identify_ctrlr": false
00:22:34.013  },
00:22:34.013  "dhchap_dhgroups": [
00:22:34.013  "null",
00:22:34.013  "ffdhe2048",
00:22:34.013  "ffdhe3072",
00:22:34.013  "ffdhe4096",
00:22:34.013  "ffdhe6144",
00:22:34.013  "ffdhe8192"
00:22:34.013  ],
00:22:34.013  "dhchap_digests": [
00:22:34.013  "sha256",
00:22:34.013  "sha384",
00:22:34.013  "sha512"
00:22:34.013  ],
00:22:34.013  "discovery_filter": "match_any"
00:22:34.013  }
00:22:34.013  },
00:22:34.013  {
00:22:34.013  "method": "nvmf_set_max_subsystems",
00:22:34.013  "params": {
00:22:34.013  "max_subsystems": 1024
00:22:34.013  }
00:22:34.013  },
00:22:34.013  {
00:22:34.013  "method": "nvmf_set_crdt",
00:22:34.013  "params": {
00:22:34.013  "crdt1": 0,
00:22:34.013  "crdt2": 0,
00:22:34.013  "crdt3": 0
00:22:34.013  }
00:22:34.013  },
00:22:34.013  {
00:22:34.013  "method": "nvmf_create_transport",
00:22:34.013  "params": {
00:22:34.013  "abort_timeout_sec": 1,
00:22:34.013  "ack_timeout": 0,
00:22:34.013  "buf_cache_size": 4294967295,
00:22:34.013  "c2h_success": false,
00:22:34.013  "data_wr_pool_size": 0,
00:22:34.013  "dif_insert_or_strip": fals 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable
00:22:34.013  e,
00:22:34.013  "in_capsule_data_size": 4096,
00:22:34.013  "io_unit_size": 131072,
00:22:34.013  "max_aq_depth": 128,
00:22:34.013  "max_io_qpairs_per_ctrlr": 127,
00:22:34.013  "max_io_size": 131072,
00:22:34.013  "max_queue_depth": 128,
00:22:34.013  "num_shared_buffers": 511,
00:22:34.013  "sock_priority": 0,
00:22:34.013  "trtype": "TCP",
00:22:34.013  "zcopy": false
00:22:34.013  }
00:22:34.013  },
00:22:34.013  {
00:22:34.013  "method": "nvmf_create_subsystem",
00:22:34.013  "params": {
00:22:34.013  "allow_any_host": false,
00:22:34.013  "ana_reporting": false,
00:22:34.013  "max_cntlid": 65519,
00:22:34.013  "max_namespaces": 32,
00:22:34.013  "min_cntlid": 1,
00:22:34.013  "model_number": "SPDK bdev Controller",
00:22:34.013  "nqn": "nqn.2016-06.io.spdk:cnode1",
00:22:34.013  "serial_number": "00000000000000000000"
00:22:34.013  }
00:22:34.013  },
00:22:34.013  {
00:22:34.013  "method": "nvmf_subsystem_add_host",
00:22:34.013  "params": {
00:22:34.013  "host": "nqn.2016-06.io.spdk:host1",
00:22:34.013  "nqn": "nqn.2016-06.io.spdk:cnode1",
00:22:34.013  "psk": "key0"
00:22:34.013  }
00:22:34.013  },
00:22:34.013  {
00:22:34.013  "method": "nvmf_subsystem_add_ns",
00:22:34.013  "params": {
00:22:34.013  "namespace": {
00:22:34.013  "bdev_name": "malloc0",
00:22:34.013  "nguid": "4CA38E85371A4C5AA161D32BD0778C05",
00:22:34.013  "no_auto_visible": false,
00:22:34.013  "nsid": 1,
00:22:34.013  "uuid": "4ca38e85-371a-4c5a-a161-d32bd0778c05"
00:22:34.013  },
00:22:34.013  "nqn": "nqn.2016-06.io.spdk:cnode1"
00:22:34.013  }
00:22:34.013  },
00:22:34.013  {
00:22:34.013  "method": "nvmf_subsystem_add_listener",
00:22:34.013  "params": {
00:22:34.013  "listen_address": {
00:22:34.013  "adrfam": "IPv4",
00:22:34.013  "traddr": "10.0.0.3",
00:22:34.013  "trsvcid": "4420",
00:22:34.013  "trtype": "TCP"
00:22:34.013  },
00:22:34.013  "nqn": "nqn.2016-06.io.spdk:cnode1",
00:22:34.013  "secure_channel": false,
00:22:34.013  "sock_impl": "ssl"
00:22:34.013  }
00:22:34.013  }
00:22:34.013  ]
00:22:34.013  }
00:22:34.013  ]
00:22:34.013  }'
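The config echoed above reaches the freshly started target through -c /dev/fd/62: the test uses bash process substitution, so nvmf_tgt reads its JSON configuration from a pipe rather than a file on disk. A minimal sketch of that pattern, assuming $cfg holds the JSON shown above; the flags mirror the nvmf_tgt invocation traced a few lines below:

    # Start the target inside the test namespace, reading its JSON config from a process substitution.
    # <(printf '%s' "$cfg") appears inside the process as /dev/fd/62 (or another fd number).
    ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(printf '%s' "$cfg") &
    nvmfpid=$!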
00:22:34.013   19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:22:34.013   19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=103293
00:22:34.013   19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62
00:22:34.013   19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 103293
00:22:34.013   19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 103293 ']'
00:22:34.013   19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:34.013   19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:34.013  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:22:34.013   19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:22:34.013   19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:22:34.013   19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:22:34.013  [2024-12-13 19:07:05.805001] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:22:34.013  [2024-12-13 19:07:05.805085] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:22:34.272  [2024-12-13 19:07:05.947589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:34.272  [2024-12-13 19:07:05.987241] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:22:34.272  [2024-12-13 19:07:05.987293] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:22:34.272  [2024-12-13 19:07:05.987319] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:22:34.272  [2024-12-13 19:07:05.987326] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:22:34.272  [2024-12-13 19:07:05.987333] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:22:34.272  [2024-12-13 19:07:05.987766] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:22:34.531  [2024-12-13 19:07:06.224690] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:22:34.531  [2024-12-13 19:07:06.256646] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:22:34.531  [2024-12-13 19:07:06.256921] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:22:35.100   19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:22:35.100   19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:22:35.100   19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:22:35.100   19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable
00:22:35.100   19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:22:35.100   19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:22:35.100   19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=103337
00:22:35.100   19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 103337 /var/tmp/bdevperf.sock
00:22:35.100   19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 103337 ']'
00:22:35.100   19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:22:35.100   19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:35.100  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:22:35.100   19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:22:35.100   19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:22:35.100   19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:22:35.100   19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63
00:22:35.100    19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{
00:22:35.100    "subsystems": [
00:22:35.100      {
00:22:35.100        "subsystem": "keyring",
00:22:35.100        "config": [
00:22:35.100          {
00:22:35.100            "method": "keyring_file_add_key",
00:22:35.100            "params": {
00:22:35.100              "name": "key0",
00:22:35.100              "path": "/tmp/tmp.JAqPPeMAgY"
00:22:35.100            }
00:22:35.100          }
00:22:35.100        ]
00:22:35.100      },
00:22:35.100      {
00:22:35.100        "subsystem": "iobuf",
00:22:35.100        "config": [
00:22:35.100          {
00:22:35.100            "method": "iobuf_set_options",
00:22:35.100            "params": {
00:22:35.100              "enable_numa": false,
00:22:35.100              "large_bufsize": 135168,
00:22:35.100              "large_pool_count": 1024,
00:22:35.100              "small_bufsize": 8192,
00:22:35.100              "small_pool_count": 8192
00:22:35.100            }
00:22:35.100          }
00:22:35.100        ]
00:22:35.100      },
00:22:35.100      {
00:22:35.100        "subsystem": "sock",
00:22:35.100        "config": [
00:22:35.100          {
00:22:35.100            "method": "sock_set_default_impl",
00:22:35.100            "params": {
00:22:35.100              "impl_name": "posix"
00:22:35.100            }
00:22:35.100          },
00:22:35.100          {
00:22:35.100            "method": "sock_impl_set_options",
00:22:35.100            "params": {
00:22:35.100              "enable_ktls": false,
00:22:35.100              "enable_placement_id": 0,
00:22:35.100              "enable_quickack": false,
00:22:35.100              "enable_recv_pipe": true,
00:22:35.100              "enable_zerocopy_send_client": false,
00:22:35.100              "enable_zerocopy_send_server": true,
00:22:35.100              "impl_name": "ssl",
00:22:35.100              "recv_buf_size": 4096,
00:22:35.100              "send_buf_size": 4096,
00:22:35.100              "tls_version": 0,
00:22:35.100              "zerocopy_threshold": 0
00:22:35.100            }
00:22:35.100          },
00:22:35.100          {
00:22:35.100            "method": "sock_impl_set_options",
00:22:35.100            "params": {
00:22:35.100              "enable_ktls": false,
00:22:35.100              "enable_placement_id": 0,
00:22:35.100              "enable_quickack": false,
00:22:35.100              "enable_recv_pipe": true,
00:22:35.100              "enable_zerocopy_send_client": false,
00:22:35.100              "enable_zerocopy_send_server": true,
00:22:35.100              "impl_name": "posix",
00:22:35.100              "recv_buf_size": 2097152,
00:22:35.100              "send_buf_size": 2097152,
00:22:35.100              "tls_version": 0,
00:22:35.100              "zerocopy_threshold": 0
00:22:35.100            }
00:22:35.100          }
00:22:35.100        ]
00:22:35.100      },
00:22:35.100      {
00:22:35.100        "subsystem": "vmd",
00:22:35.100        "config": []
00:22:35.100      },
00:22:35.100      {
00:22:35.100        "subsystem": "accel",
00:22:35.100        "config": [
00:22:35.100          {
00:22:35.100            "method": "accel_set_options",
00:22:35.100            "params": {
00:22:35.100              "buf_count": 2048,
00:22:35.100              "large_cache_size": 16,
00:22:35.100              "sequence_count": 2048,
00:22:35.100              "small_cache_size": 128,
00:22:35.100              "task_count": 2048
00:22:35.100            }
00:22:35.100          }
00:22:35.100        ]
00:22:35.100      },
00:22:35.100      {
00:22:35.100        "subsystem": "bdev",
00:22:35.100        "config": [
00:22:35.100          {
00:22:35.100            "method": "bdev_set_options",
00:22:35.100            "params": {
00:22:35.100              "bdev_auto_examine": true,
00:22:35.100              "bdev_io_cache_size": 256,
00:22:35.100              "bdev_io_pool_size": 65535,
00:22:35.100              "iobuf_large_cache_size": 16,
00:22:35.100              "iobuf_small_cache_size": 128
00:22:35.100            }
00:22:35.100          },
00:22:35.100          {
00:22:35.100            "method": "bdev_raid_set_options",
00:22:35.100            "params": {
00:22:35.100              "process_max_bandwidth_mb_sec": 0,
00:22:35.100              "process_window_size_kb": 1024
00:22:35.100            }
00:22:35.100          },
00:22:35.100          {
00:22:35.100            "method": "bdev_iscsi_set_options",
00:22:35.100            "params": {
00:22:35.100              "timeout_sec": 30
00:22:35.100            }
00:22:35.100          },
00:22:35.100          {
00:22:35.100            "method": "bdev_nvme_set_options",
00:22:35.100            "params": {
00:22:35.100              "action_on_timeout": "none",
00:22:35.100              "allow_accel_sequence": false,
00:22:35.100              "arbitration_burst": 0,
00:22:35.100              "bdev_retry_count": 3,
00:22:35.100              "ctrlr_loss_timeout_sec": 0,
00:22:35.100              "delay_cmd_submit": true,
00:22:35.100              "dhchap_dhgroups": [
00:22:35.100                "null",
00:22:35.100                "ffdhe2048",
00:22:35.100                "ffdhe3072",
00:22:35.100                "ffdhe4096",
00:22:35.100                "ffdhe6144",
00:22:35.100                "ffdhe8192"
00:22:35.100              ],
00:22:35.100              "dhchap_digests": [
00:22:35.100                "sha256",
00:22:35.100                "sha384",
00:22:35.100                "sha512"
00:22:35.100              ],
00:22:35.100              "disable_auto_failback": false,
00:22:35.100              "fast_io_fail_timeout_sec": 0,
00:22:35.100              "generate_uuids": false,
00:22:35.100              "high_priority_weight": 0,
00:22:35.100              "io_path_stat": false,
00:22:35.100              "io_queue_requests": 512,
00:22:35.100              "keep_alive_timeout_ms": 10000,
00:22:35.100              "low_priority_weight": 0,
00:22:35.100              "medium_priority_weight": 0,
00:22:35.100              "nvme_adminq_poll_period_us": 10000,
00:22:35.100              "nvme_error_stat": false,
00:22:35.100              "nvme_ioq_poll_period_us": 0,
00:22:35.100              "rdma_cm_event_timeout_ms": 0,
00:22:35.100              "rdma_max_cq_size": 0,
00:22:35.100              "rdma_srq_size": 0,
00:22:35.100              "rdma_umr_per_io": false,
00:22:35.100              "reconnect_delay_sec": 0,
00:22:35.100              "timeout_admin_us": 0,
00:22:35.100              "timeout_us": 0,
00:22:35.100              "transport_ack_timeout": 0,
00:22:35.100              "transport_retry_count": 4,
00:22:35.100              "transport_tos": 0
00:22:35.100            }
00:22:35.100          },
00:22:35.100          {
00:22:35.100            "method": "bdev_nvme_attach_controller",
00:22:35.100            "params": {
00:22:35.100              "adrfam": "IPv4",
00:22:35.100              "ctrlr_loss_timeout_sec": 0,
00:22:35.100              "ddgst": false,
00:22:35.100              "fast_io_fail_timeout_sec": 0,
00:22:35.100              "hdgst": false,
00:22:35.100              "hostnqn": "nqn.2016-06.io.spdk:host1",
00:22:35.100              "multipath": "multipath",
00:22:35.100              "name": "nvme0",
00:22:35.100              "prchk_guard": false,
00:22:35.100              "prchk_reftag": false,
00:22:35.100              "psk": "key0",
00:22:35.100              "reconnect_delay_sec": 0,
00:22:35.100              "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:22:35.100              "traddr": "10.0.0.3",
00:22:35.100              "trsvcid": "4420",
00:22:35.100              "trtype": "TCP"
00:22:35.100            }
00:22:35.100          },
00:22:35.100          {
00:22:35.100            "method": "bdev_nvme_set_hotplug",
00:22:35.100            "params": {
00:22:35.100              "enable": false,
00:22:35.100              "period_us": 100000
00:22:35.100            }
00:22:35.100          },
00:22:35.101          {
00:22:35.101            "method": "bdev_enable_histogram",
00:22:35.101            "params": {
00:22:35.101              "enable": true,
00:22:35.101              "name": "nvme0n1"
00:22:35.101            }
00:22:35.101          },
00:22:35.101          {
00:22:35.101            "method": "bdev_wait_for_examine"
00:22:35.101          }
00:22:35.101        ]
00:22:35.101      },
00:22:35.101      {
00:22:35.101        "subsystem": "nbd",
00:22:35.101        "config": []
00:22:35.101      }
00:22:35.101    ]
00:22:35.101  }'
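The client side follows the same pattern: bdevperf is started idle with -z and a private RPC socket, its config arrives on /dev/fd/63, and the workload is only kicked off once the TLS-backed controller has attached. A minimal sketch of that sequence, reusing the bdevperf flags traced above and the RPC calls traced below; $bperfcfg is the JSON shown above:

    # Launch bdevperf idle (-z) with a private RPC socket and the JSON config on a pipe.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 \
      -c <(printf '%s' "$bperfcfg") &

    # Confirm the NVMe-oF/TLS controller from the config actually attached as "nvme0".
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_get_controllers | jq -r '.[].name'

    # Start the 1-second verify workload.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests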
00:22:35.101  [2024-12-13 19:07:06.857315] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:22:35.101  [2024-12-13 19:07:06.857407] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103337 ]
00:22:35.360  [2024-12-13 19:07:06.998418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:35.360  [2024-12-13 19:07:07.041405] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:22:35.618  [2024-12-13 19:07:07.213765] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:22:36.184   19:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:22:36.184   19:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:22:36.184    19:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:22:36.184    19:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name'
00:22:36.441   19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:36.441   19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:22:36.441  Running I/O for 1 seconds...
00:22:37.818       4864.00 IOPS,    19.00 MiB/s
00:22:37.818                                                                                                  Latency(us)
00:22:37.818  
[2024-12-13T19:07:09.642Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:22:37.818  Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:22:37.818  	 Verification LBA range: start 0x0 length 0x2000
00:22:37.818  	 nvme0n1             :       1.02    4888.90      19.10       0.00     0.00   25939.31    6196.13   17396.83
00:22:37.818  
[2024-12-13T19:07:09.642Z]  ===================================================================================================================
00:22:37.818  
[2024-12-13T19:07:09.642Z]  Total                       :               4888.90      19.10       0.00     0.00   25939.31    6196.13   17396.83
00:22:37.818  {
00:22:37.818    "results": [
00:22:37.818      {
00:22:37.818        "job": "nvme0n1",
00:22:37.818        "core_mask": "0x2",
00:22:37.818        "workload": "verify",
00:22:37.818        "status": "finished",
00:22:37.818        "verify_range": {
00:22:37.818          "start": 0,
00:22:37.818          "length": 8192
00:22:37.818        },
00:22:37.818        "queue_depth": 128,
00:22:37.818        "io_size": 4096,
00:22:37.818        "runtime": 1.021089,
00:22:37.818        "iops": 4888.8980294567855,
00:22:37.818        "mibps": 19.09725792756557,
00:22:37.818        "io_failed": 0,
00:22:37.818        "io_timeout": 0,
00:22:37.818        "avg_latency_us": 25939.30741258741,
00:22:37.818        "min_latency_us": 6196.130909090909,
00:22:37.818        "max_latency_us": 17396.82909090909
00:22:37.818      }
00:22:37.818    ],
00:22:37.818    "core_count": 1
00:22:37.818  }
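The throughput figures in the results JSON are internally consistent: MiB/s is simply IOPS multiplied by the 4096-byte I/O size and divided by 2^20. A quick check with the values reported above:

    # 4888.898 IOPS * 4096 B per I/O / 1048576 = ~19.10 MiB/s, matching "mibps" in the JSON above.
    awk 'BEGIN { iops = 4888.8980294567855; io = 4096; printf "%.5f MiB/s\n", iops * io / 1048576 }'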
00:22:37.818   19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT
00:22:37.818   19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup
00:22:37.818   19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0
00:22:37.818   19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id
00:22:37.818   19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0
00:22:37.818   19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']'
00:22:37.818    19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:22:37.818   19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0
00:22:37.818   19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]]
00:22:37.818   19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files
00:22:37.818   19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:22:37.818  nvmf_trace.0
00:22:37.818   19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0
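The cleanup step above archives the nvmf_trace.0 shared-memory file so it survives the interface teardown that follows. A minimal sketch of unpacking that archive afterwards (the destination directory is arbitrary); the startup notices earlier in this log name spdk_trace -s nvmf -i 0 as the tool for reading such traces while the application is still running:

    # Unpack the trace archive produced by the tar command above into a scratch directory.
    mkdir -p /tmp/nvmf_trace_out
    tar -C /tmp/nvmf_trace_out -xzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz
    ls -l /tmp/nvmf_trace_out/nvmf_trace.0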
00:22:37.818   19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 103337
00:22:37.818   19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 103337 ']'
00:22:37.818   19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 103337
00:22:37.818    19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:22:37.818   19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:37.818    19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 103337
00:22:37.818   19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:22:37.818   19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:22:37.818  killing process with pid 103337
00:22:37.818   19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 103337'
00:22:37.818   19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 103337
00:22:37.818  Received shutdown signal, test time was about 1.000000 seconds
00:22:37.818  
00:22:37.818                                                                                                  Latency(us)
00:22:37.818  
[2024-12-13T19:07:09.642Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:22:37.818  
[2024-12-13T19:07:09.642Z]  ===================================================================================================================
00:22:37.818  
[2024-12-13T19:07:09.642Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:22:37.818   19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 103337
00:22:37.818   19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini
00:22:37.818   19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup
00:22:37.818   19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync
00:22:37.818   19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:22:37.818   19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e
00:22:37.818   19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20}
00:22:37.818   19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:22:37.818  rmmod nvme_tcp
00:22:37.818  rmmod nvme_fabrics
00:22:38.078  rmmod nvme_keyring
00:22:38.078   19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:22:38.078   19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e
00:22:38.078   19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0
00:22:38.078   19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 103293 ']'
00:22:38.078   19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 103293
00:22:38.078   19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 103293 ']'
00:22:38.078   19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 103293
00:22:38.078    19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:22:38.078   19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:38.078    19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 103293
00:22:38.078   19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:22:38.078   19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:22:38.078  killing process with pid 103293
00:22:38.078   19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 103293'
00:22:38.078   19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 103293
00:22:38.078   19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 103293
00:22:38.078   19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:22:38.078   19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:22:38.078   19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:22:38.078   19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr
00:22:38.078   19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save
00:22:38.078   19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:22:38.078   19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore
00:22:38.078   19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:22:38.078   19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:22:38.078   19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:22:38.337   19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:22:38.337   19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:22:38.337   19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:22:38.337   19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:22:38.337   19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:22:38.337   19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:22:38.337   19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:22:38.337   19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:22:38.337   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:22:38.337   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:22:38.337   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:22:38.337   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:22:38.337   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns
00:22:38.337   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:38.337   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:22:38.337    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:38.337   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0
00:22:38.337   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.yda1RV7tUn /tmp/tmp.PUcrIFzmCb /tmp/tmp.JAqPPeMAgY
00:22:38.337  
00:22:38.337  real	1m23.142s
00:22:38.337  user	2m14.388s
00:22:38.337  sys	0m27.728s
00:22:38.337   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable
00:22:38.337   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:22:38.337  ************************************
00:22:38.337  END TEST nvmf_tls
00:22:38.337  ************************************
00:22:38.597   19:07:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp
00:22:38.597   19:07:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:22:38.597   19:07:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:22:38.597   19:07:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:22:38.597  ************************************
00:22:38.597  START TEST nvmf_fips
00:22:38.597  ************************************
00:22:38.597   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp
00:22:38.597  * Looking for test storage...
00:22:38.597  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips
00:22:38.597    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:22:38.597     19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version
00:22:38.597     19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:22:38.597    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:22:38.597    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:22:38.597    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l
00:22:38.597    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l
00:22:38.597    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-:
00:22:38.597    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1
00:22:38.597    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-:
00:22:38.597    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2
00:22:38.597    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<'
00:22:38.597    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2
00:22:38.597    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1
00:22:38.597    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:22:38.597    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in
00:22:38.597    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1
00:22:38.597    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 ))
00:22:38.597    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:22:38.597     19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1
00:22:38.597     19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1
00:22:38.597     19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:22:38.597     19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1
00:22:38.597    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1
00:22:38.597     19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2
00:22:38.597     19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2
00:22:38.597     19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:22:38.597     19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2
00:22:38.597    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2
00:22:38.597    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:22:38.597    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:22:38.597    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0
00:22:38.597    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:22:38.597    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:22:38.597  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:38.597  		--rc genhtml_branch_coverage=1
00:22:38.597  		--rc genhtml_function_coverage=1
00:22:38.597  		--rc genhtml_legend=1
00:22:38.597  		--rc geninfo_all_blocks=1
00:22:38.597  		--rc geninfo_unexecuted_blocks=1
00:22:38.597  		
00:22:38.597  		'
00:22:38.597    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:22:38.597  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:38.597  		--rc genhtml_branch_coverage=1
00:22:38.597  		--rc genhtml_function_coverage=1
00:22:38.597  		--rc genhtml_legend=1
00:22:38.597  		--rc geninfo_all_blocks=1
00:22:38.597  		--rc geninfo_unexecuted_blocks=1
00:22:38.597  		
00:22:38.597  		'
00:22:38.597    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:22:38.597  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:38.597  		--rc genhtml_branch_coverage=1
00:22:38.597  		--rc genhtml_function_coverage=1
00:22:38.597  		--rc genhtml_legend=1
00:22:38.597  		--rc geninfo_all_blocks=1
00:22:38.597  		--rc geninfo_unexecuted_blocks=1
00:22:38.597  		
00:22:38.597  		'
00:22:38.597    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:22:38.597  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:38.597  		--rc genhtml_branch_coverage=1
00:22:38.597  		--rc genhtml_function_coverage=1
00:22:38.597  		--rc genhtml_legend=1
00:22:38.597  		--rc geninfo_all_blocks=1
00:22:38.597  		--rc geninfo_unexecuted_blocks=1
00:22:38.597  		
00:22:38.597  		'
00:22:38.597   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:22:38.597     19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s
00:22:38.597    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:22:38.597    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:22:38.597    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:22:38.597    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:22:38.597    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:22:38.597    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:22:38.597    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:22:38.597    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:22:38.597    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:22:38.597     19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:22:38.597    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:22:38.597    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:22:38.597    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:22:38.597    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:22:38.597    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:22:38.597    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:22:38.597    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:22:38.597     19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob
00:22:38.597     19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:22:38.597     19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:22:38.597     19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:22:38.597      19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:38.597      19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:38.597      19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:38.597      19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH
00:22:38.598      19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:38.598    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0
00:22:38.598    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:22:38.598    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:22:38.598    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:22:38.598    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:22:38.598    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:22:38.598    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:22:38.598  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:22:38.598    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:22:38.598    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:22:38.598    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0
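Note on the "integer expression expected" message a few lines up: it comes from the test '[' '' -eq 1 ']' at nvmf/common.sh line 33. The -eq operator needs integers on both sides, and the variable under test expanded to an empty string, so [ exits with status 2, bash prints the warning, and the harness simply carries on. A guard of roughly this shape keeps the comparison quiet (SOME_FLAG is a stand-in name, not the variable nvmf/common.sh actually checks):

    # Sketch only: do the numeric comparison only when the flag is actually set.
    if [[ -n "${SOME_FLAG:-}" ]] && (( SOME_FLAG == 1 )); then
        echo "flag enabled"
    fi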
00:22:38.598   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:22:38.598   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version
00:22:38.598   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0
00:22:38.598    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}'
00:22:38.598    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version
00:22:38.598   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0
00:22:38.598   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0
00:22:38.598   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l
00:22:38.598   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l
00:22:38.598   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-:
00:22:38.598   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1
00:22:38.598   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-:
00:22:38.598   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2
00:22:38.598   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>='
00:22:38.598   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3
00:22:38.598   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3
00:22:38.598   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:22:38.598   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in
00:22:38.598   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1
00:22:38.598   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 ))
00:22:38.598   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:22:38.598    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3
00:22:38.598    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3
00:22:38.598    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]]
00:22:38.598    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3
00:22:38.598   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3
00:22:38.598    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3
00:22:38.598    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3
00:22:38.598    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]]
00:22:38.598    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3
00:22:38.598   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3
00:22:38.598   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:22:38.598   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:22:38.598   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ ))
00:22:38.598   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:22:38.598    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1
00:22:38.598    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1
00:22:38.598    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:22:38.598    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1
00:22:38.598   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1
00:22:38.598    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0
00:22:38.598    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0
00:22:38.598    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]]
00:22:38.598    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0
00:22:38.598   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0
00:22:38.598   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:22:38.598   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0
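The trace above is cmp_versions from scripts/common.sh splitting both version strings on '.', '-' and ':' and comparing them component by component; that is how 'ge 3.1.1 3.0.0' concludes the installed OpenSSL 3.1.1 meets the 3.0.0 floor required for the FIPS provider. A compact standalone equivalent in the same spirit (a sketch, not the SPDK helper itself):

    # Minimal "version >= version" check; assumes purely numeric components.
    version_ge() {
        local IFS=.-:
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 1
        done
        return 0   # all components equal
    }
    version_ge "$(openssl version | awk '{print $2}')" 3.0.0 && echo "OpenSSL is new enough"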
00:22:38.598    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir
00:22:38.857   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]]
00:22:38.857    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help
00:22:38.857   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode'
00:22:38.857   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]]
00:22:38.857   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config
00:22:38.857   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config
00:22:38.857   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config
00:22:38.857   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat
00:22:38.857   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! -t 0 ]]
00:22:38.857   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat -
00:22:38.858   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf
00:22:38.858   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf
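build_openssl_config writes a provider configuration and points OPENSSL_CONF at it, so every subsequent openssl invocation (and the target process itself) loads the FIPS provider. The file SPDK emits is not shown in the trace; a representative OpenSSL 3.x provider config of the same general shape looks like this (section names and the choice to activate both the base and fips providers are illustrative):

    # spdk_fips.conf (illustrative)
    openssl_conf = openssl_init

    [openssl_init]
    providers = provider_sect

    [provider_sect]
    base = base_sect
    fips = fips_sect

    [base_sect]
    activate = 1

    [fips_sect]
    activate = 1

On a stock OpenSSL build the [fips_sect] normally has to be generated by 'openssl fipsinstall' because it carries the module integrity checksum; that is why fips.sh probes 'openssl fipsinstall -help' above and accepts the Red Hat build's "command not enabled" answer as a special case.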
00:22:38.858   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers
00:22:38.858    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers
00:22:38.858    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name
00:22:38.858   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 ))
00:22:38.858   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[     name: openssl base provider != *base* ]]
00:22:38.858   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[     name: red hat enterprise linux 9 - openssl fips provider != *fips* ]]
00:22:38.858   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62
00:22:38.858    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # :
00:22:38.858   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0
00:22:38.858   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62
00:22:38.858   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl
00:22:38.858   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:38.858    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl
00:22:38.858   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:38.858    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl
00:22:38.858   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:38.858   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl
00:22:38.858   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]]
00:22:38.858   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62
00:22:38.858  Error setting digest
00:22:38.858  4022EE2B787F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties ()
00:22:38.858  4022EE2B787F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272:
00:22:38.858   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1
00:22:38.858   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:22:38.858   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:22:38.858   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 ))
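The NOT wrapper above is a negative test: with the FIPS provider active, 'openssl md5' must fail (MD5 is not an approved digest), and that expected failure is what the "Error setting digest" lines show. A quick manual spot check along the same lines:

    # With OPENSSL_CONF pointing at the FIPS config:
    if echo -n test | openssl md5 >/dev/null 2>&1; then
        echo "unexpected: MD5 is still available"
    else
        echo "MD5 blocked, as required in FIPS mode"
    fi
    echo -n test | openssl sha256   # approved digest, should still print a hash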
00:22:38.858   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit
00:22:38.858   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:22:38.858   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:22:38.858   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs
00:22:38.858   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no
00:22:38.858   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns
00:22:38.858   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:38.858   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:22:38.858    19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:38.858   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:22:38.858   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:22:38.858   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:22:38.858   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:22:38.858   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:22:38.858   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@460 -- # nvmf_veth_init
00:22:38.858   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:22:38.858   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:22:38.858   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:22:38.858   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:22:38.858   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:22:38.858   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:22:38.858   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:22:38.858   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:22:38.858   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:22:38.858   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:22:38.858   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:22:38.858   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:22:38.858   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:22:38.858   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:22:38.858   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:22:38.858   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:22:38.858   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:22:38.858  Cannot find device "nvmf_init_br"
00:22:38.858   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true
00:22:38.858   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:22:38.858  Cannot find device "nvmf_init_br2"
00:22:38.858   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true
00:22:38.858   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:22:38.858  Cannot find device "nvmf_tgt_br"
00:22:38.858   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true
00:22:38.858   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:22:38.858  Cannot find device "nvmf_tgt_br2"
00:22:38.858   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true
00:22:38.858   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:22:38.858  Cannot find device "nvmf_init_br"
00:22:38.858   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true
00:22:38.858   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:22:38.858  Cannot find device "nvmf_init_br2"
00:22:38.858   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true
00:22:38.858   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:22:38.858  Cannot find device "nvmf_tgt_br"
00:22:38.858   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true
00:22:38.858   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:22:38.858  Cannot find device "nvmf_tgt_br2"
00:22:38.858   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true
00:22:38.858   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:22:38.858  Cannot find device "nvmf_br"
00:22:38.858   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true
00:22:38.858   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:22:38.858  Cannot find device "nvmf_init_if"
00:22:38.858   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true
00:22:38.858   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:22:38.858  Cannot find device "nvmf_init_if2"
00:22:38.858   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true
00:22:38.858   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:22:38.858  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:22:38.858   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true
00:22:38.858   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:22:38.858  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:22:39.118   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true
00:22:39.118   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:22:39.118   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:22:39.118   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:22:39.118   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:22:39.118   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:22:39.118   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:22:39.118   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:22:39.118   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:22:39.118   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:22:39.118   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:22:39.118   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:22:39.118   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:22:39.118   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:22:39.118   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:22:39.118   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:22:39.118   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:22:39.118   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:22:39.118   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:22:39.118   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:22:39.118   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:22:39.118   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:22:39.118   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:22:39.118   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:22:39.118   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:22:39.118   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:22:39.118   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:22:39.118   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:22:39.118   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:22:39.118   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:22:39.118   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:22:39.118   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:22:39.118   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
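nvmf_veth_init builds a self-contained topology for the test: a network namespace for the target, veth pairs for the initiator side (10.0.0.1/10.0.0.2) and the target side (10.0.0.3/10.0.0.4), all joined through the nvmf_br bridge, plus iptables ACCEPT rules for port 4420. Condensed to one initiator/target pair (device names and addresses are taken from the trace; the second pair is set up the same way):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The four pings that follow verify the plumbing in both directions before any NVMe traffic is attempted.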
00:22:39.118   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:22:39.118  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:22:39.118  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms
00:22:39.118  
00:22:39.118  --- 10.0.0.3 ping statistics ---
00:22:39.118  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:39.118  rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms
00:22:39.118   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:22:39.118  PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:22:39.118  64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms
00:22:39.118  
00:22:39.118  --- 10.0.0.4 ping statistics ---
00:22:39.118  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:39.118  rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms
00:22:39.118   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:22:39.118  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:22:39.118  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms
00:22:39.118  
00:22:39.118  --- 10.0.0.1 ping statistics ---
00:22:39.118  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:39.118  rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms
00:22:39.118   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:22:39.118  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:22:39.118  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms
00:22:39.118  
00:22:39.118  --- 10.0.0.2 ping statistics ---
00:22:39.118  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:39.118  rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms
00:22:39.118   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:22:39.118   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@461 -- # return 0
00:22:39.118   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:22:39.118   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:22:39.118   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:22:39.118   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:22:39.118   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:22:39.118   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:22:39.118   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp
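The kernel nvme-tcp module is loaded as part of the shared TCP setup; this particular test drives I/O from userspace bdevperf, but other tests in the suite use the kernel initiator ('nvme connect', configured earlier as NVME_CONNECT), so the module is loaded unconditionally. A trivial presence check:

    modprobe nvme-tcp
    lsmod | grep -q '^nvme_tcp' && echo 'nvme-tcp loaded'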
00:22:39.118   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2
00:22:39.118   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:22:39.118   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable
00:22:39.118   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x
00:22:39.118  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:22:39.118   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=103674
00:22:39.118   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 103674
00:22:39.118   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:22:39.118   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 103674 ']'
00:22:39.118   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:39.118   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:39.118   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:22:39.118   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable
00:22:39.118   19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x
00:22:39.377  [2024-12-13 19:07:10.989817] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:22:39.377  [2024-12-13 19:07:10.989880] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:22:39.377  [2024-12-13 19:07:11.140855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:39.377  [2024-12-13 19:07:11.178967] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:22:39.377  [2024-12-13 19:07:11.179036] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:22:39.377  [2024-12-13 19:07:11.179051] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:22:39.377  [2024-12-13 19:07:11.179062] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:22:39.377  [2024-12-13 19:07:11.179071] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:22:39.377  [2024-12-13 19:07:11.179535] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:22:39.636   19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:22:39.636   19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0
00:22:39.636   19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:22:39.636   19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable
00:22:39.636   19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x
00:22:39.636   19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
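nvmfappstart launches nvmf_tgt inside the namespace and then blocks until the RPC socket answers (the "Waiting for process to start up..." line). A hedged equivalent of that start-and-wait step; the readiness probe below uses rpc_get_methods, which may not be the exact call waitforlisten issues:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Poll the default RPC socket (/var/tmp/spdk.sock) until the target is ready.
    until "$rpc" -t 1 rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early"; exit 1; }
        sleep 0.5
    done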
00:22:39.636   19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT
00:22:39.636   19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:
00:22:39.636    19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX
00:22:39.636   19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.F4n
00:22:39.636   19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:
00:22:39.636   19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.F4n
00:22:39.636   19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.F4n
00:22:39.636   19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.F4n
00:22:39.636   19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:22:39.895  [2024-12-13 19:07:11.657263] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:22:39.895  [2024-12-13 19:07:11.673210] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:22:39.895  [2024-12-13 19:07:11.673513] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:22:40.154  malloc0
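setup_nvmf_tgt_conf configures the target over rpc.py; the notices above show the outcome: a TCP transport, a TLS-capable listener on 10.0.0.3:4420, and a malloc bdev (malloc0) backing the namespace. The individual RPCs are elided from the trace; a plausible sequence of that shape is sketched below. The --secure-channel and --psk spellings are assumptions from memory of recent SPDK releases, not confirmed by this log:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" nvmf_create_transport -t tcp -o          # NVMF_TRANSPORT_OPTS from the trace
    "$rpc" bdev_malloc_create 32 4096 -b malloc0
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0
    # TLS listener and per-host PSK; these two flags are the assumed spelling.
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.3 -s 4420 --secure-channel
    "$rpc" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk /tmp/spdk-psk.F4n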
00:22:40.154   19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:22:40.154   19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=103716
00:22:40.154   19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:22:40.154   19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 103716 /var/tmp/bdevperf.sock
00:22:40.154   19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 103716 ']'
00:22:40.154   19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:22:40.154   19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:40.154  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:22:40.154   19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:22:40.154   19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable
00:22:40.154   19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x
00:22:40.154  [2024-12-13 19:07:11.832683] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:22:40.154  [2024-12-13 19:07:11.832782] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103716 ]
00:22:40.412  [2024-12-13 19:07:11.988400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:40.412  [2024-12-13 19:07:12.027101] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:22:40.412   19:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:22:40.412   19:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0
00:22:40.412   19:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.F4n
00:22:40.671   19:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
00:22:40.930  [2024-12-13 19:07:12.724269] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:22:41.189  TLSTESTn1
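On the initiator side the flow is: write the interchange-format PSK (NVMeTLSkey-1:01:...) to a mode-0600 file, register it with bdevperf's keyring over its private RPC socket, then attach the controller with --psk referring to that key; the resulting bdev is the TLSTESTn1 shown above. Reassembled from the traced commands:

    key_path=$(mktemp -t spdk-psk.XXX)
    echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$key_path"
    chmod 0600 "$key_path"
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$key_path"
    "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0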
00:22:41.189   19:07:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:22:41.189  Running I/O for 10 seconds...
00:22:43.501       4668.00 IOPS,    18.23 MiB/s
[2024-12-13T19:07:16.262Z]      4692.00 IOPS,    18.33 MiB/s
[2024-12-13T19:07:17.198Z]      4750.00 IOPS,    18.55 MiB/s
[2024-12-13T19:07:18.135Z]      4797.75 IOPS,    18.74 MiB/s
[2024-12-13T19:07:19.071Z]      4825.00 IOPS,    18.85 MiB/s
[2024-12-13T19:07:20.007Z]      4821.83 IOPS,    18.84 MiB/s
[2024-12-13T19:07:20.943Z]      4823.71 IOPS,    18.84 MiB/s
[2024-12-13T19:07:22.320Z]      4818.75 IOPS,    18.82 MiB/s
[2024-12-13T19:07:23.257Z]      4834.67 IOPS,    18.89 MiB/s
[2024-12-13T19:07:23.257Z]      4842.10 IOPS,    18.91 MiB/s
00:22:51.433                                                                                                  Latency(us)
00:22:51.433  
[2024-12-13T19:07:23.257Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:22:51.433  Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:22:51.433  	 Verification LBA range: start 0x0 length 0x2000
00:22:51.433  	 TLSTESTn1           :      10.01    4848.16      18.94       0.00     0.00   26357.19    5093.93   21090.68
00:22:51.433  
[2024-12-13T19:07:23.257Z]  ===================================================================================================================
00:22:51.433  
[2024-12-13T19:07:23.257Z]  Total                       :               4848.16      18.94       0.00     0.00   26357.19    5093.93   21090.68
00:22:51.433  {
00:22:51.433    "results": [
00:22:51.433      {
00:22:51.433        "job": "TLSTESTn1",
00:22:51.433        "core_mask": "0x4",
00:22:51.433        "workload": "verify",
00:22:51.433        "status": "finished",
00:22:51.433        "verify_range": {
00:22:51.433          "start": 0,
00:22:51.433          "length": 8192
00:22:51.433        },
00:22:51.433        "queue_depth": 128,
00:22:51.433        "io_size": 4096,
00:22:51.433        "runtime": 10.013695,
00:22:51.433        "iops": 4848.16044427157,
00:22:51.433        "mibps": 18.93812673543582,
00:22:51.433        "io_failed": 0,
00:22:51.433        "io_timeout": 0,
00:22:51.433        "avg_latency_us": 26357.19134547252,
00:22:51.433        "min_latency_us": 5093.9345454545455,
00:22:51.433        "max_latency_us": 21090.676363636365
00:22:51.433      }
00:22:51.433    ],
00:22:51.433    "core_count": 1
00:22:51.433  }
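perform_tests prints both the human-readable table and the JSON document above; the JSON is the easier one to post-process. For example, if the JSON were captured to results.json, the steady-state numbers could be pulled out with jq:

    jq -r '.results[] | "\(.job): \(.iops | floor) IOPS, avg latency \(.avg_latency_us | floor) us"' results.json
    # -> TLSTESTn1: 4848 IOPS, avg latency 26357 us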
00:22:51.433   19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup
00:22:51.433   19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0
00:22:51.433   19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id
00:22:51.433   19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0
00:22:51.433   19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']'
00:22:51.433    19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:22:51.433   19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0
00:22:51.433   19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]]
00:22:51.433   19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files
00:22:51.433   19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:22:51.433  nvmf_trace.0
00:22:51.433   19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0
00:22:51.433   19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 103716
00:22:51.433   19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 103716 ']'
00:22:51.433   19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 103716
00:22:51.433    19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname
00:22:51.433   19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:51.433    19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 103716
00:22:51.433  killing process with pid 103716
00:22:51.433  Received shutdown signal, test time was about 10.000000 seconds
00:22:51.433  
00:22:51.433                                                                                                  Latency(us)
00:22:51.433  
[2024-12-13T19:07:23.257Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:22:51.433  
[2024-12-13T19:07:23.257Z]  ===================================================================================================================
00:22:51.433  
[2024-12-13T19:07:23.257Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:22:51.433   19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:22:51.433   19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:22:51.433   19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 103716'
00:22:51.433   19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 103716
00:22:51.433   19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 103716
00:22:51.692   19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini
00:22:51.692   19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup
00:22:51.692   19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync
00:22:51.692   19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:22:51.692   19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e
00:22:51.692   19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20}
00:22:51.692   19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:22:51.692  rmmod nvme_tcp
00:22:51.692  rmmod nvme_fabrics
00:22:51.692  rmmod nvme_keyring
00:22:51.692   19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:22:51.692   19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e
00:22:51.692   19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0
00:22:51.692   19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 103674 ']'
00:22:51.692   19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 103674
00:22:51.692   19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 103674 ']'
00:22:51.692   19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 103674
00:22:51.692    19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname
00:22:51.692   19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:51.692    19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 103674
00:22:51.692  killing process with pid 103674
00:22:51.692   19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:22:51.692   19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:22:51.692   19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 103674'
00:22:51.692   19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 103674
00:22:51.692   19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 103674
00:22:51.951   19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:22:51.951   19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:22:51.951   19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:22:51.951   19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr
00:22:51.951   19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:22:51.951   19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save
00:22:51.951   19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore
00:22:51.951   19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:22:51.951   19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:22:51.951   19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:22:51.951   19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:22:51.951   19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:22:51.951   19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:22:51.951   19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:22:51.951   19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:22:51.951   19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:22:51.951   19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:22:51.951   19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:22:51.951   19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:22:51.952   19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:22:52.211   19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:22:52.211   19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:22:52.211   19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns
00:22:52.211   19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:52.211   19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:22:52.211    19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:52.211   19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0
00:22:52.211   19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.F4n
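Teardown mirrors the setup: stop bdevperf and nvmf_tgt, unload the nvme modules, drop the SPDK_NVMF iptables rules, dismantle the veth/bridge topology, remove the namespace, and delete the PSK file. Condensed (the trace tolerates per-device failures, and _remove_spdk_ns presumably performs the final namespace removal shown here as ip netns delete):

    iptables-save | grep -v SPDK_NVMF | iptables-restore
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster
        ip link set "$dev" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk
    rm -f /tmp/spdk-psk.F4n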
00:22:52.211  
00:22:52.211  real	0m13.700s
00:22:52.211  user	0m18.803s
00:22:52.211  sys	0m5.725s
00:22:52.211  ************************************
00:22:52.211  END TEST nvmf_fips
00:22:52.211  ************************************
00:22:52.211   19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable
00:22:52.211   19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x
00:22:52.211   19:07:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp
00:22:52.211   19:07:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:22:52.211   19:07:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:22:52.211   19:07:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:22:52.211  ************************************
00:22:52.211  START TEST nvmf_control_msg_list
00:22:52.211  ************************************
00:22:52.211   19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp
00:22:52.211  * Looking for test storage...
00:22:52.211  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:22:52.211    19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:22:52.211     19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version
00:22:52.211     19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:22:52.471    19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:22:52.471    19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:22:52.471    19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l
00:22:52.471    19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l
00:22:52.471    19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-:
00:22:52.471    19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1
00:22:52.471    19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-:
00:22:52.471    19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2
00:22:52.471    19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<'
00:22:52.471    19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2
00:22:52.471    19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1
00:22:52.471    19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:22:52.471    19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in
00:22:52.471    19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1
00:22:52.471    19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 ))
00:22:52.471    19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:22:52.471     19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1
00:22:52.471     19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1
00:22:52.471     19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:22:52.471     19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1
00:22:52.471    19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1
00:22:52.471     19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2
00:22:52.471     19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2
00:22:52.471     19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:22:52.471     19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2
00:22:52.471    19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2
00:22:52.471    19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:22:52.471    19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:22:52.471    19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0
00:22:52.471    19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:22:52.471    19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:22:52.471  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:52.471  		--rc genhtml_branch_coverage=1
00:22:52.471  		--rc genhtml_function_coverage=1
00:22:52.471  		--rc genhtml_legend=1
00:22:52.471  		--rc geninfo_all_blocks=1
00:22:52.471  		--rc geninfo_unexecuted_blocks=1
00:22:52.471  		
00:22:52.471  		'
00:22:52.471    19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:22:52.471  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:52.471  		--rc genhtml_branch_coverage=1
00:22:52.471  		--rc genhtml_function_coverage=1
00:22:52.471  		--rc genhtml_legend=1
00:22:52.471  		--rc geninfo_all_blocks=1
00:22:52.471  		--rc geninfo_unexecuted_blocks=1
00:22:52.471  		
00:22:52.471  		'
00:22:52.471    19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:22:52.471  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:52.471  		--rc genhtml_branch_coverage=1
00:22:52.471  		--rc genhtml_function_coverage=1
00:22:52.471  		--rc genhtml_legend=1
00:22:52.471  		--rc geninfo_all_blocks=1
00:22:52.471  		--rc geninfo_unexecuted_blocks=1
00:22:52.471  		
00:22:52.471  		'
00:22:52.471    19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:22:52.471  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:52.471  		--rc genhtml_branch_coverage=1
00:22:52.471  		--rc genhtml_function_coverage=1
00:22:52.471  		--rc genhtml_legend=1
00:22:52.471  		--rc geninfo_all_blocks=1
00:22:52.471  		--rc geninfo_unexecuted_blocks=1
00:22:52.471  		
00:22:52.471  		'
00:22:52.471   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:22:52.471     19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s
00:22:52.471    19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:22:52.471    19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:22:52.471    19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:22:52.471    19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:22:52.471    19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:22:52.471    19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:22:52.471    19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:22:52.471    19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:22:52.471    19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:22:52.471     19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:22:52.471    19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:22:52.471    19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:22:52.471    19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:22:52.471    19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:22:52.471    19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:22:52.471    19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:22:52.471    19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:22:52.471     19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob
00:22:52.471     19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:22:52.471     19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:22:52.471     19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:22:52.472      19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:52.472      19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:52.472      19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:52.472      19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH
00:22:52.472      19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:52.472    19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0
00:22:52.472    19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:22:52.472    19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:22:52.472    19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:22:52.472    19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:22:52.472    19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:22:52.472    19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:22:52.472  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:22:52.472    19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:22:52.472    19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:22:52.472    19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0
00:22:52.472   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit
00:22:52.472   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:22:52.472   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:22:52.472   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs
00:22:52.472   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no
00:22:52.472   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns
00:22:52.472   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:52.472   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:22:52.472    19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:52.472   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:22:52.472   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:22:52.472   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:22:52.472   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:22:52.472   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:22:52.472   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@460 -- # nvmf_veth_init
00:22:52.472   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:22:52.472   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:22:52.472   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:22:52.472   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:22:52.472   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:22:52.472   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:22:52.472   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:22:52.472   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:22:52.472   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:22:52.472   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:22:52.472   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:22:52.472   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:22:52.472   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:22:52.472   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:22:52.472   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:22:52.472   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:22:52.472   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:22:52.472  Cannot find device "nvmf_init_br"
00:22:52.472   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true
00:22:52.472   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:22:52.472  Cannot find device "nvmf_init_br2"
00:22:52.472   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true
00:22:52.472   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:22:52.472  Cannot find device "nvmf_tgt_br"
00:22:52.472   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true
00:22:52.472   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:22:52.472  Cannot find device "nvmf_tgt_br2"
00:22:52.472   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true
00:22:52.472   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:22:52.472  Cannot find device "nvmf_init_br"
00:22:52.472   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true
00:22:52.472   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:22:52.472  Cannot find device "nvmf_init_br2"
00:22:52.472   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true
00:22:52.472   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:22:52.472  Cannot find device "nvmf_tgt_br"
00:22:52.472   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true
00:22:52.472   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:22:52.472  Cannot find device "nvmf_tgt_br2"
00:22:52.472   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true
00:22:52.472   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:22:52.472  Cannot find device "nvmf_br"
00:22:52.472   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true
00:22:52.472   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:22:52.472  Cannot find device "nvmf_init_if"
00:22:52.472   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true
00:22:52.472   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:22:52.472  Cannot find device "nvmf_init_if2"
00:22:52.472   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true
00:22:52.472   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:22:52.472  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:22:52.472   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true
00:22:52.472   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:22:52.732  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:22:52.732   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true
00:22:52.732   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:22:52.732   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:22:52.732   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:22:52.732   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:22:52.732   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:22:52.732   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:22:52.732   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:22:52.732   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:22:52.732   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:22:52.732   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:22:52.732   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:22:52.732   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:22:52.732   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:22:52.732   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:22:52.732   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:22:52.732   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:22:52.732   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:22:52.732   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:22:52.732   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:22:52.732   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:22:52.732   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:22:52.732   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:22:52.732   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:22:52.732   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:22:52.732   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:22:52.732   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:22:52.732   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:22:52.732   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:22:52.732   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:22:52.732   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:22:52.732   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:22:52.732   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:22:52.732   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:22:52.732  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:22:52.732  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms
00:22:52.732  
00:22:52.732  --- 10.0.0.3 ping statistics ---
00:22:52.732  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:52.732  rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms
00:22:52.732   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:22:52.732  PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:22:52.732  64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.083 ms
00:22:52.732  
00:22:52.732  --- 10.0.0.4 ping statistics ---
00:22:52.732  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:52.732  rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms
00:22:52.732   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:22:52.732  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:22:52.732  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms
00:22:52.732  
00:22:52.732  --- 10.0.0.1 ping statistics ---
00:22:52.732  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:52.732  rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms
00:22:52.732   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:22:52.732  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:22:52.732  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms
00:22:52.732  
00:22:52.732  --- 10.0.0.2 ping statistics ---
00:22:52.732  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:52.732  rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms
00:22:52.732   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:22:52.732   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@461 -- # return 0
00:22:52.732   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:22:52.732   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:22:52.732   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:22:52.732   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:22:52.732   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:22:52.732   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:22:52.732   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:22:52.732   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart
00:22:52.732   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:22:52.732   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable
00:22:52.732   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:22:52.732   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=104120
00:22:52.732   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:22:52.732   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 104120
00:22:52.732   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 104120 ']'
00:22:52.732   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:52.732   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:52.732  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:22:52.732   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:22:52.732   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable
00:22:52.732   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:22:52.991  [2024-12-13 19:07:24.606670] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:22:52.991  [2024-12-13 19:07:24.606771] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:22:52.991  [2024-12-13 19:07:24.757841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:52.991  [2024-12-13 19:07:24.794926] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:22:52.991  [2024-12-13 19:07:24.794991] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:22:52.991  [2024-12-13 19:07:24.795006] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:22:52.991  [2024-12-13 19:07:24.795017] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:22:52.991  [2024-12-13 19:07:24.795026] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:22:52.991  [2024-12-13 19:07:24.795516] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:22:53.251   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:22:53.251   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0
00:22:53.251   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:22:53.251   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable
00:22:53.251   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:22:53.251   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:22:53.251   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0
00:22:53.251   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
00:22:53.251   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1
00:22:53.251   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:53.251   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:22:53.251  [2024-12-13 19:07:24.980317] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:22:53.251   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:53.251   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
00:22:53.251   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:53.251   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:22:53.251   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:53.251   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512
00:22:53.251   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:53.251   19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:22:53.251  Malloc0
00:22:53.251   19:07:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:53.251   19:07:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
00:22:53.251   19:07:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:53.251   19:07:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:22:53.251   19:07:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:53.251   19:07:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
00:22:53.251   19:07:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:53.251   19:07:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:22:53.251  [2024-12-13 19:07:25.019473] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:22:53.251   19:07:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:53.251   19:07:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=104152
00:22:53.251   19:07:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
00:22:53.251   19:07:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=104153
00:22:53.251   19:07:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
00:22:53.251   19:07:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=104154
00:22:53.251   19:07:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
00:22:53.251   19:07:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 104152
00:22:53.508  [2024-12-13 19:07:25.197716] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem.  This behavior is deprecated and will be removed in a future release.
00:22:53.508  [2024-12-13 19:07:25.207897] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem.  This behavior is deprecated and will be removed in a future release.
00:22:53.508  [2024-12-13 19:07:25.218132] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem.  This behavior is deprecated and will be removed in a future release.
00:22:54.443  Initializing NVMe Controllers
00:22:54.443  Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0
00:22:54.443  Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2
00:22:54.443  Initialization complete. Launching workers.
00:22:54.443  ========================================================
00:22:54.443                                                                                                               Latency(us)
00:22:54.443  Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:22:54.443  TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core  2:    3806.00      14.87     262.44     118.58     526.53
00:22:54.443  ========================================================
00:22:54.443  Total                                                                    :    3806.00      14.87     262.44     118.58     526.53
00:22:54.443  
00:22:54.443  Initializing NVMe Controllers
00:22:54.443  Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0
00:22:54.443  Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1
00:22:54.443  Initialization complete. Launching workers.
00:22:54.443  ========================================================
00:22:54.443                                                                                                               Latency(us)
00:22:54.443  Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:22:54.443  TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core  1:    3765.00      14.71     265.19     175.77     532.07
00:22:54.443  ========================================================
00:22:54.443  Total                                                                    :    3765.00      14.71     265.19     175.77     532.07
00:22:54.443  
00:22:54.443  Initializing NVMe Controllers
00:22:54.443  Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0
00:22:54.443  Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3
00:22:54.443  Initialization complete. Launching workers.
00:22:54.443  ========================================================
00:22:54.443                                                                                                               Latency(us)
00:22:54.443  Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:22:54.443  TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core  3:    3792.97      14.82     263.31     103.65     438.31
00:22:54.443  ========================================================
00:22:54.443  Total                                                                    :    3792.97      14.82     263.31     103.65     438.31
00:22:54.443  
00:22:54.443   19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 104153
00:22:54.443   19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 104154
00:22:54.443   19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:22:54.443   19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini
00:22:54.443   19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup
00:22:54.443   19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync
00:22:54.702   19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:22:54.702   19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e
00:22:54.702   19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20}
00:22:54.702   19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:22:54.702  rmmod nvme_tcp
00:22:54.702  rmmod nvme_fabrics
00:22:54.702  rmmod nvme_keyring
00:22:54.702   19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:22:54.702   19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e
00:22:54.702   19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0
00:22:54.702   19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 104120 ']'
00:22:54.702   19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 104120
00:22:54.702   19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 104120 ']'
00:22:54.702   19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 104120
00:22:54.702    19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname
00:22:54.702   19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:54.702    19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104120
00:22:54.702   19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:22:54.702   19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:22:54.702  killing process with pid 104120
00:22:54.702   19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104120'
00:22:54.702   19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 104120
00:22:54.702   19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 104120
00:22:54.961   19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:22:54.961   19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:22:54.961   19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:22:54.961   19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr
00:22:54.961   19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save
00:22:54.961   19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:22:54.961   19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore
00:22:54.961   19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:22:54.961   19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:22:54.961   19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:22:54.961   19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:22:54.961   19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:22:54.961   19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:22:54.961   19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:22:54.961   19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:22:54.961   19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:22:54.961   19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:22:54.961   19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:22:54.961   19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:22:54.961   19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:22:54.961   19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:22:54.961   19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:22:55.230   19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns
00:22:55.230   19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:55.230   19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:22:55.230    19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:55.230   19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0
00:22:55.230  
00:22:55.230  real	0m2.900s
00:22:55.230  user	0m4.678s
00:22:55.230  sys	0m1.407s
00:22:55.230   19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable
00:22:55.230   19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:22:55.230  ************************************
00:22:55.230  END TEST nvmf_control_msg_list
00:22:55.230  ************************************
00:22:55.230   19:07:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp
00:22:55.230   19:07:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:22:55.230   19:07:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:22:55.230   19:07:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:22:55.230  ************************************
00:22:55.230  START TEST nvmf_wait_for_buf
00:22:55.230  ************************************
00:22:55.230   19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp
00:22:55.230  * Looking for test storage...
00:22:55.230  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:22:55.230    19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:22:55.231     19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version
00:22:55.231     19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:22:55.231    19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:22:55.231    19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:22:55.231    19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l
00:22:55.231    19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l
00:22:55.231    19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-:
00:22:55.231    19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1
00:22:55.231    19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-:
00:22:55.231    19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2
00:22:55.231    19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<'
00:22:55.503    19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2
00:22:55.503    19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1
00:22:55.503    19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:22:55.503    19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in
00:22:55.503    19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1
00:22:55.503    19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 ))
00:22:55.503    19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:22:55.503     19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1
00:22:55.503     19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1
00:22:55.503     19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:22:55.503     19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1
00:22:55.503    19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1
00:22:55.503     19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2
00:22:55.503     19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2
00:22:55.503     19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:22:55.503     19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2
00:22:55.503    19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2
00:22:55.503    19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:22:55.503    19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:22:55.503    19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0
00:22:55.503    19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:22:55.503    19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:22:55.503  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:55.503  		--rc genhtml_branch_coverage=1
00:22:55.503  		--rc genhtml_function_coverage=1
00:22:55.503  		--rc genhtml_legend=1
00:22:55.503  		--rc geninfo_all_blocks=1
00:22:55.503  		--rc geninfo_unexecuted_blocks=1
00:22:55.503  		
00:22:55.503  		'
00:22:55.503    19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:22:55.503  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:55.503  		--rc genhtml_branch_coverage=1
00:22:55.503  		--rc genhtml_function_coverage=1
00:22:55.503  		--rc genhtml_legend=1
00:22:55.503  		--rc geninfo_all_blocks=1
00:22:55.503  		--rc geninfo_unexecuted_blocks=1
00:22:55.503  		
00:22:55.503  		'
00:22:55.503    19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:22:55.503  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:55.503  		--rc genhtml_branch_coverage=1
00:22:55.503  		--rc genhtml_function_coverage=1
00:22:55.503  		--rc genhtml_legend=1
00:22:55.503  		--rc geninfo_all_blocks=1
00:22:55.503  		--rc geninfo_unexecuted_blocks=1
00:22:55.503  		
00:22:55.503  		'
00:22:55.503    19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:22:55.503  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:55.503  		--rc genhtml_branch_coverage=1
00:22:55.503  		--rc genhtml_function_coverage=1
00:22:55.503  		--rc genhtml_legend=1
00:22:55.503  		--rc geninfo_all_blocks=1
00:22:55.503  		--rc geninfo_unexecuted_blocks=1
00:22:55.503  		
00:22:55.503  		'
00:22:55.503   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:22:55.503     19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s
00:22:55.503    19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:22:55.503    19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:22:55.503    19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:22:55.503    19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:22:55.503    19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:22:55.503    19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:22:55.503    19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:22:55.503    19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:22:55.503    19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:22:55.503     19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:22:55.503    19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:22:55.503    19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:22:55.503    19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:22:55.503    19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:22:55.503    19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:22:55.503    19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:22:55.503    19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:22:55.503     19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob
00:22:55.503     19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:22:55.503     19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:22:55.503     19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:22:55.503      19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:55.503      19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:55.503      19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:55.503      19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH
00:22:55.504      19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:55.504    19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0
00:22:55.504    19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:22:55.504    19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:22:55.504    19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:22:55.504    19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:22:55.504    19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:22:55.504    19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:22:55.504  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:22:55.504    19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:22:55.504    19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:22:55.504    19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0
00:22:55.504   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit
00:22:55.504   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:22:55.504   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:22:55.504   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs
00:22:55.504   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no
00:22:55.504   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns
00:22:55.504   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:55.504   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:22:55.504    19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:55.504   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:22:55.504   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:22:55.504   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:22:55.504   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:22:55.504   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:22:55.504   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@460 -- # nvmf_veth_init
00:22:55.504   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:22:55.504   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:22:55.504   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:22:55.504   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:22:55.504   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:22:55.504   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:22:55.504   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:22:55.504   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:22:55.504   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:22:55.504   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:22:55.504   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:22:55.504   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:22:55.504   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:22:55.504   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:22:55.504   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:22:55.504   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:22:55.504   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:22:55.504  Cannot find device "nvmf_init_br"
00:22:55.504   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true
00:22:55.504   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:22:55.504  Cannot find device "nvmf_init_br2"
00:22:55.504   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true
00:22:55.504   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:22:55.504  Cannot find device "nvmf_tgt_br"
00:22:55.504   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true
00:22:55.504   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:22:55.504  Cannot find device "nvmf_tgt_br2"
00:22:55.504   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true
00:22:55.504   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:22:55.504  Cannot find device "nvmf_init_br"
00:22:55.504   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true
00:22:55.504   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:22:55.504  Cannot find device "nvmf_init_br2"
00:22:55.504   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true
00:22:55.504   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:22:55.504  Cannot find device "nvmf_tgt_br"
00:22:55.504   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true
00:22:55.504   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:22:55.504  Cannot find device "nvmf_tgt_br2"
00:22:55.504   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true
00:22:55.504   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:22:55.504  Cannot find device "nvmf_br"
00:22:55.504   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true
00:22:55.504   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:22:55.504  Cannot find device "nvmf_init_if"
00:22:55.504   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true
00:22:55.504   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:22:55.504  Cannot find device "nvmf_init_if2"
00:22:55.504   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true
00:22:55.504   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:22:55.504  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:22:55.504   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true
00:22:55.504   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:22:55.504  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:22:55.504   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true
00:22:55.504   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:22:55.504   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:22:55.504   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:22:55.504   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:22:55.504   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:22:55.504   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:22:55.504   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:22:55.504   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:22:55.504   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:22:55.504   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:22:55.764   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:22:55.764   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:22:55.764   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:22:55.764   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:22:55.764   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:22:55.764   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:22:55.764   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:22:55.764   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:22:55.764   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:22:55.764   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:22:55.764   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:22:55.764   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:22:55.764   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:22:55.764   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:22:55.764   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:22:55.764   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:22:55.764   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:22:55.764   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:22:55.764   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:22:55.764   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:22:55.764   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:22:55.764   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:22:55.764   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:22:55.764  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:22:55.764  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms
00:22:55.764  
00:22:55.764  --- 10.0.0.3 ping statistics ---
00:22:55.764  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:55.764  rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms
00:22:55.764   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:22:55.764  PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:22:55.764  64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms
00:22:55.764  
00:22:55.764  --- 10.0.0.4 ping statistics ---
00:22:55.764  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:55.764  rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms
00:22:55.764   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:22:55.764  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:22:55.764  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms
00:22:55.764  
00:22:55.764  --- 10.0.0.1 ping statistics ---
00:22:55.764  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:55.764  rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms
00:22:55.764   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:22:55.764  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:22:55.764  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms
00:22:55.764  
00:22:55.764  --- 10.0.0.2 ping statistics ---
00:22:55.764  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:55.764  rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms
00:22:55.764   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:22:55.764   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@461 -- # return 0
00:22:55.764   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:22:55.764   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:22:55.764   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:22:55.764   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:22:55.764   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:22:55.764   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:22:55.764   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:22:55.764   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc
00:22:55.764   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:22:55.764   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable
00:22:55.764   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:22:55.764   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=104394
00:22:55.764   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 104394
00:22:55.764   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
00:22:55.764   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 104394 ']'
00:22:55.764   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:55.764   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:55.764   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:22:55.765  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:22:55.765   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable
00:22:55.765   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:22:56.023  [2024-12-13 19:07:27.591955] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:22:56.023  [2024-12-13 19:07:27.592600] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:22:56.023  [2024-12-13 19:07:27.741820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:56.023  [2024-12-13 19:07:27.773670] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:22:56.023  [2024-12-13 19:07:27.773740] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:22:56.023  [2024-12-13 19:07:27.773766] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:22:56.023  [2024-12-13 19:07:27.773773] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:22:56.023  [2024-12-13 19:07:27.773780] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:22:56.023  [2024-12-13 19:07:27.774142] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:22:56.282   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:22:56.282   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0
00:22:56.282   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:22:56.282   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable
00:22:56.282   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:22:56.282   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:22:56.282   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0
00:22:56.282   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
00:22:56.282   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0
00:22:56.282   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:56.282   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:22:56.282   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:56.282   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192
00:22:56.282   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:56.282   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:22:56.282   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:56.282   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init
00:22:56.282   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:56.282   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:22:56.282   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:56.282   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512
00:22:56.282   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:56.282   19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:22:56.282  Malloc0
00:22:56.282   19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:56.282   19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24
00:22:56.282   19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:56.282   19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:22:56.282  [2024-12-13 19:07:28.013187] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:22:56.282   19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:56.282   19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
00:22:56.282   19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:56.282   19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:22:56.282   19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:56.282   19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
00:22:56.282   19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:56.282   19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:22:56.282   19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:56.282   19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
00:22:56.282   19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:56.282   19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:22:56.282  [2024-12-13 19:07:28.037325] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:22:56.282   19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:56.282   19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
00:22:56.541  [2024-12-13 19:07:28.230376] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem.  This behavior is deprecated and will be removed in a future release.
00:22:57.918  Initializing NVMe Controllers
00:22:57.918  Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0
00:22:57.918  Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0
00:22:57.918  Initialization complete. Launching workers.
00:22:57.918  ========================================================
00:22:57.918                                                                                                               Latency(us)
00:22:57.918  Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:22:57.918  TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core  0:     129.00      16.12   32294.98    8022.25   61649.99
00:22:57.918  ========================================================
00:22:57.918  Total                                                                    :     129.00      16.12   32294.98    8022.25   61649.99
00:22:57.918  
00:22:57.918    19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats
00:22:57.918    19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry'
00:22:57.918    19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:57.918    19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:22:57.918    19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:57.918   19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038
00:22:57.918   19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]]
00:22:57.918   19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:22:57.918   19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini
00:22:57.918   19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup
00:22:57.918   19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync
00:22:57.918   19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:22:57.918   19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e
00:22:57.918   19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20}
00:22:57.918   19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:22:57.918  rmmod nvme_tcp
00:22:57.918  rmmod nvme_fabrics
00:22:57.918  rmmod nvme_keyring
00:22:57.918   19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:22:57.918   19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e
00:22:57.918   19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0
00:22:57.918   19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 104394 ']'
00:22:57.918   19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 104394
00:22:57.918   19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 104394 ']'
00:22:57.918   19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 104394
00:22:57.918    19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname
00:22:58.177   19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:58.177    19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104394
00:22:58.177   19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:22:58.177   19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:22:58.177  killing process with pid 104394
00:22:58.177   19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104394'
00:22:58.177   19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 104394
00:22:58.177   19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 104394
00:22:58.177   19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:22:58.178   19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:22:58.178   19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:22:58.178   19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr
00:22:58.178   19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save
00:22:58.178   19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:22:58.178   19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore
00:22:58.178   19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:22:58.178   19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:22:58.178   19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:22:58.178   19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:22:58.178   19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:22:58.178   19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:22:58.437   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:22:58.437   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:22:58.437   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:22:58.437   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:22:58.437   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:22:58.437   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:22:58.437   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:22:58.437   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:22:58.437   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:22:58.437   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns
00:22:58.437   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:58.437   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:22:58.437    19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:58.437   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0
00:22:58.437  
00:22:58.437  real	0m3.297s
00:22:58.437  user	0m2.663s
00:22:58.437  sys	0m0.792s
00:22:58.437   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:22:58.437  ************************************
00:22:58.437  END TEST nvmf_wait_for_buf
00:22:58.437  ************************************
00:22:58.437   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:22:58.437   19:07:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']'
00:22:58.437   19:07:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp
00:22:58.437   19:07:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:22:58.437   19:07:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:22:58.437   19:07:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:22:58.437  ************************************
00:22:58.437  START TEST nvmf_fuzz
00:22:58.437  ************************************
00:22:58.437   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp
00:22:58.696  * Looking for test storage...
00:22:58.696  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:22:58.696    19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:22:58.696     19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # lcov --version
00:22:58.696     19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:22:58.696    19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:22:58.696    19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:22:58.696    19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l
00:22:58.696    19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l
00:22:58.696    19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-:
00:22:58.696    19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1
00:22:58.696    19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-:
00:22:58.696    19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2
00:22:58.696    19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<'
00:22:58.696    19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2
00:22:58.696    19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1
00:22:58.696    19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:22:58.696    19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in
00:22:58.696    19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1
00:22:58.696    19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 ))
00:22:58.696    19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:22:58.696     19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1
00:22:58.696     19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1
00:22:58.696     19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:22:58.696     19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1
00:22:58.696    19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1
00:22:58.696     19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2
00:22:58.696     19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2
00:22:58.696     19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:22:58.696     19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2
00:22:58.696    19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2
00:22:58.696    19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:22:58.696    19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:22:58.696    19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0
00:22:58.696    19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:22:58.696    19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:22:58.696  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:58.696  		--rc genhtml_branch_coverage=1
00:22:58.696  		--rc genhtml_function_coverage=1
00:22:58.696  		--rc genhtml_legend=1
00:22:58.696  		--rc geninfo_all_blocks=1
00:22:58.696  		--rc geninfo_unexecuted_blocks=1
00:22:58.696  		
00:22:58.696  		'
00:22:58.696    19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:22:58.696  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:58.696  		--rc genhtml_branch_coverage=1
00:22:58.696  		--rc genhtml_function_coverage=1
00:22:58.696  		--rc genhtml_legend=1
00:22:58.696  		--rc geninfo_all_blocks=1
00:22:58.696  		--rc geninfo_unexecuted_blocks=1
00:22:58.696  		
00:22:58.696  		'
00:22:58.696    19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:22:58.696  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:58.696  		--rc genhtml_branch_coverage=1
00:22:58.696  		--rc genhtml_function_coverage=1
00:22:58.696  		--rc genhtml_legend=1
00:22:58.696  		--rc geninfo_all_blocks=1
00:22:58.696  		--rc geninfo_unexecuted_blocks=1
00:22:58.696  		
00:22:58.696  		'
00:22:58.696    19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:22:58.696  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:58.696  		--rc genhtml_branch_coverage=1
00:22:58.696  		--rc genhtml_function_coverage=1
00:22:58.696  		--rc genhtml_legend=1
00:22:58.696  		--rc geninfo_all_blocks=1
00:22:58.696  		--rc geninfo_unexecuted_blocks=1
00:22:58.696  		
00:22:58.696  		'
00:22:58.696   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:22:58.696     19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s
00:22:58.696    19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:22:58.696    19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:22:58.696    19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:22:58.696    19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:22:58.696    19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:22:58.696    19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:22:58.697    19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:22:58.697    19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:22:58.697    19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:22:58.697     19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:22:58.697    19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:22:58.697    19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:22:58.697    19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:22:58.697    19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:22:58.697    19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:22:58.697    19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:22:58.697    19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:22:58.697     19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob
00:22:58.697     19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:22:58.697     19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:22:58.697     19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:22:58.697      19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:58.697      19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:58.697      19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:58.697      19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH
00:22:58.697      19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:58.697    19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0
00:22:58.697    19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:22:58.697    19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:22:58.697    19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:22:58.697    19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:22:58.697    19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:22:58.697    19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:22:58.697  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:22:58.697    19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:22:58.697    19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:22:58.697    19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0
00:22:58.697   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit
00:22:58.697   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:22:58.697   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:22:58.697   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # prepare_net_devs
00:22:58.697   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # local -g is_hw=no
00:22:58.697   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # remove_spdk_ns
00:22:58.697   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:58.697   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:22:58.697    19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:58.697   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:22:58.697   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:22:58.697   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:22:58.697   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:22:58.697   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:22:58.697   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@460 -- # nvmf_veth_init
00:22:58.697   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:22:58.697   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:22:58.697   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:22:58.697   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:22:58.697   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:22:58.697   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:22:58.697   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:22:58.697   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:22:58.697   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:22:58.697   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:22:58.697   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:22:58.697   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:22:58.697   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:22:58.697   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:22:58.697   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:22:58.697   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:22:58.697   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:22:58.697  Cannot find device "nvmf_init_br"
00:22:58.697   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@162 -- # true
00:22:58.697   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:22:58.697  Cannot find device "nvmf_init_br2"
00:22:58.697   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@163 -- # true
00:22:58.697   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:22:58.697  Cannot find device "nvmf_tgt_br"
00:22:58.697   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@164 -- # true
00:22:58.697   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:22:58.697  Cannot find device "nvmf_tgt_br2"
00:22:58.697   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@165 -- # true
00:22:58.697   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:22:58.697  Cannot find device "nvmf_init_br"
00:22:58.697   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@166 -- # true
00:22:58.697   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:22:58.956  Cannot find device "nvmf_init_br2"
00:22:58.956   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@167 -- # true
00:22:58.956   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:22:58.956  Cannot find device "nvmf_tgt_br"
00:22:58.956   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@168 -- # true
00:22:58.956   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:22:58.956  Cannot find device "nvmf_tgt_br2"
00:22:58.956   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@169 -- # true
00:22:58.956   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:22:58.956  Cannot find device "nvmf_br"
00:22:58.956   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@170 -- # true
00:22:58.956   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:22:58.956  Cannot find device "nvmf_init_if"
00:22:58.956   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@171 -- # true
00:22:58.956   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:22:58.956  Cannot find device "nvmf_init_if2"
00:22:58.956   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@172 -- # true
00:22:58.956   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:22:58.956  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:22:58.956   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@173 -- # true
00:22:58.956   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:22:58.956  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:22:58.957   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@174 -- # true
00:22:58.957   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:22:58.957   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:22:58.957   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:22:58.957   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:22:58.957   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:22:58.957   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:22:58.957   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:22:58.957   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:22:58.957   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:22:58.957   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:22:58.957   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:22:58.957   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:22:58.957   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:22:58.957   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:22:58.957   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:22:58.957   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:22:58.957   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:22:58.957   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:22:58.957   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:22:58.957   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:22:58.957   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:22:58.957   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:22:58.957   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:22:58.957   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:22:58.957   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:22:58.957   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:22:58.957   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:22:58.957   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:22:58.957   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:22:58.957   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:22:59.216   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:22:59.216   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:22:59.216   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:22:59.216  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:22:59.216  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms
00:22:59.216  
00:22:59.216  --- 10.0.0.3 ping statistics ---
00:22:59.216  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:59.216  rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms
00:22:59.216   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:22:59.216  PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:22:59.216  64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms
00:22:59.216  
00:22:59.216  --- 10.0.0.4 ping statistics ---
00:22:59.216  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:59.216  rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms
00:22:59.216   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:22:59.216  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:22:59.216  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms
00:22:59.216  
00:22:59.216  --- 10.0.0.1 ping statistics ---
00:22:59.216  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:59.216  rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms
00:22:59.216   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:22:59.216  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:22:59.216  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms
00:22:59.216  
00:22:59.216  --- 10.0.0.2 ping statistics ---
00:22:59.216  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:59.216  rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms
00:22:59.216   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:22:59.216   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@461 -- # return 0
00:22:59.216   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:22:59.216   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:22:59.216   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:22:59.216   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:22:59.216   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:22:59.216   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:22:59.216   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@502 -- # modprobe nvme-tcp
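The nvmf_veth_init steps traced above build the whole test topology: two initiator veth pairs on the host (10.0.0.1 and 10.0.0.2), two target veth pairs whose far ends sit in the nvmf_tgt_ns_spdk namespace (10.0.0.3 and 10.0.0.4), all joined through the nvmf_br bridge, with iptables ACCEPT rules for NVMe/TCP port 4420 and a round of pings to confirm reachability. A minimal standalone sketch of the same layout, condensed from the trace (assumes root and iproute2; the retry and error handling in nvmf/common.sh is omitted):

ip netns add nvmf_tgt_ns_spdk

# initiator-side veth pairs (the host keeps the *_if ends)
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
# target-side veth pairs; their *_if ends move into the namespace
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# bring the links up and bridge the *_br ends together
for l in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" master nvmf_br; done

# accept NVMe/TCP (port 4420) from the initiator interfaces and forward across the bridge
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# same reachability checks as in the trace
ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2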
00:22:59.216   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=104667
00:22:59.216   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
00:22:59.216   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT
00:22:59.216   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 104667
00:22:59.216   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # '[' -z 104667 ']'
00:22:59.216   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:59.216   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:59.216   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:22:59.216  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:22:59.216   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable
00:22:59.216   19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x
00:22:59.475   19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:22:59.475   19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@868 -- # return 0
00:22:59.475   19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:22:59.475   19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:59.475   19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x
00:22:59.475   19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:59.475   19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512
00:22:59.475   19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:59.475   19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x
00:22:59.475  Malloc0
00:22:59.475   19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:59.475   19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:22:59.475   19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:59.475   19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x
00:22:59.475   19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:59.475   19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:22:59.475   19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:59.475   19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x
00:22:59.475   19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:59.475   19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:22:59.475   19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:59.475   19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x
00:22:59.475   19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:59.475   19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420'
00:22:59.475   19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' -N -a
00:22:59.734  Shutting down the fuzz application
00:22:59.734   19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a
00:22:59.992  Shutting down the fuzz application
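With the network in place, fabrics_fuzz.sh starts the target inside the namespace, wires a single Malloc-backed subsystem to a TCP listener and runs nvme_fuzz against it twice: 30 seconds of seeded random commands, then a replay of example.json. A hedged summary of that flow using the commands visible in the trace (rpc_cmd is shown here as the scripts/rpc.py client it wraps, which is an assumption; paths are relative to the SPDK repo):

# target runs in the test namespace; its pid is recorded as nvmfpid (104667 in this run)
ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420'
# pass 1: 30 s of random commands with a fixed seed; pass 2: the canned JSON cases
test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F "$trid" -N -a
test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F "$trid" -j test/app/fuzz/nvme_fuzz/example.json -a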
00:22:59.992   19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:22:59.992   19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:59.992   19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x
00:23:00.251   19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:00.251   19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT
00:23:00.251   19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini
00:23:00.251   19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # nvmfcleanup
00:23:00.251   19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync
00:23:00.251   19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:23:00.251   19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e
00:23:00.251   19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20}
00:23:00.251   19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:23:00.251  rmmod nvme_tcp
00:23:00.251  rmmod nvme_fabrics
00:23:00.251  rmmod nvme_keyring
00:23:00.251   19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:23:00.251   19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e
00:23:00.251   19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0
00:23:00.251   19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@517 -- # '[' -n 104667 ']'
00:23:00.251   19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # killprocess 104667
00:23:00.251   19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # '[' -z 104667 ']'
00:23:00.251   19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # kill -0 104667
00:23:00.251    19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # uname
00:23:00.251   19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:23:00.251    19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104667
00:23:00.251  killing process with pid 104667
00:23:00.251   19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:23:00.251   19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:23:00.251   19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104667'
00:23:00.251   19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@973 -- # kill 104667
00:23:00.251   19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@978 -- # wait 104667
00:23:00.510   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:23:00.510   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:23:00.510   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:23:00.510   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr
00:23:00.510   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-save
00:23:00.510   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:23:00.510   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-restore
00:23:00.510   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:23:00.510   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:23:00.510   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:23:00.510   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:23:00.510   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:23:00.510   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:23:00.510   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:23:00.510   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:23:00.510   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:23:00.510   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:23:00.510   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:23:00.510   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:23:00.511   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:23:00.769   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:23:00.769   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:23:00.769   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@246 -- # remove_spdk_ns
00:23:00.769   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:00.769   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:23:00.769    19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:00.769   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@300 -- # return 0
00:23:00.769   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt
00:23:00.769  ************************************
00:23:00.769  END TEST nvmf_fuzz
00:23:00.769  ************************************
00:23:00.769  
00:23:00.769  real	0m2.218s
00:23:00.769  user	0m1.842s
00:23:00.769  sys	0m0.736s
00:23:00.770   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable
00:23:00.770   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x
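After the fuzz runs, nvmftestfini unwinds everything in reverse: the nvme-tcp and nvme-fabrics modules are unloaded, the target process is killed, the iptables rules tagged SPDK_NVMF are filtered back out, and nvmf_veth_fini removes the bridge, veth pairs and namespace. Roughly, following the trace (the explicit ip netns delete stands in for the remove_spdk_ns helper and is an assumption):

modprobe -v -r nvme-tcp        # also drops nvme_fabrics / nvme_keyring, as the rmmod lines show
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"

# strip only the rules carrying the SPDK_NVMF comment added during setup
iptables-save | grep -v SPDK_NVMF | iptables-restore

for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$l" nomaster
    ip link set "$l" down
done
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
ip netns delete nvmf_tgt_ns_spdk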
00:23:00.770   19:07:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp
00:23:00.770   19:07:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:23:00.770   19:07:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:23:00.770   19:07:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:23:00.770  ************************************
00:23:00.770  START TEST nvmf_multiconnection
00:23:00.770  ************************************
00:23:00.770   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp
00:23:00.770  * Looking for test storage...
00:23:00.770  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:23:00.770    19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:23:00.770     19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # lcov --version
00:23:00.770     19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:23:01.029    19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:23:01.030    19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:23:01.030    19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l
00:23:01.030    19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l
00:23:01.030    19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-:
00:23:01.030    19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1
00:23:01.030    19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-:
00:23:01.030    19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2
00:23:01.030    19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<'
00:23:01.030    19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2
00:23:01.030    19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1
00:23:01.030    19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:23:01.030    19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in
00:23:01.030    19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1
00:23:01.030    19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 ))
00:23:01.030    19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:23:01.030     19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1
00:23:01.030     19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1
00:23:01.030     19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:23:01.030     19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1
00:23:01.030    19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1
00:23:01.030     19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2
00:23:01.030     19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2
00:23:01.030     19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:23:01.030     19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2
00:23:01.030    19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2
00:23:01.030    19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:23:01.030    19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:23:01.030    19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0
00:23:01.030    19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:23:01.030    19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:23:01.030  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:01.030  		--rc genhtml_branch_coverage=1
00:23:01.030  		--rc genhtml_function_coverage=1
00:23:01.030  		--rc genhtml_legend=1
00:23:01.030  		--rc geninfo_all_blocks=1
00:23:01.030  		--rc geninfo_unexecuted_blocks=1
00:23:01.030  		
00:23:01.030  		'
00:23:01.030    19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:23:01.030  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:01.030  		--rc genhtml_branch_coverage=1
00:23:01.030  		--rc genhtml_function_coverage=1
00:23:01.030  		--rc genhtml_legend=1
00:23:01.030  		--rc geninfo_all_blocks=1
00:23:01.030  		--rc geninfo_unexecuted_blocks=1
00:23:01.030  		
00:23:01.030  		'
00:23:01.030    19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:23:01.030  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:01.030  		--rc genhtml_branch_coverage=1
00:23:01.030  		--rc genhtml_function_coverage=1
00:23:01.030  		--rc genhtml_legend=1
00:23:01.030  		--rc geninfo_all_blocks=1
00:23:01.030  		--rc geninfo_unexecuted_blocks=1
00:23:01.030  		
00:23:01.030  		'
00:23:01.030    19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:23:01.030  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:01.030  		--rc genhtml_branch_coverage=1
00:23:01.030  		--rc genhtml_function_coverage=1
00:23:01.030  		--rc genhtml_legend=1
00:23:01.030  		--rc geninfo_all_blocks=1
00:23:01.030  		--rc geninfo_unexecuted_blocks=1
00:23:01.030  		
00:23:01.030  		'
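The multiconnection test first probes the installed lcov: the traced lt/cmp_versions helpers split both version strings on '.', '-' and ':' and compare them field by field, and the lt 1.15 2 call returning true selects the --rc lcov_branch_coverage / --rc lcov_function_coverage form of the options exported above. A simplified re-implementation of that check (not the exact scripts/common.sh code, just the comparison it performs):

# field-wise "less than" over version strings split on '.', '-' and ':'
lt() {
    local -a ver1 ver2
    local v
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1    # equal versions are not "less than"
}

lt 1.15 2 && echo "enable lcov_branch_coverage / lcov_function_coverage"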
00:23:01.030   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:23:01.030     19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s
00:23:01.030    19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:23:01.030    19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:23:01.030    19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:23:01.030    19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:23:01.030    19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:23:01.030    19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:23:01.030    19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:23:01.030    19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:23:01.030    19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:23:01.030     19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:23:01.030    19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:23:01.030    19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:23:01.030    19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:23:01.030    19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:23:01.030    19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:23:01.030    19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:23:01.030    19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:23:01.030     19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob
00:23:01.030     19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:23:01.030     19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:23:01.030     19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:23:01.030      19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:01.030      19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:01.030      19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:01.030      19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH
00:23:01.030      19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:01.030    19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0
00:23:01.030    19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:23:01.030    19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:23:01.030    19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:23:01.030    19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:23:01.030    19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:23:01.030    19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:23:01.030  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:23:01.030    19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:23:01.030    19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:23:01.030    19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0
00:23:01.030   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64
00:23:01.030   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:23:01.030   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11
00:23:01.030   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit
00:23:01.031   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:23:01.031   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:23:01.031   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # prepare_net_devs
00:23:01.031   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # local -g is_hw=no
00:23:01.031   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # remove_spdk_ns
00:23:01.031   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:01.031   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:23:01.031    19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:01.031   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:23:01.031   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:23:01.031   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:23:01.031   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:23:01.031   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:23:01.031   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@460 -- # nvmf_veth_init
00:23:01.031   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:23:01.031   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:23:01.031   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:23:01.031   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:23:01.031   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:23:01.031   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:23:01.031   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:23:01.031   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:23:01.031   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:23:01.031   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:23:01.031   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:23:01.031   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:23:01.031   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:23:01.031   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:23:01.031   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:23:01.031   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:23:01.031   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:23:01.031  Cannot find device "nvmf_init_br"
00:23:01.031   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@162 -- # true
00:23:01.031   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:23:01.031  Cannot find device "nvmf_init_br2"
00:23:01.031   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@163 -- # true
00:23:01.031   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:23:01.031  Cannot find device "nvmf_tgt_br"
00:23:01.031   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@164 -- # true
00:23:01.031   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:23:01.031  Cannot find device "nvmf_tgt_br2"
00:23:01.031   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@165 -- # true
00:23:01.031   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:23:01.031  Cannot find device "nvmf_init_br"
00:23:01.031   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@166 -- # true
00:23:01.031   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:23:01.031  Cannot find device "nvmf_init_br2"
00:23:01.031   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@167 -- # true
00:23:01.031   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:23:01.031  Cannot find device "nvmf_tgt_br"
00:23:01.031   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@168 -- # true
00:23:01.031   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:23:01.031  Cannot find device "nvmf_tgt_br2"
00:23:01.031   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@169 -- # true
00:23:01.031   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:23:01.031  Cannot find device "nvmf_br"
00:23:01.031   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@170 -- # true
00:23:01.031   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:23:01.031  Cannot find device "nvmf_init_if"
00:23:01.031   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@171 -- # true
00:23:01.031   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:23:01.031  Cannot find device "nvmf_init_if2"
00:23:01.031   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@172 -- # true
00:23:01.031   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:23:01.031  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:23:01.031   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@173 -- # true
00:23:01.031   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:23:01.031  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:23:01.031   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@174 -- # true
00:23:01.031   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:23:01.290   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:23:01.290   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:23:01.290   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:23:01.290   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:23:01.290   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:23:01.290   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:23:01.290   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:23:01.290   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:23:01.290   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:23:01.290   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:23:01.290   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:23:01.290   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:23:01.290   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:23:01.290   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:23:01.290   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:23:01.290   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:23:01.290   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:23:01.290   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:23:01.290   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:23:01.290   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:23:01.290   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:23:01.290   19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:23:01.290   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:23:01.290   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:23:01.290   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:23:01.290   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:23:01.290   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:23:01.290   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:23:01.290   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:23:01.290   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:23:01.290   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:23:01.290   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:23:01.290  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:23:01.290  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms
00:23:01.290  
00:23:01.290  --- 10.0.0.3 ping statistics ---
00:23:01.290  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:01.290  rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms
00:23:01.290   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:23:01.291  PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:23:01.291  64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms
00:23:01.291  
00:23:01.291  --- 10.0.0.4 ping statistics ---
00:23:01.291  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:01.291  rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms
00:23:01.291   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:23:01.291  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:23:01.291  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms
00:23:01.291  
00:23:01.291  --- 10.0.0.1 ping statistics ---
00:23:01.291  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:01.291  rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms
00:23:01.291   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:23:01.291  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:23:01.291  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms
00:23:01.291  
00:23:01.291  --- 10.0.0.2 ping statistics ---
00:23:01.291  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:01.291  rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms
00:23:01.291   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:23:01.291   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@461 -- # return 0
00:23:01.291   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:23:01.291   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:23:01.291   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:23:01.291   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:23:01.291   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:23:01.291   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:23:01.291   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:23:01.555   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF
00:23:01.555   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:23:01.555   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable
00:23:01.555   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:01.555  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:23:01.555   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # nvmfpid=104909
00:23:01.555   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:23:01.555   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # waitforlisten 104909
00:23:01.555   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # '[' -z 104909 ']'
00:23:01.555   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:01.555   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # local max_retries=100
00:23:01.555   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:23:01.555   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@844 -- # xtrace_disable
00:23:01.555   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:01.555  [2024-12-13 19:07:33.185116] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:23:01.555  [2024-12-13 19:07:33.186002] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:23:01.555  [2024-12-13 19:07:33.336434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:23:01.555  [2024-12-13 19:07:33.373384] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:23:01.555  [2024-12-13 19:07:33.373761] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:23:01.555  [2024-12-13 19:07:33.373901] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:23:01.555  [2024-12-13 19:07:33.373952] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:23:01.555  [2024-12-13 19:07:33.373980] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:23:01.555  [2024-12-13 19:07:33.375307] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:23:01.555  [2024-12-13 19:07:33.375367] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:23:01.555  [2024-12-13 19:07:33.375503] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:23:01.819  [2024-12-13 19:07:33.375507] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:23:01.820   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:23:01.820   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@868 -- # return 0
00:23:01.820   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:23:01.820   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@732 -- # xtrace_disable
00:23:01.820   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:01.820   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:23:01.820   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:23:01.820   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:01.820   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:01.820  [2024-12-13 19:07:33.559410] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:23:01.820   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
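With the TCP transport created, multiconnection.sh iterates over NVMF_SUBSYS=11 subsystems; each pass of the loop traced below creates a 64 MB malloc bdev with 512-byte blocks, an allow-any-host subsystem cnode$i with serial SPDK$i, attaches the namespace and adds a TCP listener on 10.0.0.3:4420. Condensed, again substituting scripts/rpc.py for the rpc_cmd wrapper (an assumption):

NVMF_SUBSYS=11
for i in $(seq 1 "$NVMF_SUBSYS"); do
    scripts/rpc.py bdev_malloc_create 64 512 -b "Malloc$i"
    scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.3 -s 4420
done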
00:23:01.820    19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11
00:23:01.820   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:23:01.820   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:23:01.820   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:01.820   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:01.820  Malloc1
00:23:01.820   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:01.820   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
00:23:01.820   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:01.820   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:01.820   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:01.820   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:23:01.820   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:01.820   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:01.820   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:01.820   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:23:01.820   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:01.820   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:01.820  [2024-12-13 19:07:33.641222] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:23:02.079   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:02.079   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:23:02.079   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2
00:23:02.079   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:02.079   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:02.079  Malloc2
00:23:02.079   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:02.079   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
00:23:02.079   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:02.079   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:02.079   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:02.079   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2
00:23:02.079   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:02.079   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:02.079   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:02.079   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420
00:23:02.079   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:02.079   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:02.079   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:02.079   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:23:02.079   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3
00:23:02.079   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:02.079   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:02.079  Malloc3
00:23:02.079   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:02.079   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3
00:23:02.079   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:02.079   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:02.079   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:02.079   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3
00:23:02.079   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:02.079   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:02.079   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:02.079   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420
00:23:02.079   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:02.079   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:02.079   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:02.079   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:23:02.079   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4
00:23:02.079   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:02.079   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:02.079  Malloc4
00:23:02.079   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:02.079   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4
00:23:02.079   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:02.079   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:02.079   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:02.079   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4
00:23:02.079   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:02.079   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:02.079   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:02.079   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.3 -s 4420
00:23:02.079   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:02.079   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:02.079   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:02.079   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:23:02.079   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5
00:23:02.079   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:02.079   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:02.079  Malloc5
00:23:02.079   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:02.079   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5
00:23:02.080   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:02.080   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:02.080   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:02.080   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5
00:23:02.080   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:02.080   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:02.080   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:02.080   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.3 -s 4420
00:23:02.080   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:02.080   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:02.080   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:02.080   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:23:02.080   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6
00:23:02.080   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:02.080   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:02.080  Malloc6
00:23:02.080   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:02.080   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6
00:23:02.080   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:02.080   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:02.080   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:02.080   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6
00:23:02.080   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:02.080   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:02.339   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:02.339   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.3 -s 4420
00:23:02.339   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:02.339   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:02.339   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:02.339   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:23:02.339   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7
00:23:02.339   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:02.339   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:02.339  Malloc7
00:23:02.340   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:02.340   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7
00:23:02.340   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:02.340   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:02.340   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:02.340   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7
00:23:02.340   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:02.340   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:02.340   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:02.340   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.3 -s 4420
00:23:02.340   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:02.340   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:02.340   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:02.340   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:23:02.340   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8
00:23:02.340   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:02.340   19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:02.340  Malloc8
00:23:02.340   19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:02.340   19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8
00:23:02.340   19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:02.340   19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:02.340   19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:02.340   19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8
00:23:02.340   19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:02.340   19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:02.340   19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:02.340   19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.3 -s 4420
00:23:02.340   19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:02.340   19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:02.340   19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:02.340   19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:23:02.340   19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9
00:23:02.340   19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:02.340   19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:02.340  Malloc9
00:23:02.340   19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:02.340   19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9
00:23:02.340   19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:02.340   19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:02.340   19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:02.340   19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9
00:23:02.340   19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:02.340   19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:02.340   19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:02.340   19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.3 -s 4420
00:23:02.340   19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:02.340   19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:02.340   19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:02.340   19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:23:02.340   19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10
00:23:02.340   19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:02.340   19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:02.340  Malloc10
00:23:02.340   19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:02.340   19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10
00:23:02.340   19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:02.340   19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:02.340   19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:02.340   19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10
00:23:02.340   19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:02.340   19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:02.340   19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:02.340   19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.3 -s 4420
00:23:02.340   19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:02.340   19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:02.340   19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:02.340   19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:23:02.340   19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11
00:23:02.340   19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:02.340   19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:02.340  Malloc11
00:23:02.340   19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:02.340   19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11
00:23:02.340   19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:02.340   19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:02.599   19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:02.599   19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11
00:23:02.599   19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:02.599   19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:02.599   19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:02.599   19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.3 -s 4420
00:23:02.599   19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:02.599   19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:02.599   19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
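The xtrace above is the per-subsystem setup loop from target/multiconnection.sh (script lines 21-25 as echoed in the trace). A minimal sketch of that loop, assuming NVMF_SUBSYS=11 and the 10.0.0.3:4420 TCP listener seen in the log, and assuming rpc_cmd is the SPDK test helper that forwards to scripts/rpc.py:

    for i in $(seq 1 "$NVMF_SUBSYS"); do
        rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i"                                  # 64 MB malloc bdev, 512-byte blocks
        rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"       # allow any host, serial SPDK$i
        rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"           # expose the bdev as a namespace
        rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.3 -s 4420
    done

Each iteration yields one malloc-backed namespace exported by its own subsystem on the same TCP listener, which is what the eleven cnode1..cnode11 blocks above show.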
00:23:02.599    19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11
00:23:02.599   19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:23:02.599   19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid=8f716199-a3ae-4f70-9a3a-0556e5b7497a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
00:23:02.599   19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1
00:23:02.599   19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0
00:23:02.599   19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:23:02.599   19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:23:02.599   19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2
00:23:05.130   19:07:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:23:05.130    19:07:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:23:05.130    19:07:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK1
00:23:05.130   19:07:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:23:05.130   19:07:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:23:05.130   19:07:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0
00:23:05.130   19:07:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:23:05.130   19:07:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid=8f716199-a3ae-4f70-9a3a-0556e5b7497a -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.3 -s 4420
00:23:05.130   19:07:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2
00:23:05.130   19:07:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0
00:23:05.130   19:07:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:23:05.130   19:07:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:23:05.130   19:07:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2
00:23:07.032   19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:23:07.032    19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:23:07.032    19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK2
00:23:07.032   19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:23:07.032   19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:23:07.032   19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0
00:23:07.032   19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:23:07.032   19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid=8f716199-a3ae-4f70-9a3a-0556e5b7497a -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.3 -s 4420
00:23:07.032   19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3
00:23:07.032   19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0
00:23:07.032   19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:23:07.032   19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:23:07.032   19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2
00:23:09.567   19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:23:09.567    19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:23:09.567    19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK3
00:23:09.567   19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:23:09.567   19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:23:09.567   19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0
00:23:09.567   19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:23:09.567   19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid=8f716199-a3ae-4f70-9a3a-0556e5b7497a -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.3 -s 4420
00:23:09.567   19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4
00:23:09.567   19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0
00:23:09.567   19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:23:09.567   19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:23:09.567   19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2
00:23:11.471   19:07:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:23:11.471    19:07:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:23:11.471    19:07:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK4
00:23:11.471   19:07:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:23:11.471   19:07:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:23:11.471   19:07:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0
00:23:11.471   19:07:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:23:11.471   19:07:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid=8f716199-a3ae-4f70-9a3a-0556e5b7497a -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.3 -s 4420
00:23:11.471   19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5
00:23:11.471   19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0
00:23:11.471   19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:23:11.471   19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:23:11.471   19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2
00:23:13.375   19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:23:13.375    19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:23:13.375    19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK5
00:23:13.633   19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:23:13.633   19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:23:13.633   19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0
00:23:13.633   19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:23:13.633   19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid=8f716199-a3ae-4f70-9a3a-0556e5b7497a -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.3 -s 4420
00:23:13.633   19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6
00:23:13.633   19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0
00:23:13.633   19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:23:13.633   19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:23:13.633   19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2
00:23:16.167   19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:23:16.167    19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:23:16.167    19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK6
00:23:16.167   19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:23:16.167   19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:23:16.167   19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0
00:23:16.167   19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:23:16.167   19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid=8f716199-a3ae-4f70-9a3a-0556e5b7497a -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.3 -s 4420
00:23:16.167   19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7
00:23:16.167   19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0
00:23:16.167   19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:23:16.167   19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:23:16.167   19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2
00:23:18.071   19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:23:18.071    19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:23:18.071    19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK7
00:23:18.071   19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:23:18.071   19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:23:18.071   19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0
00:23:18.071   19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:23:18.071   19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid=8f716199-a3ae-4f70-9a3a-0556e5b7497a -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.3 -s 4420
00:23:18.071   19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8
00:23:18.071   19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0
00:23:18.071   19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:23:18.071   19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:23:18.071   19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2
00:23:19.976   19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:23:20.235    19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:23:20.235    19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK8
00:23:20.235   19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:23:20.235   19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:23:20.235   19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0
00:23:20.235   19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:23:20.235   19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid=8f716199-a3ae-4f70-9a3a-0556e5b7497a -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.3 -s 4420
00:23:20.235   19:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9
00:23:20.235   19:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0
00:23:20.235   19:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:23:20.235   19:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:23:20.235   19:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2
00:23:22.775   19:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:23:22.775    19:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:23:22.775    19:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK9
00:23:22.775   19:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:23:22.775   19:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:23:22.775   19:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0
00:23:22.775   19:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:23:22.775   19:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid=8f716199-a3ae-4f70-9a3a-0556e5b7497a -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.3 -s 4420
00:23:22.775   19:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10
00:23:22.775   19:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0
00:23:22.775   19:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:23:22.775   19:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:23:22.775   19:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2
00:23:24.775   19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:23:24.775    19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:23:24.775    19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK10
00:23:24.775   19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:23:24.775   19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:23:24.775   19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0
00:23:24.775   19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:23:24.775   19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid=8f716199-a3ae-4f70-9a3a-0556e5b7497a -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.3 -s 4420
00:23:24.775   19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11
00:23:24.775   19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0
00:23:24.775   19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:23:24.775   19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:23:24.775   19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2
00:23:26.680   19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:23:26.680    19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:23:26.680    19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK11
00:23:26.680   19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:23:26.680   19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:23:26.680   19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0
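The block above is the host-side connect loop (multiconnection.sh lines 28-30) followed by the waitforserial helper from autotest_common.sh (lines 1202-1212 as echoed). A sketch reconstructed from the order of commands in the trace, not the verbatim helpers; the hostnqn/hostid UUID is the one printed above:

    for i in $(seq 1 "$NVMF_SUBSYS"); do
        nvme connect --hostnqn="nqn.2014-08.org.nvmexpress:uuid:$host_uuid" --hostid="$host_uuid" \
            -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.3 -s 4420
        waitforserial "SPDK$i"
    done

    waitforserial() {                                  # poll until a block device with the expected serial appears
        local i=0
        local nvme_device_counter=1 nvme_devices=0
        [[ -n ${2:-} ]] && nvme_device_counter=$2      # optional expected-device count (unused in this run)
        sleep 2
        while ((i++ <= 15)); do
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$1")
            ((nvme_devices == nvme_device_counter)) && return 0
            sleep 2
        done
        return 1
    }

In this run every connect is satisfied after a single 2-second poll (nvme_devices=1 on the first lsblk check), so all eleven namespaces are attached before the I/O phase starts.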
00:23:26.680   19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10
00:23:26.680  [global]
00:23:26.680  thread=1
00:23:26.680  invalidate=1
00:23:26.680  rw=read
00:23:26.680  time_based=1
00:23:26.680  runtime=10
00:23:26.680  ioengine=libaio
00:23:26.680  direct=1
00:23:26.680  bs=262144
00:23:26.680  iodepth=64
00:23:26.680  norandommap=1
00:23:26.680  numjobs=1
00:23:26.680  
00:23:26.680  [job0]
00:23:26.680  filename=/dev/nvme0n1
00:23:26.680  [job1]
00:23:26.680  filename=/dev/nvme10n1
00:23:26.680  [job2]
00:23:26.680  filename=/dev/nvme1n1
00:23:26.939  [job3]
00:23:26.939  filename=/dev/nvme2n1
00:23:26.939  [job4]
00:23:26.939  filename=/dev/nvme3n1
00:23:26.939  [job5]
00:23:26.939  filename=/dev/nvme4n1
00:23:26.939  [job6]
00:23:26.939  filename=/dev/nvme5n1
00:23:26.939  [job7]
00:23:26.939  filename=/dev/nvme6n1
00:23:26.939  [job8]
00:23:26.939  filename=/dev/nvme7n1
00:23:26.939  [job9]
00:23:26.939  filename=/dev/nvme8n1
00:23:26.939  [job10]
00:23:26.939  filename=/dev/nvme9n1
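The job file printed above is generated by scripts/fio-wrapper from the arguments at the start of this step, and its options map onto them directly: -i 262144 becomes bs=262144 (256 KiB requests), -d 64 becomes iodepth=64, -t read becomes rw=read, and -r 10 becomes time_based=1 with runtime=10, plus one [jobN] section per connected NVMe-oF namespace (/dev/nvme0n1 through /dev/nvme10n1, eleven in total). Assuming the same content were saved to a file, e.g. multiconnection.fio (hypothetical name), it could be replayed directly with:

    fio multiconnection.fio

The "Could not set queue depth" lines that follow appear to be fio failing to adjust the devices' block-layer queue settings; the run proceeds regardless, with all eleven jobs starting and reporting results below.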
00:23:26.939  Could not set queue depth (nvme0n1)
00:23:26.939  Could not set queue depth (nvme10n1)
00:23:26.939  Could not set queue depth (nvme1n1)
00:23:26.939  Could not set queue depth (nvme2n1)
00:23:26.939  Could not set queue depth (nvme3n1)
00:23:26.939  Could not set queue depth (nvme4n1)
00:23:26.939  Could not set queue depth (nvme5n1)
00:23:26.939  Could not set queue depth (nvme6n1)
00:23:26.939  Could not set queue depth (nvme7n1)
00:23:26.939  Could not set queue depth (nvme8n1)
00:23:26.939  Could not set queue depth (nvme9n1)
00:23:27.199  job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:23:27.199  job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:23:27.199  job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:23:27.199  job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:23:27.199  job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:23:27.199  job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:23:27.199  job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:23:27.199  job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:23:27.199  job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:23:27.199  job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:23:27.199  job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:23:27.199  fio-3.35
00:23:27.199  Starting 11 threads
00:23:39.413  
00:23:39.413  job0: (groupid=0, jobs=1): err= 0: pid=105366: Fri Dec 13 19:08:09 2024
00:23:39.413    read: IOPS=345, BW=86.4MiB/s (90.6MB/s)(885MiB/10232msec)
00:23:39.413      slat (usec): min=13, max=599744, avg=2721.06, stdev=22819.19
00:23:39.413      clat (usec): min=1600, max=1531.8k, avg=181892.38, stdev=286674.58
00:23:39.413       lat (usec): min=1676, max=1531.8k, avg=184613.44, stdev=290820.95
00:23:39.413      clat percentiles (msec):
00:23:39.413       |  1.00th=[    3],  5.00th=[   23], 10.00th=[   56], 20.00th=[   62],
00:23:39.413       | 30.00th=[   65], 40.00th=[   67], 50.00th=[   69], 60.00th=[   70],
00:23:39.413       | 70.00th=[   73], 80.00th=[   85], 90.00th=[  625], 95.00th=[  885],
00:23:39.413       | 99.00th=[ 1284], 99.50th=[ 1435], 99.90th=[ 1536], 99.95th=[ 1536],
00:23:39.413       | 99.99th=[ 1536]
00:23:39.413     bw (  KiB/s): min=10752, max=254464, per=12.57%, avg=88896.55, stdev=100882.45, samples=20
00:23:39.413     iops        : min=   42, max=  994, avg=347.10, stdev=394.15, samples=20
00:23:39.413    lat (msec)   : 2=0.06%, 4=3.96%, 10=0.25%, 20=0.37%, 50=4.15%
00:23:39.413    lat (msec)   : 100=71.82%, 250=1.10%, 500=5.91%, 750=4.10%, 1000=4.58%
00:23:39.413    lat (msec)   : 2000=3.70%
00:23:39.413    cpu          : usr=0.24%, sys=1.69%, ctx=1226, majf=0, minf=4097
00:23:39.413    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2%
00:23:39.413       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:39.413       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:23:39.413       issued rwts: total=3538,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:23:39.413       latency   : target=0, window=0, percentile=100.00%, depth=64
00:23:39.413  job1: (groupid=0, jobs=1): err= 0: pid=105367: Fri Dec 13 19:08:09 2024
00:23:39.413    read: IOPS=95, BW=23.8MiB/s (25.0MB/s)(244MiB/10234msec)
00:23:39.413      slat (usec): min=20, max=500231, avg=9892.10, stdev=42915.41
00:23:39.413      clat (msec): min=44, max=1112, avg=659.38, stdev=254.13
00:23:39.413       lat (msec): min=46, max=1401, avg=669.27, stdev=260.83
00:23:39.413      clat percentiles (msec):
00:23:39.413       |  1.00th=[   47],  5.00th=[   86], 10.00th=[  108], 20.00th=[  550],
00:23:39.413       | 30.00th=[  634], 40.00th=[  676], 50.00th=[  701], 60.00th=[  760],
00:23:39.413       | 70.00th=[  802], 80.00th=[  852], 90.00th=[  919], 95.00th=[  978],
00:23:39.413       | 99.00th=[ 1062], 99.50th=[ 1116], 99.90th=[ 1116], 99.95th=[ 1116],
00:23:39.413       | 99.99th=[ 1116]
00:23:39.413     bw (  KiB/s): min= 3072, max=54163, per=3.30%, avg=23328.40, stdev=11410.48, samples=20
00:23:39.413     iops        : min=   12, max=  211, avg=90.95, stdev=44.50, samples=20
00:23:39.413    lat (msec)   : 50=1.74%, 100=7.89%, 250=2.05%, 500=4.82%, 750=41.19%
00:23:39.413    lat (msec)   : 1000=39.55%, 2000=2.77%
00:23:39.413    cpu          : usr=0.05%, sys=0.65%, ctx=258, majf=0, minf=4097
00:23:39.413    IO depths    : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.3%, >=64=93.5%
00:23:39.413       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:39.413       complete  : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:23:39.413       issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:23:39.413       latency   : target=0, window=0, percentile=100.00%, depth=64
00:23:39.413  job2: (groupid=0, jobs=1): err= 0: pid=105368: Fri Dec 13 19:08:09 2024
00:23:39.413    read: IOPS=67, BW=16.9MiB/s (17.7MB/s)(173MiB/10231msec)
00:23:39.413      slat (usec): min=18, max=599174, avg=14704.11, stdev=65519.71
00:23:39.413      clat (msec): min=51, max=1522, avg=929.53, stdev=247.61
00:23:39.413       lat (msec): min=298, max=1522, avg=944.24, stdev=253.10
00:23:39.413      clat percentiles (msec):
00:23:39.413       |  1.00th=[  443],  5.00th=[  642], 10.00th=[  667], 20.00th=[  701],
00:23:39.413       | 30.00th=[  735], 40.00th=[  793], 50.00th=[  911], 60.00th=[ 1003],
00:23:39.413       | 70.00th=[ 1070], 80.00th=[ 1183], 90.00th=[ 1267], 95.00th=[ 1368],
00:23:39.413       | 99.00th=[ 1502], 99.50th=[ 1502], 99.90th=[ 1519], 99.95th=[ 1519],
00:23:39.413       | 99.99th=[ 1519]
00:23:39.413     bw (  KiB/s): min= 2048, max=32256, per=2.39%, avg=16918.95, stdev=8521.25, samples=19
00:23:39.413     iops        : min=    8, max=  126, avg=65.95, stdev=33.22, samples=19
00:23:39.413    lat (msec)   : 100=0.14%, 500=2.02%, 750=33.09%, 1000=24.86%, 2000=39.88%
00:23:39.413    cpu          : usr=0.01%, sys=0.46%, ctx=116, majf=0, minf=4097
00:23:39.413    IO depths    : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.3%, 32=4.6%, >=64=90.9%
00:23:39.413       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:39.413       complete  : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0%
00:23:39.413       issued rwts: total=692,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:23:39.413       latency   : target=0, window=0, percentile=100.00%, depth=64
00:23:39.413  job3: (groupid=0, jobs=1): err= 0: pid=105369: Fri Dec 13 19:08:09 2024
00:23:39.413    read: IOPS=465, BW=116MiB/s (122MB/s)(1190MiB/10237msec)
00:23:39.413      slat (usec): min=17, max=230707, avg=1864.99, stdev=10591.69
00:23:39.414      clat (msec): min=2, max=877, avg=135.34, stdev=111.93
00:23:39.414       lat (msec): min=2, max=882, avg=137.20, stdev=113.29
00:23:39.414      clat percentiles (msec):
00:23:39.414       |  1.00th=[   11],  5.00th=[   22], 10.00th=[   28], 20.00th=[   68],
00:23:39.414       | 30.00th=[   73], 40.00th=[   79], 50.00th=[  126], 60.00th=[  134],
00:23:39.414       | 70.00th=[  148], 80.00th=[  199], 90.00th=[  239], 95.00th=[  330],
00:23:39.414       | 99.00th=[  506], 99.50th=[  860], 99.90th=[  877], 99.95th=[  877],
00:23:39.414       | 99.99th=[  877]
00:23:39.414     bw (  KiB/s): min=32191, max=251392, per=16.99%, avg=120160.70, stdev=69854.82, samples=20
00:23:39.414     iops        : min=  125, max=  982, avg=469.25, stdev=272.85, samples=20
00:23:39.414    lat (msec)   : 4=0.25%, 10=0.69%, 20=3.44%, 50=9.20%, 100=34.19%
00:23:39.414    lat (msec)   : 250=43.16%, 500=7.33%, 750=1.11%, 1000=0.61%
00:23:39.414    cpu          : usr=0.23%, sys=1.94%, ctx=1284, majf=0, minf=4097
00:23:39.414    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7%
00:23:39.414       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:39.414       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:23:39.414       issued rwts: total=4761,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:23:39.414       latency   : target=0, window=0, percentile=100.00%, depth=64
00:23:39.414  job4: (groupid=0, jobs=1): err= 0: pid=105370: Fri Dec 13 19:08:09 2024
00:23:39.414    read: IOPS=501, BW=125MiB/s (132MB/s)(1264MiB/10073msec)
00:23:39.414      slat (usec): min=14, max=322492, avg=1715.01, stdev=9337.00
00:23:39.414      clat (msec): min=2, max=865, avg=125.49, stdev=97.31
00:23:39.414       lat (msec): min=2, max=865, avg=127.21, stdev=98.19
00:23:39.414      clat percentiles (msec):
00:23:39.414       |  1.00th=[   28],  5.00th=[   45], 10.00th=[   49], 20.00th=[   55],
00:23:39.414       | 30.00th=[   58], 40.00th=[   79], 50.00th=[  127], 60.00th=[  136],
00:23:39.414       | 70.00th=[  144], 80.00th=[  157], 90.00th=[  194], 95.00th=[  247],
00:23:39.414       | 99.00th=[  506], 99.50th=[  542], 99.90th=[  785], 99.95th=[  785],
00:23:39.414       | 99.99th=[  869]
00:23:39.414     bw (  KiB/s): min=19928, max=302916, per=18.05%, avg=127706.00, stdev=80402.95, samples=20
00:23:39.414     iops        : min=   77, max= 1183, avg=498.75, stdev=314.14, samples=20
00:23:39.414    lat (msec)   : 4=0.04%, 10=0.14%, 20=0.22%, 50=11.97%, 100=29.38%
00:23:39.414    lat (msec)   : 250=53.55%, 500=3.26%, 750=0.95%, 1000=0.49%
00:23:39.414    cpu          : usr=0.31%, sys=2.11%, ctx=1138, majf=0, minf=4097
00:23:39.414    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8%
00:23:39.414       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:39.414       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:23:39.414       issued rwts: total=5055,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:23:39.414       latency   : target=0, window=0, percentile=100.00%, depth=64
00:23:39.414  job5: (groupid=0, jobs=1): err= 0: pid=105371: Fri Dec 13 19:08:09 2024
00:23:39.414    read: IOPS=102, BW=25.6MiB/s (26.8MB/s)(262MiB/10242msec)
00:23:39.414      slat (usec): min=22, max=615516, avg=8683.74, stdev=43602.02
00:23:39.414      clat (usec): min=988, max=1305.7k, avg=615764.87, stdev=298734.01
00:23:39.414       lat (usec): min=1014, max=1425.0k, avg=624448.61, stdev=304547.24
00:23:39.414      clat percentiles (usec):
00:23:39.414       |  1.00th=[   1369],  5.00th=[   2507], 10.00th=[  87557],
00:23:39.414       | 20.00th=[ 396362], 30.00th=[ 492831], 40.00th=[ 616563],
00:23:39.414       | 50.00th=[ 666895], 60.00th=[ 725615], 70.00th=[ 784335],
00:23:39.414       | 80.00th=[ 859833], 90.00th=[ 935330], 95.00th=[1002439],
00:23:39.414       | 99.00th=[1233126], 99.50th=[1300235], 99.90th=[1300235],
00:23:39.414       | 99.95th=[1300235], 99.99th=[1300235]
00:23:39.414     bw (  KiB/s): min= 7680, max=81920, per=3.55%, avg=25147.90, stdev=15886.92, samples=20
00:23:39.414     iops        : min=   30, max=  320, avg=98.05, stdev=62.08, samples=20
00:23:39.414    lat (usec)   : 1000=0.10%
00:23:39.414    lat (msec)   : 2=3.25%, 4=2.87%, 10=0.67%, 20=0.86%, 50=0.86%
00:23:39.414    lat (msec)   : 100=3.34%, 250=1.53%, 500=17.29%, 750=31.04%, 1000=32.86%
00:23:39.414    lat (msec)   : 2000=5.35%
00:23:39.414    cpu          : usr=0.05%, sys=0.56%, ctx=277, majf=0, minf=4097
00:23:39.414    IO depths    : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.5%, 32=3.1%, >=64=94.0%
00:23:39.414       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:39.414       complete  : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:23:39.414       issued rwts: total=1047,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:23:39.414       latency   : target=0, window=0, percentile=100.00%, depth=64
00:23:39.414  job6: (groupid=0, jobs=1): err= 0: pid=105372: Fri Dec 13 19:08:09 2024
00:23:39.414    read: IOPS=128, BW=32.1MiB/s (33.7MB/s)(329MiB/10227msec)
00:23:39.414      slat (usec): min=21, max=473947, avg=7530.12, stdev=43500.21
00:23:39.414      clat (msec): min=2, max=993, avg=488.90, stdev=338.30
00:23:39.414       lat (msec): min=2, max=1326, avg=496.43, stdev=345.64
00:23:39.414      clat percentiles (msec):
00:23:39.414       |  1.00th=[   23],  5.00th=[   36], 10.00th=[   42], 20.00th=[   73],
00:23:39.414       | 30.00th=[  100], 40.00th=[  542], 50.00th=[  625], 60.00th=[  701],
00:23:39.414       | 70.00th=[  768], 80.00th=[  802], 90.00th=[  869], 95.00th=[  894],
00:23:39.414       | 99.00th=[  978], 99.50th=[  978], 99.90th=[  995], 99.95th=[  995],
00:23:39.414       | 99.99th=[  995]
00:23:39.414     bw (  KiB/s): min= 3584, max=207360, per=4.52%, avg=31991.05, stdev=42389.27, samples=20
00:23:39.414     iops        : min=   14, max=  810, avg=124.85, stdev=165.59, samples=20
00:23:39.414    lat (msec)   : 4=0.15%, 10=0.38%, 20=0.38%, 50=11.11%, 100=18.04%
00:23:39.414    lat (msec)   : 250=7.61%, 750=31.35%, 1000=30.97%
00:23:39.414    cpu          : usr=0.09%, sys=0.73%, ctx=268, majf=0, minf=4097
00:23:39.414    IO depths    : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.4%, >=64=95.2%
00:23:39.414       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:39.414       complete  : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:23:39.414       issued rwts: total=1314,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:23:39.414       latency   : target=0, window=0, percentile=100.00%, depth=64
00:23:39.414  job7: (groupid=0, jobs=1): err= 0: pid=105373: Fri Dec 13 19:08:09 2024
00:23:39.414    read: IOPS=92, BW=23.0MiB/s (24.1MB/s)(235MiB/10224msec)
00:23:39.414      slat (usec): min=13, max=461602, avg=10281.65, stdev=46905.73
00:23:39.414      clat (msec): min=38, max=1218, avg=683.18, stdev=179.52
00:23:39.414       lat (msec): min=40, max=1266, avg=693.46, stdev=187.06
00:23:39.414      clat percentiles (msec):
00:23:39.414       |  1.00th=[   84],  5.00th=[  296], 10.00th=[  397], 20.00th=[  567],
00:23:39.414       | 30.00th=[  659], 40.00th=[  676], 50.00th=[  718], 60.00th=[  743],
00:23:39.414       | 70.00th=[  802], 80.00th=[  827], 90.00th=[  869], 95.00th=[  877],
00:23:39.414       | 99.00th=[  953], 99.50th=[ 1217], 99.90th=[ 1217], 99.95th=[ 1217],
00:23:39.414       | 99.99th=[ 1217]
00:23:39.414     bw (  KiB/s): min= 7168, max=32768, per=3.17%, avg=22444.40, stdev=8118.51, samples=20
00:23:39.414     iops        : min=   28, max=  128, avg=87.55, stdev=31.72, samples=20
00:23:39.414    lat (msec)   : 50=0.11%, 100=0.96%, 250=2.23%, 500=8.50%, 750=48.57%
00:23:39.414    lat (msec)   : 1000=39.11%, 2000=0.53%
00:23:39.414    cpu          : usr=0.03%, sys=0.56%, ctx=119, majf=0, minf=4097
00:23:39.414    IO depths    : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.7%, 32=3.4%, >=64=93.3%
00:23:39.414       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:39.414       complete  : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:23:39.414       issued rwts: total=941,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:23:39.414       latency   : target=0, window=0, percentile=100.00%, depth=64
00:23:39.414  job8: (groupid=0, jobs=1): err= 0: pid=105374: Fri Dec 13 19:08:09 2024
00:23:39.414    read: IOPS=352, BW=88.1MiB/s (92.4MB/s)(887MiB/10058msec)
00:23:39.414      slat (usec): min=15, max=692851, avg=2272.74, stdev=23391.22
00:23:39.414      clat (msec): min=2, max=1497, avg=178.81, stdev=225.82
00:23:39.414       lat (msec): min=2, max=1500, avg=181.08, stdev=229.73
00:23:39.414      clat percentiles (msec):
00:23:39.414       |  1.00th=[    8],  5.00th=[   18], 10.00th=[   44], 20.00th=[   50],
00:23:39.414       | 30.00th=[   53], 40.00th=[   56], 50.00th=[   67], 60.00th=[   99],
00:23:39.414       | 70.00th=[  220], 80.00th=[  271], 90.00th=[  527], 95.00th=[  701],
00:23:39.414       | 99.00th=[  911], 99.50th=[ 1418], 99.90th=[ 1502], 99.95th=[ 1502],
00:23:39.414       | 99.99th=[ 1502]
00:23:39.414     bw (  KiB/s): min= 6656, max=297472, per=13.26%, avg=93827.79, stdev=88699.72, samples=19
00:23:39.414     iops        : min=   26, max= 1162, avg=366.37, stdev=346.53, samples=19
00:23:39.414    lat (msec)   : 4=0.11%, 10=3.07%, 20=1.95%, 50=15.85%, 100=39.31%
00:23:39.414    lat (msec)   : 250=16.41%, 500=12.89%, 750=5.84%, 1000=3.81%, 2000=0.76%
00:23:39.414    cpu          : usr=0.25%, sys=1.66%, ctx=1255, majf=0, minf=4097
00:23:39.414    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2%
00:23:39.414       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:39.414       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:23:39.414       issued rwts: total=3546,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:23:39.414       latency   : target=0, window=0, percentile=100.00%, depth=64
00:23:39.414  job9: (groupid=0, jobs=1): err= 0: pid=105375: Fri Dec 13 19:08:09 2024
00:23:39.414    read: IOPS=78, BW=19.7MiB/s (20.7MB/s)(202MiB/10226msec)
00:23:39.414      slat (usec): min=15, max=408272, avg=12477.86, stdev=48943.34
00:23:39.414      clat (msec): min=28, max=1514, avg=796.28, stdev=317.39
00:23:39.414       lat (msec): min=28, max=1514, avg=808.76, stdev=322.64
00:23:39.414      clat percentiles (msec):
00:23:39.414       |  1.00th=[   32],  5.00th=[   41], 10.00th=[  435], 20.00th=[  625],
00:23:39.414       | 30.00th=[  642], 40.00th=[  701], 50.00th=[  776], 60.00th=[  877],
00:23:39.414       | 70.00th=[  936], 80.00th=[ 1053], 90.00th=[ 1217], 95.00th=[ 1318],
00:23:39.414       | 99.00th=[ 1452], 99.50th=[ 1519], 99.90th=[ 1519], 99.95th=[ 1519],
00:23:39.414       | 99.99th=[ 1519]
00:23:39.414     bw (  KiB/s): min= 5632, max=32256, per=2.69%, avg=19016.05, stdev=7987.90, samples=20
00:23:39.414     iops        : min=   22, max=  126, avg=74.15, stdev=31.13, samples=20
00:23:39.414    lat (msec)   : 50=5.82%, 250=0.12%, 500=7.81%, 750=33.58%, 1000=29.37%
00:23:39.414    lat (msec)   : 2000=23.30%
00:23:39.414    cpu          : usr=0.04%, sys=0.44%, ctx=172, majf=0, minf=4097
00:23:39.414    IO depths    : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.0%, >=64=92.2%
00:23:39.414       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:39.414       complete  : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:23:39.414       issued rwts: total=807,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:23:39.414       latency   : target=0, window=0, percentile=100.00%, depth=64
00:23:39.414  job10: (groupid=0, jobs=1): err= 0: pid=105376: Fri Dec 13 19:08:09 2024
00:23:39.414    read: IOPS=558, BW=140MiB/s (146MB/s)(1407MiB/10080msec)
00:23:39.414      slat (usec): min=19, max=400378, avg=1747.88, stdev=10324.38
00:23:39.414      clat (msec): min=26, max=635, avg=112.66, stdev=94.27
00:23:39.414       lat (msec): min=27, max=660, avg=114.41, stdev=95.84
00:23:39.414      clat percentiles (msec):
00:23:39.414       |  1.00th=[   40],  5.00th=[   47], 10.00th=[   51], 20.00th=[   56],
00:23:39.414       | 30.00th=[   61], 40.00th=[   64], 50.00th=[   69], 60.00th=[   77],
00:23:39.414       | 70.00th=[  123], 80.00th=[  180], 90.00th=[  228], 95.00th=[  268],
00:23:39.414       | 99.00th=[  498], 99.50th=[  514], 99.90th=[  634], 99.95th=[  634],
00:23:39.415       | 99.99th=[  634]
00:23:39.415     bw (  KiB/s): min=32256, max=276439, per=20.12%, avg=142357.15, stdev=94972.08, samples=20
00:23:39.415     iops        : min=  126, max= 1079, avg=556.00, stdev=370.96, samples=20
00:23:39.415    lat (msec)   : 50=8.73%, 100=60.74%, 250=24.07%, 500=5.78%, 750=0.69%
00:23:39.415    cpu          : usr=0.22%, sys=2.12%, ctx=1152, majf=0, minf=4097
00:23:39.415    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9%
00:23:39.415       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:39.415       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:23:39.415       issued rwts: total=5626,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:23:39.415       latency   : target=0, window=0, percentile=100.00%, depth=64
00:23:39.415  
00:23:39.415  Run status group 0 (all jobs):
00:23:39.415     READ: bw=691MiB/s (724MB/s), 16.9MiB/s-140MiB/s (17.7MB/s-146MB/s), io=7076MiB (7419MB), run=10058-10242msec
00:23:39.415  
00:23:39.415  Disk stats (read/write):
00:23:39.415    nvme0n1: ios=6948/0, merge=0/0, ticks=1198689/0, in_queue=1198689, util=97.41%
00:23:39.415    nvme10n1: ios=1824/0, merge=0/0, ticks=1176110/0, in_queue=1176110, util=97.70%
00:23:39.415    nvme1n1: ios=1257/0, merge=0/0, ticks=1160784/0, in_queue=1160784, util=97.62%
00:23:39.415    nvme2n1: ios=9395/0, merge=0/0, ticks=1204983/0, in_queue=1204983, util=97.77%
00:23:39.415    nvme3n1: ios=9965/0, merge=0/0, ticks=1232654/0, in_queue=1232654, util=97.88%
00:23:39.415    nvme4n1: ios=1967/0, merge=0/0, ticks=1179712/0, in_queue=1179712, util=97.93%
00:23:39.415    nvme5n1: ios=2501/0, merge=0/0, ticks=1189726/0, in_queue=1189726, util=98.36%
00:23:39.415    nvme6n1: ios=1755/0, merge=0/0, ticks=1219684/0, in_queue=1219684, util=98.39%
00:23:39.415    nvme7n1: ios=6956/0, merge=0/0, ticks=1243989/0, in_queue=1243989, util=98.54%
00:23:39.415    nvme8n1: ios=1487/0, merge=0/0, ticks=1210206/0, in_queue=1210206, util=98.58%
00:23:39.415    nvme9n1: ios=11111/0, merge=0/0, ticks=1233285/0, in_queue=1233285, util=98.59%
00:23:39.415   19:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10
00:23:39.415  [global]
00:23:39.415  thread=1
00:23:39.415  invalidate=1
00:23:39.415  rw=randwrite
00:23:39.415  time_based=1
00:23:39.415  runtime=10
00:23:39.415  ioengine=libaio
00:23:39.415  direct=1
00:23:39.415  bs=262144
00:23:39.415  iodepth=64
00:23:39.415  norandommap=1
00:23:39.415  numjobs=1
00:23:39.415  
00:23:39.415  [job0]
00:23:39.415  filename=/dev/nvme0n1
00:23:39.415  [job1]
00:23:39.415  filename=/dev/nvme10n1
00:23:39.415  [job2]
00:23:39.415  filename=/dev/nvme1n1
00:23:39.415  [job3]
00:23:39.415  filename=/dev/nvme2n1
00:23:39.415  [job4]
00:23:39.415  filename=/dev/nvme3n1
00:23:39.415  [job5]
00:23:39.415  filename=/dev/nvme4n1
00:23:39.415  [job6]
00:23:39.415  filename=/dev/nvme5n1
00:23:39.415  [job7]
00:23:39.415  filename=/dev/nvme6n1
00:23:39.415  [job8]
00:23:39.415  filename=/dev/nvme7n1
00:23:39.415  [job9]
00:23:39.415  filename=/dev/nvme8n1
00:23:39.415  [job10]
00:23:39.415  filename=/dev/nvme9n1
00:23:39.415  Could not set queue depth (nvme0n1)
00:23:39.415  Could not set queue depth (nvme10n1)
00:23:39.415  Could not set queue depth (nvme1n1)
00:23:39.415  Could not set queue depth (nvme2n1)
00:23:39.415  Could not set queue depth (nvme3n1)
00:23:39.415  Could not set queue depth (nvme4n1)
00:23:39.415  Could not set queue depth (nvme5n1)
00:23:39.415  Could not set queue depth (nvme6n1)
00:23:39.415  Could not set queue depth (nvme7n1)
00:23:39.415  Could not set queue depth (nvme8n1)
00:23:39.415  Could not set queue depth (nvme9n1)
00:23:39.415  job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:23:39.415  job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:23:39.415  job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:23:39.415  job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:23:39.415  job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:23:39.415  job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:23:39.415  job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:23:39.415  job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:23:39.415  job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:23:39.415  job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:23:39.415  job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:23:39.415  fio-3.35
00:23:39.415  Starting 11 threads
00:23:49.394  
00:23:49.394  job0: (groupid=0, jobs=1): err= 0: pid=105581: Fri Dec 13 19:08:20 2024
00:23:49.394    write: IOPS=165, BW=41.5MiB/s (43.5MB/s)(427MiB/10282msec); 0 zone resets
00:23:49.394      slat (usec): min=16, max=40751, avg=5742.96, stdev=11983.13
00:23:49.394      clat (msec): min=18, max=762, avg=379.77, stdev=196.83
00:23:49.394       lat (msec): min=18, max=762, avg=385.51, stdev=199.68
00:23:49.394      clat percentiles (msec):
00:23:49.394       |  1.00th=[   75],  5.00th=[   87], 10.00th=[   92], 20.00th=[  116],
00:23:49.394       | 30.00th=[  150], 40.00th=[  472], 50.00th=[  502], 60.00th=[  510],
00:23:49.394       | 70.00th=[  527], 80.00th=[  542], 90.00th=[  550], 95.00th=[  558],
00:23:49.394       | 99.00th=[  642], 99.50th=[  701], 99.90th=[  760], 99.95th=[  760],
00:23:49.394       | 99.99th=[  760]
00:23:49.394     bw (  KiB/s): min=28614, max=139776, per=3.98%, avg=42019.90, stdev=32416.89, samples=20
00:23:49.394     iops        : min=  111, max=  546, avg=163.95, stdev=126.70, samples=20
00:23:49.394    lat (msec)   : 20=0.23%, 50=0.23%, 100=15.77%, 250=17.88%, 500=14.07%
00:23:49.394    lat (msec)   : 750=51.70%, 1000=0.12%
00:23:49.394    cpu          : usr=0.40%, sys=0.49%, ctx=2430, majf=0, minf=1
00:23:49.394    IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.9%, >=64=96.3%
00:23:49.394       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:49.394       complete  : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:23:49.394       issued rwts: total=0,1706,0,0 short=0,0,0,0 dropped=0,0,0,0
00:23:49.394       latency   : target=0, window=0, percentile=100.00%, depth=64
00:23:49.394  job1: (groupid=0, jobs=1): err= 0: pid=105582: Fri Dec 13 19:08:20 2024
00:23:49.394    write: IOPS=120, BW=30.0MiB/s (31.5MB/s)(309MiB/10291msec); 0 zone resets
00:23:49.394      slat (usec): min=29, max=63566, avg=8110.19, stdev=14495.82
00:23:49.394      clat (msec): min=39, max=776, avg=524.65, stdev=82.08
00:23:49.394       lat (msec): min=39, max=777, avg=532.76, stdev=82.12
00:23:49.394      clat percentiles (msec):
00:23:49.394       |  1.00th=[  146],  5.00th=[  460], 10.00th=[  477], 20.00th=[  502],
00:23:49.394       | 30.00th=[  506], 40.00th=[  514], 50.00th=[  531], 60.00th=[  542],
00:23:49.394       | 70.00th=[  542], 80.00th=[  558], 90.00th=[  567], 95.00th=[  676],
00:23:49.394       | 99.00th=[  735], 99.50th=[  743], 99.90th=[  776], 99.95th=[  776],
00:23:49.394       | 99.99th=[  776]
00:23:49.394     bw (  KiB/s): min=22016, max=32833, per=2.84%, avg=29965.30, stdev=2757.43, samples=20
00:23:49.394     iops        : min=   86, max=  128, avg=116.80, stdev=10.81, samples=20
00:23:49.394    lat (msec)   : 50=0.24%, 100=0.32%, 250=1.30%, 500=17.33%, 750=80.65%
00:23:49.394    lat (msec)   : 1000=0.16%
00:23:49.394    cpu          : usr=0.42%, sys=0.44%, ctx=1912, majf=0, minf=1
00:23:49.394    IO depths    : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.3%, 32=2.6%, >=64=94.9%
00:23:49.394       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:49.394       complete  : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:23:49.394       issued rwts: total=0,1235,0,0 short=0,0,0,0 dropped=0,0,0,0
00:23:49.394       latency   : target=0, window=0, percentile=100.00%, depth=64
00:23:49.394  job2: (groupid=0, jobs=1): err= 0: pid=105594: Fri Dec 13 19:08:20 2024
00:23:49.394    write: IOPS=113, BW=28.3MiB/s (29.7MB/s)(291MiB/10286msec); 0 zone resets
00:23:49.394      slat (usec): min=24, max=97110, avg=8600.74, stdev=15951.29
00:23:49.394      clat (msec): min=89, max=821, avg=556.56, stdev=79.46
00:23:49.394       lat (msec): min=89, max=821, avg=565.16, stdev=79.25
00:23:49.394      clat percentiles (msec):
00:23:49.394       |  1.00th=[  232],  5.00th=[  468], 10.00th=[  502], 20.00th=[  527],
00:23:49.394       | 30.00th=[  531], 40.00th=[  542], 50.00th=[  558], 60.00th=[  567],
00:23:49.394       | 70.00th=[  584], 80.00th=[  609], 90.00th=[  634], 95.00th=[  684],
00:23:49.394       | 99.00th=[  776], 99.50th=[  776], 99.90th=[  818], 99.95th=[  818],
00:23:49.394       | 99.99th=[  818]
00:23:49.394     bw (  KiB/s): min=18432, max=32768, per=2.67%, avg=28146.25, stdev=3512.03, samples=20
00:23:49.394     iops        : min=   72, max=  128, avg=109.70, stdev=13.76, samples=20
00:23:49.394    lat (msec)   : 100=0.34%, 250=0.69%, 500=9.62%, 750=88.32%, 1000=1.03%
00:23:49.394    cpu          : usr=0.33%, sys=0.38%, ctx=853, majf=0, minf=1
00:23:49.394    IO depths    : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.4%, 32=2.7%, >=64=94.6%
00:23:49.394       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:49.394       complete  : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:23:49.394       issued rwts: total=0,1164,0,0 short=0,0,0,0 dropped=0,0,0,0
00:23:49.394       latency   : target=0, window=0, percentile=100.00%, depth=64
00:23:49.394  job3: (groupid=0, jobs=1): err= 0: pid=105595: Fri Dec 13 19:08:20 2024
00:23:49.394    write: IOPS=172, BW=43.1MiB/s (45.2MB/s)(444MiB/10280msec); 0 zone resets
00:23:49.394      slat (usec): min=17, max=71683, avg=5487.23, stdev=11850.18
00:23:49.394      clat (msec): min=4, max=764, avg=365.12, stdev=206.73
00:23:49.394       lat (msec): min=4, max=764, avg=370.60, stdev=209.79
00:23:49.394      clat percentiles (msec):
00:23:49.394       |  1.00th=[   20],  5.00th=[   59], 10.00th=[   65], 20.00th=[  101],
00:23:49.394       | 30.00th=[  163], 40.00th=[  447], 50.00th=[  502], 60.00th=[  510],
00:23:49.394       | 70.00th=[  527], 80.00th=[  542], 90.00th=[  558], 95.00th=[  567],
00:23:49.394       | 99.00th=[  642], 99.50th=[  709], 99.90th=[  768], 99.95th=[  768],
00:23:49.394       | 99.99th=[  768]
00:23:49.394     bw (  KiB/s): min=28614, max=205312, per=4.15%, avg=43767.05, stdev=41156.70, samples=20
00:23:49.394     iops        : min=  111, max=  802, avg=170.80, stdev=160.83, samples=20
00:23:49.394    lat (msec)   : 10=0.23%, 20=0.90%, 50=2.65%, 100=16.12%, 250=16.85%
00:23:49.394    lat (msec)   : 500=13.59%, 750=49.55%, 1000=0.11%
00:23:49.394    cpu          : usr=0.48%, sys=0.68%, ctx=2189, majf=0, minf=1
00:23:49.394    IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.8%, >=64=96.4%
00:23:49.394       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:49.394       complete  : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:23:49.394       issued rwts: total=0,1774,0,0 short=0,0,0,0 dropped=0,0,0,0
00:23:49.394       latency   : target=0, window=0, percentile=100.00%, depth=64
00:23:49.394  job4: (groupid=0, jobs=1): err= 0: pid=105596: Fri Dec 13 19:08:20 2024
00:23:49.394    write: IOPS=114, BW=28.7MiB/s (30.1MB/s)(295MiB/10286msec); 0 zone resets
00:23:49.395      slat (usec): min=25, max=138085, avg=8473.63, stdev=15919.21
00:23:49.395      clat (msec): min=47, max=797, avg=548.95, stdev=96.99
00:23:49.395       lat (msec): min=47, max=797, avg=557.42, stdev=97.30
00:23:49.395      clat percentiles (msec):
00:23:49.395       |  1.00th=[  122],  5.00th=[  443], 10.00th=[  493], 20.00th=[  506],
00:23:49.395       | 30.00th=[  523], 40.00th=[  531], 50.00th=[  542], 60.00th=[  550],
00:23:49.395       | 70.00th=[  575], 80.00th=[  617], 90.00th=[  634], 95.00th=[  726],
00:23:49.395       | 99.00th=[  793], 99.50th=[  793], 99.90th=[  802], 99.95th=[  802],
00:23:49.395       | 99.99th=[  802]
00:23:49.395     bw (  KiB/s): min=18468, max=32768, per=2.71%, avg=28583.30, stdev=3685.05, samples=20
00:23:49.395     iops        : min=   72, max=  128, avg=111.40, stdev=14.49, samples=20
00:23:49.395    lat (msec)   : 50=0.34%, 100=0.34%, 250=1.36%, 500=11.78%, 750=82.03%
00:23:49.395    lat (msec)   : 1000=4.15%
00:23:49.395    cpu          : usr=0.39%, sys=0.33%, ctx=1093, majf=0, minf=1
00:23:49.395    IO depths    : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.4%, 32=2.7%, >=64=94.7%
00:23:49.395       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:49.395       complete  : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:23:49.395       issued rwts: total=0,1180,0,0 short=0,0,0,0 dropped=0,0,0,0
00:23:49.395       latency   : target=0, window=0, percentile=100.00%, depth=64
00:23:49.395  job5: (groupid=0, jobs=1): err= 0: pid=105597: Fri Dec 13 19:08:20 2024
00:23:49.395    write: IOPS=427, BW=107MiB/s (112MB/s)(1080MiB/10114msec); 0 zone resets
00:23:49.395      slat (usec): min=22, max=100206, avg=2281.14, stdev=4226.44
00:23:49.395      clat (msec): min=7, max=259, avg=147.48, stdev=23.01
00:23:49.395       lat (msec): min=7, max=261, avg=149.76, stdev=22.90
00:23:49.395      clat percentiles (msec):
00:23:49.395       |  1.00th=[   58],  5.00th=[  127], 10.00th=[  132], 20.00th=[  136],
00:23:49.395       | 30.00th=[  140], 40.00th=[  144], 50.00th=[  148], 60.00th=[  150],
00:23:49.395       | 70.00th=[  153], 80.00th=[  157], 90.00th=[  169], 95.00th=[  184],
00:23:49.395       | 99.00th=[  230], 99.50th=[  249], 99.90th=[  257], 99.95th=[  259],
00:23:49.395       | 99.99th=[  259]
00:23:49.395     bw (  KiB/s): min=87888, max=122634, per=10.32%, avg=108904.65, stdev=8072.46, samples=20
00:23:49.395     iops        : min=  343, max=  479, avg=425.35, stdev=31.59, samples=20
00:23:49.395    lat (msec)   : 10=0.21%, 20=0.19%, 50=0.46%, 100=1.27%, 250=97.52%
00:23:49.395    lat (msec)   : 500=0.35%
00:23:49.395    cpu          : usr=1.27%, sys=1.29%, ctx=3097, majf=0, minf=1
00:23:49.395    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5%
00:23:49.395       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:49.395       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:23:49.395       issued rwts: total=0,4320,0,0 short=0,0,0,0 dropped=0,0,0,0
00:23:49.395       latency   : target=0, window=0, percentile=100.00%, depth=64
00:23:49.395  job6: (groupid=0, jobs=1): err= 0: pid=105598: Fri Dec 13 19:08:20 2024
00:23:49.395    write: IOPS=126, BW=31.5MiB/s (33.1MB/s)(324MiB/10279msec); 0 zone resets
00:23:49.395      slat (usec): min=23, max=152542, avg=7590.80, stdev=14595.84
00:23:49.395      clat (usec): min=908, max=822965, avg=499250.59, stdev=139685.43
00:23:49.395       lat (usec): min=947, max=823068, avg=506841.39, stdev=140963.75
00:23:49.395      clat percentiles (usec):
00:23:49.395       |  1.00th=[  1123],  5.00th=[ 78119], 10.00th=[463471], 20.00th=[497026],
00:23:49.395       | 30.00th=[505414], 40.00th=[509608], 50.00th=[526386], 60.00th=[541066],
00:23:49.395       | 70.00th=[541066], 80.00th=[557843], 90.00th=[566232], 95.00th=[658506],
00:23:49.395       | 99.00th=[775947], 99.50th=[792724], 99.90th=[817890], 99.95th=[826278],
00:23:49.395       | 99.99th=[826278]
00:23:49.395     bw (  KiB/s): min=22528, max=55808, per=2.99%, avg=31572.00, stdev=6174.78, samples=20
00:23:49.395     iops        : min=   88, max=  218, avg=123.10, stdev=24.17, samples=20
00:23:49.395    lat (usec)   : 1000=0.23%
00:23:49.395    lat (msec)   : 2=1.93%, 4=1.77%, 10=0.31%, 20=0.31%, 50=0.39%
00:23:49.395    lat (msec)   : 100=0.46%, 250=1.46%, 500=16.50%, 750=75.17%, 1000=1.46%
00:23:49.395    cpu          : usr=0.37%, sys=0.46%, ctx=1711, majf=0, minf=1
00:23:49.395    IO depths    : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.5%, >=64=95.1%
00:23:49.395       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:49.395       complete  : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:23:49.395       issued rwts: total=0,1297,0,0 short=0,0,0,0 dropped=0,0,0,0
00:23:49.395       latency   : target=0, window=0, percentile=100.00%, depth=64
00:23:49.395  job7: (groupid=0, jobs=1): err= 0: pid=105599: Fri Dec 13 19:08:20 2024
00:23:49.395    write: IOPS=119, BW=30.0MiB/s (31.4MB/s)(308MiB/10286msec); 0 zone resets
00:23:49.395      slat (usec): min=31, max=58266, avg=8111.56, stdev=14516.02
00:23:49.395      clat (msec): min=25, max=785, avg=525.40, stdev=82.39
00:23:49.395       lat (msec): min=25, max=785, avg=533.51, stdev=82.49
00:23:49.395      clat percentiles (msec):
00:23:49.395       |  1.00th=[  130],  5.00th=[  439], 10.00th=[  477], 20.00th=[  502],
00:23:49.395       | 30.00th=[  506], 40.00th=[  514], 50.00th=[  531], 60.00th=[  542],
00:23:49.395       | 70.00th=[  550], 80.00th=[  558], 90.00th=[  575], 95.00th=[  667],
00:23:49.395       | 99.00th=[  726], 99.50th=[  735], 99.90th=[  785], 99.95th=[  785],
00:23:49.395       | 99.99th=[  785]
00:23:49.395     bw (  KiB/s): min=22573, max=34816, per=2.84%, avg=29941.20, stdev=2932.09, samples=20
00:23:49.395     iops        : min=   88, max=  136, avg=116.70, stdev=11.47, samples=20
00:23:49.395    lat (msec)   : 50=0.08%, 100=0.65%, 250=0.97%, 500=17.60%, 750=80.21%
00:23:49.395    lat (msec)   : 1000=0.49%
00:23:49.395    cpu          : usr=0.36%, sys=0.41%, ctx=1533, majf=0, minf=1
00:23:49.395    IO depths    : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.3%, 32=2.6%, >=64=94.9%
00:23:49.395       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:49.395       complete  : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:23:49.395       issued rwts: total=0,1233,0,0 short=0,0,0,0 dropped=0,0,0,0
00:23:49.395       latency   : target=0, window=0, percentile=100.00%, depth=64
00:23:49.395  job8: (groupid=0, jobs=1): err= 0: pid=105600: Fri Dec 13 19:08:20 2024
00:23:49.395    write: IOPS=1332, BW=333MiB/s (349MB/s)(3345MiB/10044msec); 0 zone resets
00:23:49.395      slat (usec): min=14, max=39439, avg=675.61, stdev=1322.72
00:23:49.395      clat (msec): min=12, max=979, avg=47.34, stdev=50.54
00:23:49.395       lat (msec): min=12, max=979, avg=48.02, stdev=50.69
00:23:49.395      clat percentiles (msec):
00:23:49.395       |  1.00th=[   24],  5.00th=[   40], 10.00th=[   40], 20.00th=[   41],
00:23:49.395       | 30.00th=[   42], 40.00th=[   43], 50.00th=[   43], 60.00th=[   44],
00:23:49.395       | 70.00th=[   46], 80.00th=[   49], 90.00th=[   51], 95.00th=[   52],
00:23:49.395       | 99.00th=[   90], 99.50th=[  150], 99.90th=[  944], 99.95th=[  961],
00:23:49.395       | 99.99th=[  969]
00:23:49.395     bw (  KiB/s): min=87214, max=393728, per=32.28%, avg=340717.75, stdev=81196.24, samples=20
00:23:49.395     iops        : min=  340, max= 1538, avg=1330.65, stdev=317.30, samples=20
00:23:49.395    lat (msec)   : 20=0.41%, 50=87.93%, 100=10.96%, 250=0.29%, 500=0.01%
00:23:49.395    lat (msec)   : 750=0.10%, 1000=0.28%
00:23:49.395    cpu          : usr=1.74%, sys=2.68%, ctx=14372, majf=0, minf=1
00:23:49.395    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5%
00:23:49.395       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:49.395       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:23:49.395       issued rwts: total=0,13380,0,0 short=0,0,0,0 dropped=0,0,0,0
00:23:49.395       latency   : target=0, window=0, percentile=100.00%, depth=64
00:23:49.395  job9: (groupid=0, jobs=1): err= 0: pid=105602: Fri Dec 13 19:08:20 2024
00:23:49.395    write: IOPS=434, BW=109MiB/s (114MB/s)(1100MiB/10112msec); 0 zone resets
00:23:49.395      slat (usec): min=18, max=28519, avg=2105.36, stdev=3840.98
00:23:49.395      clat (usec): min=538, max=719967, avg=144985.80, stdev=51038.43
00:23:49.395       lat (usec): min=593, max=724343, avg=147091.16, stdev=51127.89
00:23:49.395      clat percentiles (usec):
00:23:49.395       |  1.00th=[  1745],  5.00th=[ 77071], 10.00th=[127402], 20.00th=[135267],
00:23:49.395       | 30.00th=[139461], 40.00th=[141558], 50.00th=[145753], 60.00th=[147850],
00:23:49.395       | 70.00th=[149947], 80.00th=[154141], 90.00th=[164627], 95.00th=[181404],
00:23:49.395       | 99.00th=[287310], 99.50th=[526386], 99.90th=[692061], 99.95th=[708838],
00:23:49.395       | 99.99th=[717226]
00:23:49.395     bw (  KiB/s): min=98107, max=139264, per=10.51%, avg=110909.80, stdev=9591.23, samples=20
00:23:49.395     iops        : min=  383, max=  544, avg=433.15, stdev=37.55, samples=20
00:23:49.395    lat (usec)   : 750=0.34%, 1000=0.09%
00:23:49.395    lat (msec)   : 2=0.84%, 4=0.93%, 10=0.48%, 20=0.18%, 50=0.30%
00:23:49.395    lat (msec)   : 100=2.82%, 250=92.88%, 500=0.59%, 750=0.55%
00:23:49.395    cpu          : usr=1.33%, sys=1.48%, ctx=5248, majf=0, minf=2
00:23:49.395    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6%
00:23:49.395       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:49.395       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:23:49.395       issued rwts: total=0,4398,0,0 short=0,0,0,0 dropped=0,0,0,0
00:23:49.395       latency   : target=0, window=0, percentile=100.00%, depth=64
00:23:49.395  job10: (groupid=0, jobs=1): err= 0: pid=105603: Fri Dec 13 19:08:20 2024
00:23:49.395    write: IOPS=1069, BW=267MiB/s (280MB/s)(2687MiB/10050msec); 0 zone resets
00:23:49.395      slat (usec): min=14, max=12839, avg=845.80, stdev=1479.07
00:23:49.395      clat (msec): min=2, max=755, avg=58.98, stdev=37.37
00:23:49.395       lat (msec): min=2, max=755, avg=59.82, stdev=37.30
00:23:49.395      clat percentiles (msec):
00:23:49.395       |  1.00th=[   49],  5.00th=[   51], 10.00th=[   52], 20.00th=[   53],
00:23:49.395       | 30.00th=[   54], 40.00th=[   55], 50.00th=[   55], 60.00th=[   56],
00:23:49.395       | 70.00th=[   57], 80.00th=[   58], 90.00th=[   60], 95.00th=[   62],
00:23:49.395       | 99.00th=[  163], 99.50th=[  188], 99.90th=[  684], 99.95th=[  718],
00:23:49.395       | 99.99th=[  751]
00:23:49.395     bw (  KiB/s): min=107008, max=308736, per=25.89%, avg=273308.70, stdev=57802.74, samples=20
00:23:49.395     iops        : min=  418, max= 1206, avg=1067.40, stdev=225.72, samples=20
00:23:49.395    lat (msec)   : 4=0.01%, 10=0.09%, 20=0.22%, 50=3.56%, 100=93.63%
00:23:49.395    lat (msec)   : 250=2.02%, 500=0.18%, 750=0.28%, 1000=0.01%
00:23:49.395    cpu          : usr=1.41%, sys=2.25%, ctx=18083, majf=0, minf=1
00:23:49.395    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4%
00:23:49.395       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:49.395       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:23:49.395       issued rwts: total=0,10747,0,0 short=0,0,0,0 dropped=0,0,0,0
00:23:49.395       latency   : target=0, window=0, percentile=100.00%, depth=64
00:23:49.395  
00:23:49.395  Run status group 0 (all jobs):
00:23:49.395    WRITE: bw=1031MiB/s (1081MB/s), 28.3MiB/s-333MiB/s (29.7MB/s-349MB/s), io=10.4GiB (11.1GB), run=10044-10291msec
00:23:49.395  
00:23:49.395  Disk stats (read/write):
00:23:49.395    nvme0n1: ios=49/3353, merge=0/0, ticks=39/1226615, in_queue=1226654, util=97.38%
00:23:49.395    nvme10n1: ios=49/2416, merge=0/0, ticks=69/1226810, in_queue=1226879, util=97.83%
00:23:49.395    nvme1n1: ios=25/2274, merge=0/0, ticks=32/1225493, in_queue=1225525, util=97.68%
00:23:49.395    nvme2n1: ios=0/3490, merge=0/0, ticks=0/1226496, in_queue=1226496, util=97.68%
00:23:49.395    nvme3n1: ios=0/2309, merge=0/0, ticks=0/1225193, in_queue=1225193, util=97.89%
00:23:49.396    nvme4n1: ios=0/8472, merge=0/0, ticks=0/1208105, in_queue=1208105, util=98.17%
00:23:49.396    nvme5n1: ios=0/2543, merge=0/0, ticks=0/1226767, in_queue=1226767, util=98.22%
00:23:49.396    nvme6n1: ios=0/2415, merge=0/0, ticks=0/1227030, in_queue=1227030, util=98.36%
00:23:49.396    nvme7n1: ios=0/26506, merge=0/0, ticks=0/1216815, in_queue=1216815, util=98.60%
00:23:49.396    nvme8n1: ios=0/8628, merge=0/0, ticks=0/1209968, in_queue=1209968, util=98.76%
00:23:49.396    nvme9n1: ios=0/21255, merge=0/0, ticks=0/1216265, in_queue=1216265, util=98.80%
00:23:49.396   19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync
00:23:49.396    19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11
00:23:49.396   19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:23:49.396   19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:23:49.396  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:23:49.396   19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1
00:23:49.396   19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0
00:23:49.396   19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:23:49.396   19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK1
00:23:49.396   19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:23:49.396   19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK1
00:23:49.396   19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0
00:23:49.396   19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:23:49.396   19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:49.396   19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:49.396   19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:49.396   19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:23:49.396   19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2
00:23:49.396  NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s)
00:23:49.396   19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2
00:23:49.396   19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0
00:23:49.396   19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:23:49.396   19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK2
00:23:49.396   19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:23:49.396   19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK2
00:23:49.396   19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0
00:23:49.396   19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:23:49.396   19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:49.396   19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:49.396   19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:49.396   19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:23:49.396   19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3
00:23:49.396  NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s)
00:23:49.396   19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3
00:23:49.396   19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0
00:23:49.396   19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:23:49.396   19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK3
00:23:49.396   19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK3
00:23:49.396   19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:23:49.396   19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0
00:23:49.396   19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3
00:23:49.396   19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:49.396   19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:49.396   19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:49.396   19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:23:49.396   19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4
00:23:49.396  NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s)
00:23:49.396   19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4
00:23:49.396   19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0
00:23:49.396   19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:23:49.396   19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK4
00:23:49.396   19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK4
00:23:49.396   19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:23:49.396   19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0
00:23:49.396   19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4
00:23:49.396   19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:49.396   19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:49.396   19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:49.396   19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:23:49.396   19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5
00:23:49.396  NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s)
00:23:49.396   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5
00:23:49.396   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0
00:23:49.396   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:23:49.396   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK5
00:23:49.396   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:23:49.396   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK5
00:23:49.396   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0
00:23:49.396   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5
00:23:49.396   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:49.396   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:49.396   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:49.396   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:23:49.396   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6
00:23:49.396  NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s)
00:23:49.396   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6
00:23:49.396   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0
00:23:49.396   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:23:49.396   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK6
00:23:49.396   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:23:49.396   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK6
00:23:49.396   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0
00:23:49.396   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6
00:23:49.396   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:49.396   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:49.396   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:49.396   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:23:49.396   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7
00:23:49.656  NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s)
00:23:49.656   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7
00:23:49.656   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0
00:23:49.656   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:23:49.656   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK7
00:23:49.656   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:23:49.656   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK7
00:23:49.656   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0
00:23:49.656   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7
00:23:49.656   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:49.656   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:49.656   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:49.656   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:23:49.656   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8
00:23:49.656  NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s)
00:23:49.656   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8
00:23:49.656   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0
00:23:49.656   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:23:49.656   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK8
00:23:49.656   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:23:49.656   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK8
00:23:49.656   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0
00:23:49.656   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8
00:23:49.656   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:49.656   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:49.656   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:49.656   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:23:49.656   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9
00:23:49.915  NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s)
00:23:49.915   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9
00:23:49.915   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0
00:23:49.915   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:23:49.915   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK9
00:23:49.915   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:23:49.915   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK9
00:23:49.915   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0
00:23:49.915   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9
00:23:49.915   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:49.915   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:49.915   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:49.915   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:23:49.915   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10
00:23:49.915  NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s)
00:23:49.915   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10
00:23:49.915   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0
00:23:49.915   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:23:49.915   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK10
00:23:49.915   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:23:49.915   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK10
00:23:49.915   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0
00:23:49.915   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10
00:23:49.915   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:49.915   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:49.915   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:49.915   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:23:49.915   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11
00:23:50.175  NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s)
00:23:50.175   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11
00:23:50.175   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0
00:23:50.175   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:23:50.175   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK11
00:23:50.175   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:23:50.175   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK11
00:23:50.175   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0
00:23:50.175   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11
00:23:50.175   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:50.175   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:50.175   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:50.175   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state
00:23:50.175   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:23:50.175   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini
00:23:50.175   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # nvmfcleanup
00:23:50.175   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync
00:23:50.175   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:23:50.175   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e
00:23:50.175   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20}
00:23:50.175   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:23:50.175  rmmod nvme_tcp
00:23:50.175  rmmod nvme_fabrics
00:23:50.175  rmmod nvme_keyring
00:23:50.175   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:23:50.175   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e
00:23:50.175   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0
00:23:50.175   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@517 -- # '[' -n 104909 ']'
00:23:50.175   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # killprocess 104909
00:23:50.175   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # '[' -z 104909 ']'
00:23:50.175   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # kill -0 104909
00:23:50.175    19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # uname
00:23:50.175   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:23:50.175    19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104909
00:23:50.175   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:23:50.175   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:23:50.175  killing process with pid 104909
00:23:50.175   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104909'
00:23:50.175   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@973 -- # kill 104909
00:23:50.175   19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@978 -- # wait 104909
00:23:50.743   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:23:50.743   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:23:50.743   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:23:50.743   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr
00:23:50.743   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-save
00:23:50.743   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:23:50.743   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-restore
00:23:50.743   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:23:50.743   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:23:50.743   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:23:50.743   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:23:50.743   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:23:50.743   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:23:50.743   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:23:50.743   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:23:50.743   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:23:50.743   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:23:50.743   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:23:50.743   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:23:50.743   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:23:50.743   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:23:51.002   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:23:51.002   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@246 -- # remove_spdk_ns
00:23:51.002   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:51.002   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:23:51.002    19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:51.002   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@300 -- # return 0
00:23:51.002  ************************************
00:23:51.002  END TEST nvmf_multiconnection
00:23:51.002  ************************************
00:23:51.002  
00:23:51.002  real	0m50.140s
00:23:51.002  user	2m56.483s
00:23:51.002  sys	0m19.415s
00:23:51.002   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1130 -- # xtrace_disable
00:23:51.002   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:51.002   19:08:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp
00:23:51.002   19:08:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:23:51.002   19:08:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:23:51.002   19:08:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:23:51.002  ************************************
00:23:51.002  START TEST nvmf_initiator_timeout
00:23:51.002  ************************************
00:23:51.002   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp
00:23:51.002  * Looking for test storage...
00:23:51.002  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:23:51.002    19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:23:51.002     19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # lcov --version
00:23:51.002     19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:23:51.263    19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:23:51.263    19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:23:51.263    19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l
00:23:51.263    19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l
00:23:51.263    19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-:
00:23:51.263    19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1
00:23:51.263    19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-:
00:23:51.263    19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2
00:23:51.263    19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<'
00:23:51.263    19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2
00:23:51.263    19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1
00:23:51.263    19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:23:51.263    19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in
00:23:51.263    19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1
00:23:51.263    19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 ))
00:23:51.263    19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:23:51.263     19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1
00:23:51.263     19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1
00:23:51.263     19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:23:51.263     19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1
00:23:51.263    19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1
00:23:51.263     19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2
00:23:51.263     19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2
00:23:51.263     19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:23:51.263     19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2
00:23:51.263    19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2
00:23:51.263    19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:23:51.263    19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:23:51.263    19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0
00:23:51.263    19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:23:51.263    19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:23:51.263  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:51.263  		--rc genhtml_branch_coverage=1
00:23:51.263  		--rc genhtml_function_coverage=1
00:23:51.263  		--rc genhtml_legend=1
00:23:51.263  		--rc geninfo_all_blocks=1
00:23:51.263  		--rc geninfo_unexecuted_blocks=1
00:23:51.263  		
00:23:51.263  		'
00:23:51.263    19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:23:51.263  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:51.263  		--rc genhtml_branch_coverage=1
00:23:51.263  		--rc genhtml_function_coverage=1
00:23:51.263  		--rc genhtml_legend=1
00:23:51.263  		--rc geninfo_all_blocks=1
00:23:51.263  		--rc geninfo_unexecuted_blocks=1
00:23:51.263  		
00:23:51.263  		'
00:23:51.263    19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:23:51.263  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:51.263  		--rc genhtml_branch_coverage=1
00:23:51.263  		--rc genhtml_function_coverage=1
00:23:51.263  		--rc genhtml_legend=1
00:23:51.263  		--rc geninfo_all_blocks=1
00:23:51.263  		--rc geninfo_unexecuted_blocks=1
00:23:51.263  		
00:23:51.263  		'
00:23:51.263    19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:23:51.263  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:51.263  		--rc genhtml_branch_coverage=1
00:23:51.263  		--rc genhtml_function_coverage=1
00:23:51.263  		--rc genhtml_legend=1
00:23:51.263  		--rc geninfo_all_blocks=1
00:23:51.263  		--rc geninfo_unexecuted_blocks=1
00:23:51.263  		
00:23:51.263  		'
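The cmp_versions trace above splits both version strings on '.', '-' and ':' and compares them field by field to decide that the installed lcov is older than 2, which selects the old-style --rc coverage options. A simplified sketch of that field-wise "less than" comparison (the real scripts/common.sh supports more operators and edge cases):

  # Simplified dotted-version "less than" check, as exercised by 'lt 1.15 2' above.
  version_lt() {
      local IFS=.-:
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < n; i++ )); do
          local x=${a[i]:-0} y=${b[i]:-0}   # missing fields compare as 0
          (( x > y )) && return 1
          (( x < y )) && return 0
      done
      return 1   # equal versions are not "less than"
  }
  version_lt 1.15 2 && echo "lcov < 2: use old-style --rc options"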
00:23:51.263   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:23:51.263     19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s
00:23:51.263    19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:23:51.263    19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:23:51.263    19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:23:51.263    19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:23:51.263    19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:23:51.263    19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:23:51.263    19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:23:51.263    19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:23:51.263    19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:23:51.263     19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:23:51.263    19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:23:51.263    19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:23:51.263    19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:23:51.263    19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:23:51.263    19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:23:51.263    19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:23:51.263    19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:23:51.263     19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob
00:23:51.263     19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:23:51.263     19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:23:51.263     19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:23:51.263      19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:51.263      19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:51.263      19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:51.263      19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH
00:23:51.263      19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:51.263    19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0
00:23:51.263    19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:23:51.263    19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:23:51.263    19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:23:51.263    19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:23:51.263    19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:23:51.263    19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:23:51.263  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:23:51.263    19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:23:51.263    19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:23:51.263    19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0
00:23:51.263   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64
00:23:51.263   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:23:51.263   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit
00:23:51.264   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:23:51.264   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:23:51.264   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # prepare_net_devs
00:23:51.264   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no
00:23:51.264   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns
00:23:51.264   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:51.264   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:23:51.264    19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:51.264   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:23:51.264   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:23:51.264   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:23:51.264   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:23:51.264   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:23:51.264   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init
00:23:51.264   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:23:51.264   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:23:51.264   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:23:51.264   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:23:51.264   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:23:51.264   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:23:51.264   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:23:51.264   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:23:51.264   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:23:51.264   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:23:51.264   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:23:51.264   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:23:51.264   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:23:51.264   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:23:51.264   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:23:51.264   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:23:51.264   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:23:51.264  Cannot find device "nvmf_init_br"
00:23:51.264   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # true
00:23:51.264   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:23:51.264  Cannot find device "nvmf_init_br2"
00:23:51.264   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # true
00:23:51.264   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:23:51.264  Cannot find device "nvmf_tgt_br"
00:23:51.264   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@164 -- # true
00:23:51.264   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:23:51.264  Cannot find device "nvmf_tgt_br2"
00:23:51.264   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@165 -- # true
00:23:51.264   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:23:51.264  Cannot find device "nvmf_init_br"
00:23:51.264   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # true
00:23:51.264   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:23:51.264  Cannot find device "nvmf_init_br2"
00:23:51.264   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@167 -- # true
00:23:51.264   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:23:51.264  Cannot find device "nvmf_tgt_br"
00:23:51.264   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@168 -- # true
00:23:51.264   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:23:51.264  Cannot find device "nvmf_tgt_br2"
00:23:51.264   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # true
00:23:51.264   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:23:51.264  Cannot find device "nvmf_br"
00:23:51.264   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # true
00:23:51.264   19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:23:51.264  Cannot find device "nvmf_init_if"
00:23:51.264   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # true
00:23:51.264   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:23:51.264  Cannot find device "nvmf_init_if2"
00:23:51.264   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@172 -- # true
00:23:51.264   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:23:51.264  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:23:51.264   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@173 -- # true
00:23:51.264   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:23:51.264  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:23:51.264   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # true
00:23:51.264   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:23:51.264   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:23:51.264   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:23:51.264   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:23:51.264   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:23:51.264   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:23:51.264   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:23:51.264   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:23:51.264   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:23:51.264   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:23:51.524   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:23:51.524   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:23:51.524   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:23:51.524   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:23:51.524   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:23:51.524   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:23:51.524   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:23:51.524   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:23:51.524   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:23:51.524   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:23:51.524   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:23:51.524   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:23:51.524   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:23:51.524   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:23:51.524   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:23:51.524   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:23:51.524   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:23:51.524   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:23:51.524   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:23:51.524   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:23:51.524   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:23:51.524   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
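Each ipts call above wraps plain iptables and tags the rule with an '-m comment --comment SPDK_NVMF:<rule>' marker; that marker is what lets the later iptr cleanup drop every SPDK rule in one pass via iptables-save piped through grep -v SPDK_NVMF into iptables-restore. A minimal sketch of the pattern (wrapper name illustrative):

  # Add a rule tagged with an SPDK_NVMF comment so it can be removed wholesale later.
  ipt_add() {
      iptables "$@" -m comment --comment "SPDK_NVMF:$*"
  }
  ipt_add -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

  # Cleanup: rewrite the ruleset without any SPDK_NVMF-tagged rules.
  iptables-save | grep -v SPDK_NVMF | iptables-restore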
00:23:51.524   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:23:51.524  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:23:51.524  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms
00:23:51.524  
00:23:51.524  --- 10.0.0.3 ping statistics ---
00:23:51.524  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:51.524  rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms
00:23:51.524   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:23:51.524  PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:23:51.524  64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms
00:23:51.524  
00:23:51.524  --- 10.0.0.4 ping statistics ---
00:23:51.524  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:51.524  rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms
00:23:51.524   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:23:51.524  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:23:51.524  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms
00:23:51.524  
00:23:51.524  --- 10.0.0.1 ping statistics ---
00:23:51.524  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:51.524  rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms
00:23:51.524   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:23:51.524  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:23:51.524  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms
00:23:51.524  
00:23:51.524  --- 10.0.0.2 ping statistics ---
00:23:51.524  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:51.524  rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms
00:23:51.524   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:23:51.524   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@461 -- # return 0
00:23:51.524   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:23:51.524   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:23:51.524   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:23:51.524   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:23:51.524   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:23:51.524   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:23:51.524   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp
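nvmf_veth_init above builds the whole virtual test network: a namespace for the target, veth pairs toward the initiator and toward the target, a bridge joining the host-side ends, and 10.0.0.1-10.0.0.4/24 addresses, verified by the four pings. Condensed to one initiator/target pair (instead of the two the test creates), the topology can be reproduced with:

  # Minimal reproduction of the test topology: one initiator veth pair, one target
  # veth pair living in its own namespace, joined by a bridge on the host.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

  ip link add nvmf_br type bridge
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up;  ip link set nvmf_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br

  # Sanity check, host -> target namespace (the ACCEPT rules added via ipts above
  # may also be needed when bridge netfilter is active).
  ping -c 1 10.0.0.3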
00:23:51.524   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF
00:23:51.524   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:23:51.524   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable
00:23:51.524   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:23:51.524   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # nvmfpid=106028
00:23:51.524   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:23:51.524   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # waitforlisten 106028
00:23:51.524   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # '[' -z 106028 ']'
00:23:51.524   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:51.524   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # local max_retries=100
00:23:51.524  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:23:51.524   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:23:51.524   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@844 -- # xtrace_disable
00:23:51.524   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:23:51.524  [2024-12-13 19:08:23.339117] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:23:51.524  [2024-12-13 19:08:23.339200] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:23:51.783  [2024-12-13 19:08:23.487683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:23:51.783  [2024-12-13 19:08:23.520208] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:23:51.783  [2024-12-13 19:08:23.520290] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:23:51.783  [2024-12-13 19:08:23.520300] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:23:51.783  [2024-12-13 19:08:23.520307] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:23:51.783  [2024-12-13 19:08:23.520315] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:23:51.783  [2024-12-13 19:08:23.521581] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:23:51.783  [2024-12-13 19:08:23.521677] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:23:51.783  [2024-12-13 19:08:23.521796] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:23:51.783  [2024-12-13 19:08:23.521800] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
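nvmfappstart above launches nvmf_tgt inside the target namespace with all tracepoints enabled (-e 0xFFFF) and a 4-core mask (-m 0xF), then waitforlisten blocks until the app answers on /var/tmp/spdk.sock. Reduced to shell, the pattern is roughly the following; the RPC polling loop here is an assumption about what the wait amounts to, the real waitforlisten helper has its own retry and error handling:

  # Start the target in the namespace and remember its pid.
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!

  # Wait until the RPC socket answers before issuing any configuration RPCs.
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  until $RPC -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done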
00:23:52.042   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:23:52.042   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@868 -- # return 0
00:23:52.042   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:23:52.042   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@732 -- # xtrace_disable
00:23:52.042   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:23:52.042   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:23:52.042   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT
00:23:52.042   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:23:52.042   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:52.042   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:23:52.042  Malloc0
00:23:52.042   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:52.042   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
00:23:52.042   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:52.042   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:23:52.042  Delay0
00:23:52.042   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:52.042   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:23:52.042   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:52.042   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:23:52.042  [2024-12-13 19:08:23.760667] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:23:52.042   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:52.042   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:23:52.042   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:52.043   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:23:52.043   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:52.043   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:23:52.043   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:52.043   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:23:52.043   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:52.043   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:23:52.043   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:52.043   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:23:52.043  [2024-12-13 19:08:23.788912] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:23:52.043   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:52.043   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid=8f716199-a3ae-4f70-9a3a-0556e5b7497a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
00:23:52.301   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME
00:23:52.301   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # local i=0
00:23:52.301   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:23:52.301   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:23:52.301   19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # sleep 2
00:23:54.203   19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:23:54.203    19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:23:54.203    19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:23:54.203   19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:23:54.203   19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:23:54.203   19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # return 0
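Strung together, the RPCs and the connect above amount to: create a Malloc bdev, wrap it in a delay bdev, expose it through a TCP subsystem listening on 10.0.0.3:4420, connect from the initiator, and poll lsblk until a namespace with the expected serial shows up. A condensed sketch using scripts/rpc.py, which is what rpc_cmd drives in these tests (argument values taken from the trace):

  # Target side: build the delayed namespace and its listener.
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30   # small initial delays
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

  # Initiator side: connect, then wait for the namespace to appear by serial.
  # (The test additionally passes --hostnqn/--hostid generated via 'nvme gen-hostnqn'.)
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
  until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 2; done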
00:23:54.203   19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=106097
00:23:54.203   19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v
00:23:54.203   19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3
00:23:54.203  [global]
00:23:54.203  thread=1
00:23:54.203  invalidate=1
00:23:54.203  rw=write
00:23:54.203  time_based=1
00:23:54.203  runtime=60
00:23:54.203  ioengine=libaio
00:23:54.203  direct=1
00:23:54.203  bs=4096
00:23:54.203  iodepth=1
00:23:54.203  norandommap=0
00:23:54.203  numjobs=1
00:23:54.203  
00:23:54.203  verify_dump=1
00:23:54.203  verify_backlog=512
00:23:54.203  verify_state_save=0
00:23:54.203  do_verify=1
00:23:54.203  verify=crc32c-intel
00:23:54.203  [job0]
00:23:54.203  filename=/dev/nvme0n1
00:23:54.462  Could not set queue depth (nvme0n1)
00:23:54.462  job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:23:54.462  fio-3.35
00:23:54.462  Starting 1 thread
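The job file dumped above is what the fio-wrapper produces for '-p nvmf -i 4096 -d 1 -t write -r 60 -v': a single time-based write job with CRC32C verification against /dev/nvme0n1. The same job expressed directly on the fio command line (equivalent in intent, not a byte-for-byte match of the wrapper's output):

  fio --name=job0 --filename=/dev/nvme0n1 \
      --rw=write --bs=4096 --iodepth=1 --numjobs=1 \
      --ioengine=libaio --direct=1 --time_based=1 --runtime=60 \
      --verify=crc32c-intel --do_verify=1 --verify_backlog=512 \
      --verify_dump=1 --verify_state_save=0 --invalidate=1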
00:23:57.749   19:08:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000
00:23:57.749   19:08:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:57.749   19:08:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:23:57.749  true
00:23:57.750   19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:57.750   19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000
00:23:57.750   19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:57.750   19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:23:57.750  true
00:23:57.750   19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:57.750   19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000
00:23:57.750   19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:57.750   19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:23:57.750  true
00:23:57.750   19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:57.750   19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000
00:23:57.750   19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:57.750   19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:23:57.750  true
00:23:57.750   19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:57.750   19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3
00:24:00.311   19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30
00:24:00.311   19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:00.311   19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:24:00.311  true
00:24:00.311   19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:00.311   19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30
00:24:00.311   19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:00.311   19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:24:00.311  true
00:24:00.311   19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:00.311   19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30
00:24:00.311   19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:00.311   19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:24:00.311  true
00:24:00.311   19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:00.311   19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30
00:24:00.311   19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:00.311   19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:24:00.311  true
00:24:00.311   19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
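The eight RPCs above are the core of the initiator-timeout scenario: first the delay bdev's average and p99 read/write latencies are cranked up (31000000 and 310000000, well past the initiator's I/O timeout), then after a pause they are dropped back to 30 so the queued I/O can finally complete and fio can finish cleanly. The same two phases as plain rpc.py calls (values taken from the trace):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Phase 1: make the delay bdev pathologically slow so initiator I/O stalls.
  for lat in avg_read avg_write p99_read; do
      $RPC bdev_delay_update_latency Delay0 $lat 31000000
  done
  $RPC bdev_delay_update_latency Delay0 p99_write 310000000

  # Phase 2: restore small latencies so outstanding I/O can complete.
  for lat in avg_read avg_write p99_read p99_write; do
      $RPC bdev_delay_update_latency Delay0 $lat 30
  done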
00:24:00.311   19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0
00:24:00.311   19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 106097
00:24:56.539  
00:24:56.539  job0: (groupid=0, jobs=1): err= 0: pid=106118: Fri Dec 13 19:09:26 2024
00:24:56.539    read: IOPS=847, BW=3390KiB/s (3472kB/s)(199MiB/60000msec)
00:24:56.539      slat (usec): min=11, max=13169, avg=16.64, stdev=78.33
00:24:56.539      clat (usec): min=148, max=40844k, avg=989.19, stdev=181116.77
00:24:56.539       lat (usec): min=163, max=40844k, avg=1005.82, stdev=181116.84
00:24:56.539      clat percentiles (usec):
00:24:56.539       |  1.00th=[  159],  5.00th=[  163], 10.00th=[  165], 20.00th=[  169],
00:24:56.539       | 30.00th=[  174], 40.00th=[  178], 50.00th=[  182], 60.00th=[  188],
00:24:56.539       | 70.00th=[  192], 80.00th=[  200], 90.00th=[  210], 95.00th=[  223],
00:24:56.539       | 99.00th=[  251], 99.50th=[  273], 99.90th=[  457], 99.95th=[  578],
00:24:56.539       | 99.99th=[ 1090]
00:24:56.539    write: IOPS=853, BW=3413KiB/s (3495kB/s)(200MiB/60000msec); 0 zone resets
00:24:56.539      slat (usec): min=14, max=596, avg=23.00, stdev= 8.80
00:24:56.539      clat (usec): min=104, max=2150, avg=146.80, stdev=26.99
00:24:56.539       lat (usec): min=137, max=2170, avg=169.80, stdev=28.86
00:24:56.539      clat percentiles (usec):
00:24:56.539       |  1.00th=[  125],  5.00th=[  128], 10.00th=[  130], 20.00th=[  133],
00:24:56.539       | 30.00th=[  135], 40.00th=[  139], 50.00th=[  143], 60.00th=[  147],
00:24:56.539       | 70.00th=[  153], 80.00th=[  159], 90.00th=[  169], 95.00th=[  178],
00:24:56.539       | 99.00th=[  204], 99.50th=[  221], 99.90th=[  404], 99.95th=[  545],
00:24:56.539       | 99.99th=[ 1057]
00:24:56.539     bw (  KiB/s): min= 4096, max=12288, per=100.00%, avg=10500.00, stdev=1606.95, samples=38
00:24:56.539     iops        : min= 1024, max= 3072, avg=2625.00, stdev=401.74, samples=38
00:24:56.539    lat (usec)   : 250=99.32%, 500=0.62%, 750=0.03%, 1000=0.01%
00:24:56.539    lat (msec)   : 2=0.01%, 4=0.01%, >=2000=0.01%
00:24:56.539    cpu          : usr=0.60%, sys=2.40%, ctx=102064, majf=0, minf=5
00:24:56.539    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:24:56.539       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:24:56.539       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:24:56.539       issued rwts: total=50855,51200,0,0 short=0,0,0,0 dropped=0,0,0,0
00:24:56.539       latency   : target=0, window=0, percentile=100.00%, depth=1
00:24:56.539  
00:24:56.539  Run status group 0 (all jobs):
00:24:56.539     READ: bw=3390KiB/s (3472kB/s), 3390KiB/s-3390KiB/s (3472kB/s-3472kB/s), io=199MiB (208MB), run=60000-60000msec
00:24:56.539    WRITE: bw=3413KiB/s (3495kB/s), 3413KiB/s-3413KiB/s (3495kB/s-3495kB/s), io=200MiB (210MB), run=60000-60000msec
00:24:56.539  
00:24:56.539  Disk stats (read/write):
00:24:56.539    nvme0n1: ios=51033/50776, merge=0/0, ticks=9752/8019, in_queue=17771, util=99.61%
00:24:56.539   19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:24:56.539  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:24:56.539   19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:24:56.539   19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # local i=0
00:24:56.539   19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:24:56.539   19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:24:56.539   19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:24:56.539   19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:24:56.539   19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1235 -- # return 0
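Teardown on the initiator side mirrors the connect: disconnect the controller by NQN, then poll lsblk until no block device with the test serial remains. A compact sketch of that wait-for-disconnect loop (the real waitforserial_disconnect helper bounds the wait and reports a failure if the device never disappears):

  nvme disconnect -n nqn.2016-06.io.spdk:cnode1

  # Wait (bounded) until the namespace with our serial is gone from lsblk.
  for i in $(seq 1 15); do
      lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME || break
      sleep 1
  done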
00:24:56.539   19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']'
00:24:56.539  nvmf hotplug test: fio successful as expected
00:24:56.539   19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected'
00:24:56.539   19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:24:56.539   19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:56.539   19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:24:56.539   19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:56.539   19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state
00:24:56.539   19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT
00:24:56.539   19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini
00:24:56.539   19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # nvmfcleanup
00:24:56.539   19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync
00:24:56.539   19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:24:56.539   19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e
00:24:56.539   19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20}
00:24:56.539   19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:24:56.539  rmmod nvme_tcp
00:24:56.539  rmmod nvme_fabrics
00:24:56.539  rmmod nvme_keyring
00:24:56.539   19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:24:56.539   19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e
00:24:56.539   19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0
00:24:56.539   19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@517 -- # '[' -n 106028 ']'
00:24:56.539   19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # killprocess 106028
00:24:56.539   19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # '[' -z 106028 ']'
00:24:56.539   19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # kill -0 106028
00:24:56.539    19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # uname
00:24:56.539   19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:24:56.539    19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 106028
00:24:56.539   19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:24:56.539   19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:24:56.539  killing process with pid 106028
00:24:56.540   19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 106028'
00:24:56.540   19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@973 -- # kill 106028
00:24:56.540   19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@978 -- # wait 106028
00:24:56.540   19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:24:56.540   19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:24:56.540   19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:24:56.540   19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr
00:24:56.540   19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-save
00:24:56.540   19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:24:56.540   19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-restore
00:24:56.540   19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:24:56.540   19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:24:56.540   19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:24:56.540   19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:24:56.540   19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:24:56.540   19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:24:56.540   19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:24:56.540   19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:24:56.540   19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:24:56.540   19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:24:56.540   19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:24:56.540   19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:24:56.540   19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:24:56.540   19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:24:56.540   19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:24:56.540   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns
00:24:56.540   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:56.540   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:24:56.540    19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:56.540   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@300 -- # return 0
00:24:56.540  
00:24:56.540  real	1m4.371s
00:24:56.540  user	4m3.589s
00:24:56.540  sys	0m9.418s
00:24:56.540   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable
00:24:56.540   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:24:56.540  ************************************
00:24:56.540  END TEST nvmf_initiator_timeout
00:24:56.540  ************************************
00:24:56.540   19:09:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]]
00:24:56.540   19:09:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp
00:24:56.540   19:09:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:24:56.540   19:09:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:24:56.540   19:09:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:24:56.540  ************************************
00:24:56.540  START TEST nvmf_nsid
00:24:56.540  ************************************
00:24:56.540   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp
00:24:56.540  * Looking for test storage...
00:24:56.540  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:24:56.540    19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:24:56.540     19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version
00:24:56.540     19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:24:56.540    19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:24:56.540    19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:24:56.540    19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l
00:24:56.540    19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l
00:24:56.540    19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-:
00:24:56.540    19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1
00:24:56.540    19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-:
00:24:56.540    19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2
00:24:56.540    19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<'
00:24:56.540    19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2
00:24:56.540    19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1
00:24:56.540    19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:24:56.540    19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in
00:24:56.540    19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1
00:24:56.540    19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 ))
00:24:56.540    19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:24:56.540     19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1
00:24:56.540     19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1
00:24:56.540     19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:24:56.540     19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1
00:24:56.540    19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1
00:24:56.540     19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2
00:24:56.540     19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2
00:24:56.540     19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:24:56.540     19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2
00:24:56.540    19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2
00:24:56.540    19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:24:56.540    19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:24:56.540    19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0
00:24:56.540    19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:24:56.540    19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:24:56.540  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:56.540  		--rc genhtml_branch_coverage=1
00:24:56.540  		--rc genhtml_function_coverage=1
00:24:56.540  		--rc genhtml_legend=1
00:24:56.540  		--rc geninfo_all_blocks=1
00:24:56.540  		--rc geninfo_unexecuted_blocks=1
00:24:56.540  		
00:24:56.540  		'
00:24:56.540    19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:24:56.540  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:56.540  		--rc genhtml_branch_coverage=1
00:24:56.540  		--rc genhtml_function_coverage=1
00:24:56.540  		--rc genhtml_legend=1
00:24:56.540  		--rc geninfo_all_blocks=1
00:24:56.540  		--rc geninfo_unexecuted_blocks=1
00:24:56.540  		
00:24:56.540  		'
00:24:56.540    19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:24:56.540  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:56.540  		--rc genhtml_branch_coverage=1
00:24:56.540  		--rc genhtml_function_coverage=1
00:24:56.540  		--rc genhtml_legend=1
00:24:56.540  		--rc geninfo_all_blocks=1
00:24:56.540  		--rc geninfo_unexecuted_blocks=1
00:24:56.540  		
00:24:56.540  		'
00:24:56.540    19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:24:56.540  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:56.540  		--rc genhtml_branch_coverage=1
00:24:56.540  		--rc genhtml_function_coverage=1
00:24:56.540  		--rc genhtml_legend=1
00:24:56.540  		--rc geninfo_all_blocks=1
00:24:56.540  		--rc geninfo_unexecuted_blocks=1
00:24:56.540  		
00:24:56.540  		'
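(Editor's sketch, not part of the captured output.) The cmp_versions trace above is an element-wise numeric comparison of dotted version strings: here lcov 1.15 is found to be older than 2, which selects the long-form --rc coverage options. A condensed stand-alone sketch of that check, assuming purely numeric version components; the function name is illustrative, not the helper from scripts/common.sh:

    version_lt() {                            # succeeds if $1 is an older version than $2
        local -a a b
        IFS=.-: read -ra a <<< "$1"
        IFS=.-: read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                              # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "lcov older than 2.x: use the long-form --rc options"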
00:24:56.540   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:24:56.540     19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s
00:24:56.540    19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:24:56.540    19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:24:56.540    19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:24:56.540    19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:24:56.540    19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:24:56.540    19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:24:56.540    19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:24:56.540    19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:24:56.540    19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:24:56.541     19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:24:56.541    19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:24:56.541    19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:24:56.541    19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:24:56.541    19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:24:56.541    19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:24:56.541    19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:24:56.541    19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:24:56.541     19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob
00:24:56.541     19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:24:56.541     19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:24:56.541     19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:24:56.541      19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:56.541      19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:56.541      19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:56.541      19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH
00:24:56.541      19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:56.541    19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0
00:24:56.541    19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:24:56.541    19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:24:56.541    19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:24:56.541    19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:24:56.541    19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:24:56.541    19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:24:56.541  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:24:56.541    19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:24:56.541    19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:24:56.541    19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0
00:24:56.541   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0
00:24:56.541   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1
00:24:56.541   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2
00:24:56.541   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock
00:24:56.541   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid=
00:24:56.541   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit
00:24:56.541   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:24:56.541   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:24:56.541   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs
00:24:56.541   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no
00:24:56.541   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns
00:24:56.541   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:56.541   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:24:56.541    19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:56.541   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:24:56.541   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:24:56.541   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:24:56.541   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:24:56.541   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:24:56.541   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@460 -- # nvmf_veth_init
00:24:56.541   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:24:56.541   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:24:56.541   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:24:56.541   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:24:56.541   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:24:56.541   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:24:56.541   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:24:56.541   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:24:56.541   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:24:56.541   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:24:56.541   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:24:56.541   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:24:56.541   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:24:56.541   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:24:56.541   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:24:56.541   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:24:56.541   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:24:56.541  Cannot find device "nvmf_init_br"
00:24:56.541   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # true
00:24:56.541   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:24:56.541  Cannot find device "nvmf_init_br2"
00:24:56.541   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # true
00:24:56.541   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:24:56.541  Cannot find device "nvmf_tgt_br"
00:24:56.541   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # true
00:24:56.541   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:24:56.541  Cannot find device "nvmf_tgt_br2"
00:24:56.541   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # true
00:24:56.541   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:24:56.541  Cannot find device "nvmf_init_br"
00:24:56.541   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # true
00:24:56.541   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:24:56.541  Cannot find device "nvmf_init_br2"
00:24:56.541   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # true
00:24:56.541   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:24:56.541  Cannot find device "nvmf_tgt_br"
00:24:56.541   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # true
00:24:56.541   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:24:56.541  Cannot find device "nvmf_tgt_br2"
00:24:56.541   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # true
00:24:56.541   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:24:56.541  Cannot find device "nvmf_br"
00:24:56.541   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # true
00:24:56.541   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:24:56.541  Cannot find device "nvmf_init_if"
00:24:56.541   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # true
00:24:56.541   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:24:56.541  Cannot find device "nvmf_init_if2"
00:24:56.541   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # true
00:24:56.541   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:24:56.541  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:24:56.541   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # true
00:24:56.541   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:24:56.542  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:24:56.542   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # true
00:24:56.542   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:24:56.542   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:24:56.542   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:24:56.542   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:24:56.542   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:24:56.542   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:24:56.542   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:24:56.542   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:24:56.542   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:24:56.542   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:24:56.542   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:24:56.542   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:24:56.542   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:24:56.542   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:24:56.542   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:24:56.542   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:24:56.542   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:24:56.542   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:24:56.542   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:24:56.542   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:24:56.542   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:24:56.542   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:24:56.542   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:24:56.542   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:24:56.542   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:24:56.542   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:24:56.542   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:24:56.542   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:24:56.542   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:24:56.542   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:24:56.542   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:24:56.542   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:24:56.542   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:24:56.542  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:24:56.542  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms
00:24:56.542  
00:24:56.542  --- 10.0.0.3 ping statistics ---
00:24:56.542  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:56.542  rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms
00:24:56.542   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:24:56.542  PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:24:56.542  64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms
00:24:56.542  
00:24:56.542  --- 10.0.0.4 ping statistics ---
00:24:56.542  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:56.542  rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms
00:24:56.542   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:24:56.542  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:56.542  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms
00:24:56.542  
00:24:56.542  --- 10.0.0.1 ping statistics ---
00:24:56.542  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:56.542  rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms
00:24:56.542   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:24:56.542  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:56.542  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms
00:24:56.542  
00:24:56.542  --- 10.0.0.2 ping statistics ---
00:24:56.542  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:56.542  rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms
00:24:56.542   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:56.542   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@461 -- # return 0
00:24:56.542   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:24:56.542   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:56.542   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:24:56.542   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:24:56.542   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:56.542   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:24:56.542   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp
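(Editor's sketch, not part of the captured output.) A condensed recreation of the veth/bridge/namespace topology that nvmf_veth_init builds in the trace above, kept to one interface pair per side for brevity; the real helper in test/nvmf/common.sh also wires up nvmf_init_if2/nvmf_tgt_if2, adds the second subnet addresses, and tags its iptables rules with SPDK_NVMF comments:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator pair stays in the root namespace
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target pair; one end moves into the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                     # the bridge joins the two root-namespace peers
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3                                          # initiator -> in-namespace target sanity check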
00:24:56.542   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1
00:24:56.542   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:24:56.542   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable
00:24:56.542   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x
00:24:56.542   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=106973
00:24:56.542   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1
00:24:56.542   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 106973
00:24:56.542   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 106973 ']'
00:24:56.542   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:56.542   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100
00:24:56.542  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:56.542   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:56.542   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable
00:24:56.542   19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x
00:24:56.542  [2024-12-13 19:09:27.762266] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:24:56.542  [2024-12-13 19:09:27.762357] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:24:56.542  [2024-12-13 19:09:27.918093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:56.542  [2024-12-13 19:09:27.955408] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:24:56.542  [2024-12-13 19:09:27.955484] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:24:56.542  [2024-12-13 19:09:27.955499] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:24:56.542  [2024-12-13 19:09:27.955510] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:24:56.542  [2024-12-13 19:09:27.955519] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:24:56.542  [2024-12-13 19:09:27.955966] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:24:56.542   19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:24:56.542   19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0
00:24:56.542   19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:24:56.542   19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable
00:24:56.542   19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x
00:24:56.542   19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
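(Editor's sketch, not part of the captured output.) The nvmfappstart trace above reduces to launching nvmf_tgt inside the namespace and polling its RPC socket until it answers. A rough sketch under the same paths; the retry count and the use of rpc_get_methods as the probe are assumptions, not the exact waitforlisten implementation:

    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 &
    nvmfpid=$!
    for (( i = 0; i < 100; i++ )); do
        # any successful RPC on the default socket means the target is up and listening
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        sleep 0.5
    done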
00:24:56.542   19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT
00:24:56.542   19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=107002
00:24:56.542   19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.3
00:24:56.542   19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock
00:24:56.542    19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip
00:24:56.543    19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip
00:24:56.543    19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:56.543    19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:56.543    19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:56.543    19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:56.543    19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:56.543    19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:56.543    19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:56.543    19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:56.543    19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:56.543   19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1
00:24:56.543    19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen
00:24:56.543   19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=c58ccb8f-7393-489c-ba54-065150109f78
00:24:56.543    19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen
00:24:56.543   19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=16bbde46-119b-496c-9ad1-7c463d7d1b9d
00:24:56.543    19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen
00:24:56.543   19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=4a52962b-57ee-45c4-ada8-e21ad510f808
00:24:56.543   19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd
00:24:56.543   19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:56.543   19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x
00:24:56.543  null0
00:24:56.543  null1
00:24:56.543  null2
00:24:56.543  [2024-12-13 19:09:28.187745] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:24:56.543  [2024-12-13 19:09:28.203275] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:24:56.543  [2024-12-13 19:09:28.203357] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107002 ]
00:24:56.543  [2024-12-13 19:09:28.211890] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:24:56.543  Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...
00:24:56.543   19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:56.543   19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 107002 /var/tmp/tgt2.sock
00:24:56.543   19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 107002 ']'
00:24:56.543   19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock
00:24:56.543   19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100
00:24:56.543   19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...'
00:24:56.543   19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable
00:24:56.543   19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x
00:24:56.543  [2024-12-13 19:09:28.355960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:56.802  [2024-12-13 19:09:28.409546] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:24:57.060   19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:24:57.060   19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0
00:24:57.060   19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock
00:24:57.626  [2024-12-13 19:09:29.184658] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:24:57.626  [2024-12-13 19:09:29.200739] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 ***
00:24:57.626  nvme0n1 nvme0n2
00:24:57.626  nvme1n1
00:24:57.626    19:09:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect
00:24:57.626    19:09:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr
00:24:57.626    19:09:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:24:57.626    19:09:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme*
00:24:57.626    19:09:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]]
00:24:57.626    19:09:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]]
00:24:57.626    19:09:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0
00:24:57.626    19:09:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0
00:24:57.626   19:09:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0
00:24:57.626   19:09:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1
00:24:57.626   19:09:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0
00:24:57.626   19:09:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1
00:24:57.626   19:09:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME
00:24:57.626   19:09:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']'
00:24:57.626   19:09:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1
00:24:57.627   19:09:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1
00:24:59.004   19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME
00:24:59.004   19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1
00:24:59.004   19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME
00:24:59.004   19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1
00:24:59.005   19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0
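(Editor's sketch, not part of the captured output.) The waitforblk helper traced above is a bounded poll for the namespace's block device to appear after nvme connect; the log shows one retry before nvme0n1 becomes visible in lsblk. A condensed sketch of the same idea, where the 15-attempt limit matches the trace and the rest is illustrative:

    waitforblk() {
        local name=$1 i=0
        while (( i < 15 )); do
            lsblk -l -o NAME | grep -q -w "$name" && return 0   # device node is visible
            (( ++i ))
            sleep 1
        done
        return 1                                                # gave up waiting
    }
    waitforblk nvme0n1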
00:24:59.005    19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid c58ccb8f-7393-489c-ba54-065150109f78
00:24:59.005    19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d -
00:24:59.005    19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1
00:24:59.005    19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid
00:24:59.005     19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json
00:24:59.005     19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid
00:24:59.005    19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=c58ccb8f7393489cba54065150109f78
00:24:59.005    19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo C58CCB8F7393489CBA54065150109F78
00:24:59.005   19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ C58CCB8F7393489CBA54065150109F78 == \C\5\8\C\C\B\8\F\7\3\9\3\4\8\9\C\B\A\5\4\0\6\5\1\5\0\1\0\9\F\7\8 ]]
00:24:59.005   19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2
00:24:59.005   19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0
00:24:59.005   19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2
00:24:59.005   19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME
00:24:59.005   19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME
00:24:59.005   19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2
00:24:59.005   19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0
00:24:59.005    19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 16bbde46-119b-496c-9ad1-7c463d7d1b9d
00:24:59.005    19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d -
00:24:59.005    19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2
00:24:59.005    19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid
00:24:59.005     19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json
00:24:59.005     19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid
00:24:59.005    19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=16bbde46119b496c9ad17c463d7d1b9d
00:24:59.005    19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 16BBDE46119B496C9AD17C463D7D1B9D
00:24:59.005   19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 16BBDE46119B496C9AD17C463D7D1B9D == \1\6\B\B\D\E\4\6\1\1\9\B\4\9\6\C\9\A\D\1\7\C\4\6\3\D\7\D\1\B\9\D ]]
00:24:59.005   19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3
00:24:59.005   19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0
00:24:59.005   19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME
00:24:59.005   19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3
00:24:59.005   19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME
00:24:59.005   19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3
00:24:59.005   19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0
00:24:59.005    19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 4a52962b-57ee-45c4-ada8-e21ad510f808
00:24:59.005    19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d -
00:24:59.005    19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3
00:24:59.005    19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid
00:24:59.005     19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid
00:24:59.005     19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json
00:24:59.005    19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=4a52962b57ee45c4ada8e21ad510f808
00:24:59.005    19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 4A52962B57EE45C4ADA8E21AD510F808
00:24:59.005   19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 4A52962B57EE45C4ADA8E21AD510F808 == \4\A\5\2\9\6\2\B\5\7\E\E\4\5\C\4\A\D\A\8\E\2\1\A\D\5\1\0\F\8\0\8 ]]
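(Editor's sketch, not part of the captured output.) The three checks above establish that each SPDK namespace was created with an explicit UUID and that the NGUID it reports through nvme id-ns is that UUID with the dashes removed, compared case-insensitively. A minimal stand-alone sketch of one such check, using the same nvme-cli, jq, and tr calls seen in the trace:

    uuid=c58ccb8f-7393-489c-ba54-065150109f78                   # UUID the namespace was created with
    expected=$(tr -d '-' <<< "$uuid")
    actual=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid)    # NGUID as reported by the target
    [[ "${actual^^}" == "${expected^^}" ]] && echo "NGUID matches the namespace UUID"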
00:24:59.005   19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0
00:24:59.005   19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT
00:24:59.005   19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup
00:24:59.005   19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 107002
00:24:59.005   19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 107002 ']'
00:24:59.005   19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 107002
00:24:59.005    19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname
00:24:59.005   19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:24:59.005    19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 107002
00:24:59.264  killing process with pid 107002
00:24:59.264   19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:24:59.264   19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:24:59.264   19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 107002'
00:24:59.264   19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 107002
00:24:59.264   19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 107002
00:24:59.832   19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini
00:24:59.832   19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup
00:24:59.832   19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync
00:24:59.832   19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:24:59.832   19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e
00:24:59.832   19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20}
00:24:59.832   19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:24:59.832  rmmod nvme_tcp
00:24:59.832  rmmod nvme_fabrics
00:24:59.832  rmmod nvme_keyring
00:24:59.832   19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:24:59.832   19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e
00:24:59.832   19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0
00:24:59.832   19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 106973 ']'
00:24:59.832   19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 106973
00:24:59.832   19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 106973 ']'
00:24:59.832   19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 106973
00:24:59.832    19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname
00:24:59.832   19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:24:59.832    19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 106973
00:24:59.832  killing process with pid 106973
00:24:59.832   19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:24:59.832   19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:24:59.832   19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 106973'
00:24:59.832   19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 106973
00:24:59.832   19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 106973
00:25:00.092   19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:25:00.092   19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:25:00.092   19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:25:00.092   19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr
00:25:00.092   19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save
00:25:00.092   19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:25:00.092   19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore
00:25:00.092   19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:25:00.092   19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:25:00.092   19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:25:00.092   19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:25:00.092   19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:25:00.092   19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:25:00.092   19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:25:00.092   19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:25:00.092   19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:25:00.092   19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:25:00.092   19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:25:00.092   19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:25:00.092   19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:25:00.092   19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:25:00.351   19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:25:00.351   19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@246 -- # remove_spdk_ns
00:25:00.351   19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:00.351   19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:25:00.351    19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:00.351   19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@300 -- # return 0
00:25:00.351  
00:25:00.351  real	0m4.869s
00:25:00.351  user	0m7.620s
00:25:00.351  sys	0m1.448s
00:25:00.351   19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable
00:25:00.351   19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x
00:25:00.351  ************************************
00:25:00.351  END TEST nvmf_nsid
00:25:00.351  ************************************
00:25:00.351   19:09:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:25:00.351  ************************************
00:25:00.351  END TEST nvmf_target_extra
00:25:00.351  ************************************
00:25:00.351  
00:25:00.351  real	13m33.355s
00:25:00.351  user	41m36.921s
00:25:00.351  sys	2m21.691s
00:25:00.351   19:09:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable
00:25:00.351   19:09:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:25:00.351   19:09:32 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp
00:25:00.351   19:09:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:25:00.351   19:09:32 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable
00:25:00.351   19:09:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:25:00.351  ************************************
00:25:00.351  START TEST nvmf_host
00:25:00.351  ************************************
00:25:00.352   19:09:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp
00:25:00.352  * Looking for test storage...
00:25:00.352  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf
00:25:00.352    19:09:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:25:00.352     19:09:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version
00:25:00.352     19:09:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:25:00.611    19:09:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:25:00.611    19:09:32 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:25:00.611    19:09:32 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l
00:25:00.611    19:09:32 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l
00:25:00.611    19:09:32 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-:
00:25:00.611    19:09:32 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1
00:25:00.611    19:09:32 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-:
00:25:00.611    19:09:32 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2
00:25:00.611    19:09:32 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<'
00:25:00.611    19:09:32 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2
00:25:00.611    19:09:32 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1
00:25:00.611    19:09:32 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:25:00.611    19:09:32 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in
00:25:00.611    19:09:32 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1
00:25:00.611    19:09:32 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 ))
00:25:00.611    19:09:32 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:25:00.611     19:09:32 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1
00:25:00.611     19:09:32 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1
00:25:00.611     19:09:32 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:25:00.611     19:09:32 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1
00:25:00.611    19:09:32 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1
00:25:00.611     19:09:32 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2
00:25:00.611     19:09:32 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2
00:25:00.611     19:09:32 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:25:00.611     19:09:32 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2
00:25:00.611    19:09:32 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2
00:25:00.611    19:09:32 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:25:00.611    19:09:32 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:25:00.611    19:09:32 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0
00:25:00.611    19:09:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:25:00.611    19:09:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:25:00.611  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:00.611  		--rc genhtml_branch_coverage=1
00:25:00.611  		--rc genhtml_function_coverage=1
00:25:00.611  		--rc genhtml_legend=1
00:25:00.611  		--rc geninfo_all_blocks=1
00:25:00.611  		--rc geninfo_unexecuted_blocks=1
00:25:00.611  		
00:25:00.611  		'
00:25:00.611    19:09:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:25:00.611  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:00.611  		--rc genhtml_branch_coverage=1
00:25:00.611  		--rc genhtml_function_coverage=1
00:25:00.611  		--rc genhtml_legend=1
00:25:00.611  		--rc geninfo_all_blocks=1
00:25:00.611  		--rc geninfo_unexecuted_blocks=1
00:25:00.611  		
00:25:00.611  		'
00:25:00.611    19:09:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:25:00.611  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:00.611  		--rc genhtml_branch_coverage=1
00:25:00.611  		--rc genhtml_function_coverage=1
00:25:00.611  		--rc genhtml_legend=1
00:25:00.611  		--rc geninfo_all_blocks=1
00:25:00.611  		--rc geninfo_unexecuted_blocks=1
00:25:00.611  		
00:25:00.611  		'
00:25:00.611    19:09:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:25:00.611  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:00.611  		--rc genhtml_branch_coverage=1
00:25:00.611  		--rc genhtml_function_coverage=1
00:25:00.611  		--rc genhtml_legend=1
00:25:00.611  		--rc geninfo_all_blocks=1
00:25:00.611  		--rc geninfo_unexecuted_blocks=1
00:25:00.611  		
00:25:00.611  		'
00:25:00.611   19:09:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:25:00.611     19:09:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s
00:25:00.611    19:09:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:25:00.611    19:09:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:25:00.611    19:09:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:25:00.611    19:09:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:25:00.611    19:09:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:25:00.611    19:09:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:25:00.611    19:09:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:25:00.611    19:09:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:25:00.611    19:09:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:25:00.611     19:09:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:25:00.611    19:09:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:25:00.611    19:09:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:25:00.611    19:09:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:25:00.611    19:09:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:25:00.611    19:09:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:25:00.611    19:09:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:25:00.611    19:09:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:25:00.611     19:09:32 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob
00:25:00.611     19:09:32 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:25:00.611     19:09:32 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:25:00.611     19:09:32 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:25:00.611      19:09:32 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:00.611      19:09:32 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:00.611      19:09:32 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:00.611      19:09:32 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH
00:25:00.612      19:09:32 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:00.612    19:09:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0
00:25:00.612    19:09:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:25:00.612    19:09:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:25:00.612    19:09:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:25:00.612    19:09:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:25:00.612    19:09:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:25:00.612    19:09:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:25:00.612  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:25:00.612    19:09:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:25:00.612    19:09:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:25:00.612    19:09:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0
00:25:00.612   19:09:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT
00:25:00.612   19:09:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@")
00:25:00.612   19:09:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]]
00:25:00.612   19:09:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:25:00.612   19:09:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:25:00.612   19:09:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:25:00.612   19:09:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:25:00.612  ************************************
00:25:00.612  START TEST nvmf_multicontroller
00:25:00.612  ************************************
00:25:00.612   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:25:00.612  * Looking for test storage...
00:25:00.612  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
00:25:00.612    19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:25:00.612     19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version
00:25:00.612     19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:25:00.872    19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:25:00.872    19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:25:00.872    19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l
00:25:00.872    19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l
00:25:00.872    19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-:
00:25:00.872    19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1
00:25:00.872    19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-:
00:25:00.872    19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2
00:25:00.872    19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<'
00:25:00.872    19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2
00:25:00.872    19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1
00:25:00.872    19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:25:00.872    19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in
00:25:00.872    19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1
00:25:00.872    19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 ))
00:25:00.872    19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:25:00.872     19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1
00:25:00.872     19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1
00:25:00.872     19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:25:00.872     19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1
00:25:00.872    19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1
00:25:00.872     19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2
00:25:00.872     19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2
00:25:00.872     19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:25:00.872     19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2
00:25:00.872    19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2
00:25:00.872    19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:25:00.872    19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:25:00.872    19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0
00:25:00.872    19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:25:00.872    19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:25:00.872  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:00.872  		--rc genhtml_branch_coverage=1
00:25:00.872  		--rc genhtml_function_coverage=1
00:25:00.872  		--rc genhtml_legend=1
00:25:00.872  		--rc geninfo_all_blocks=1
00:25:00.872  		--rc geninfo_unexecuted_blocks=1
00:25:00.872  		
00:25:00.872  		'
00:25:00.872    19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:25:00.872  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:00.872  		--rc genhtml_branch_coverage=1
00:25:00.872  		--rc genhtml_function_coverage=1
00:25:00.872  		--rc genhtml_legend=1
00:25:00.872  		--rc geninfo_all_blocks=1
00:25:00.872  		--rc geninfo_unexecuted_blocks=1
00:25:00.872  		
00:25:00.872  		'
00:25:00.872    19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:25:00.872  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:00.872  		--rc genhtml_branch_coverage=1
00:25:00.872  		--rc genhtml_function_coverage=1
00:25:00.872  		--rc genhtml_legend=1
00:25:00.872  		--rc geninfo_all_blocks=1
00:25:00.872  		--rc geninfo_unexecuted_blocks=1
00:25:00.872  		
00:25:00.872  		'
00:25:00.872    19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:25:00.872  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:00.872  		--rc genhtml_branch_coverage=1
00:25:00.872  		--rc genhtml_function_coverage=1
00:25:00.872  		--rc genhtml_legend=1
00:25:00.872  		--rc geninfo_all_blocks=1
00:25:00.872  		--rc geninfo_unexecuted_blocks=1
00:25:00.872  		
00:25:00.872  		'
00:25:00.872   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:25:00.872     19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s
00:25:00.872    19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:25:00.872    19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:25:00.872    19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:25:00.872    19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:25:00.872    19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:25:00.872    19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:25:00.872    19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:25:00.872    19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:25:00.872    19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:25:00.872     19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:25:00.872    19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:25:00.872    19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:25:00.872    19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:25:00.872    19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:25:00.872    19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:25:00.872    19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:25:00.872    19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:25:00.872     19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob
00:25:00.872     19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:25:00.872     19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:25:00.872     19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:25:00.872      19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:00.872      19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:00.872      19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:00.872      19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH
00:25:00.872      19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:00.872    19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0
00:25:00.872    19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:25:00.872    19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:25:00.872    19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:25:00.872    19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:25:00.872    19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:25:00.872    19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:25:00.872  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:25:00.872    19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:25:00.872    19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:25:00.872    19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0
00:25:00.872   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64
00:25:00.872   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:25:00.872   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000
00:25:00.872   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001
00:25:00.872   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:25:00.872   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']'
00:25:00.873   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit
00:25:00.873   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:25:00.873   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:25:00.873   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs
00:25:00.873   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no
00:25:00.873   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns
00:25:00.873   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:00.873   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:25:00.873    19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:00.873   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:25:00.873   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:25:00.873   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:25:00.873   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:25:00.873   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:25:00.873   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@460 -- # nvmf_veth_init
00:25:00.873   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:25:00.873   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:25:00.873   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:25:00.873   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:25:00.873   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:25:00.873   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:25:00.873   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:25:00.873   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:25:00.873   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:25:00.873   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:25:00.873   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:25:00.873   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:25:00.873   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:25:00.873   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:25:00.873   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:25:00.873   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:25:00.873   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:25:00.873  Cannot find device "nvmf_init_br"
00:25:00.873   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@162 -- # true
00:25:00.873   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:25:00.873  Cannot find device "nvmf_init_br2"
00:25:00.873   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@163 -- # true
00:25:00.873   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:25:00.873  Cannot find device "nvmf_tgt_br"
00:25:00.873   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@164 -- # true
00:25:00.873   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:25:00.873  Cannot find device "nvmf_tgt_br2"
00:25:00.873   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@165 -- # true
00:25:00.873   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:25:00.873  Cannot find device "nvmf_init_br"
00:25:00.873   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@166 -- # true
00:25:00.873   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:25:00.873  Cannot find device "nvmf_init_br2"
00:25:00.873   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@167 -- # true
00:25:00.873   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:25:00.873  Cannot find device "nvmf_tgt_br"
00:25:00.873   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@168 -- # true
00:25:00.873   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:25:00.873  Cannot find device "nvmf_tgt_br2"
00:25:00.873   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@169 -- # true
00:25:00.873   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:25:00.873  Cannot find device "nvmf_br"
00:25:00.873   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@170 -- # true
00:25:00.873   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:25:00.873  Cannot find device "nvmf_init_if"
00:25:00.873   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@171 -- # true
00:25:00.873   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:25:00.873  Cannot find device "nvmf_init_if2"
00:25:00.873   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@172 -- # true
00:25:00.873   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:25:00.873  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:25:00.873   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@173 -- # true
00:25:00.873   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:25:00.873  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:25:00.873   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@174 -- # true
00:25:00.873   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:25:00.873   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:25:00.873   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:25:00.873   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:25:00.873   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:25:00.873   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:25:01.132   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:25:01.132   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:25:01.132   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:25:01.132   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:25:01.132   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:25:01.132   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:25:01.133   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:25:01.133   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:25:01.133   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:25:01.133   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:25:01.133   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:25:01.133   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:25:01.133   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:25:01.133   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:25:01.133   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:25:01.133   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:25:01.133   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:25:01.133   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:25:01.133   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:25:01.133   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:25:01.133   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:25:01.133   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:25:01.133   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:25:01.133   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:25:01.133   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:25:01.133   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:25:01.133   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:25:01.133  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:25:01.133  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms
00:25:01.133  
00:25:01.133  --- 10.0.0.3 ping statistics ---
00:25:01.133  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:01.133  rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms
00:25:01.133   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:25:01.133  PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:25:01.133  64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms
00:25:01.133  
00:25:01.133  --- 10.0.0.4 ping statistics ---
00:25:01.133  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:01.133  rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms
00:25:01.133   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:25:01.133  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:25:01.133  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms
00:25:01.133  
00:25:01.133  --- 10.0.0.1 ping statistics ---
00:25:01.133  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:01.133  rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms
00:25:01.133   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:25:01.133  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:25:01.133  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms
00:25:01.133  
00:25:01.133  --- 10.0.0.2 ping statistics ---
00:25:01.133  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:01.133  rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms
00:25:01.133   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:25:01.133   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@461 -- # return 0
00:25:01.133   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:25:01.133   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:25:01.133   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:25:01.133   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:25:01.133   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:25:01.133   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:25:01.133   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:25:01.133   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE
00:25:01.133   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:25:01.133   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable
00:25:01.133   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:25:01.133   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=107382
00:25:01.133   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 107382
00:25:01.133   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:25:01.133   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 107382 ']'
00:25:01.133   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:01.133  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:01.133   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100
00:25:01.133   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:01.133   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable
00:25:01.133   19:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:25:01.392  [2024-12-13 19:09:32.982999] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:25:01.392  [2024-12-13 19:09:32.983090] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:25:01.392  [2024-12-13 19:09:33.141436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:25:01.392  [2024-12-13 19:09:33.204449] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:25:01.392  [2024-12-13 19:09:33.204530] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:25:01.392  [2024-12-13 19:09:33.204544] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:25:01.392  [2024-12-13 19:09:33.204555] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:25:01.392  [2024-12-13 19:09:33.204569] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:25:01.392  [2024-12-13 19:09:33.206180] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:25:01.393  [2024-12-13 19:09:33.206322] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:25:01.393  [2024-12-13 19:09:33.206336] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:25:02.330   19:09:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:25:02.330   19:09:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0
00:25:02.330   19:09:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:25:02.330   19:09:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable
00:25:02.330   19:09:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:25:02.330   19:09:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:25:02.330   19:09:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:25:02.330   19:09:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:02.330   19:09:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:25:02.330  [2024-12-13 19:09:34.094360] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:25:02.330   19:09:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:02.330   19:09:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:25:02.330   19:09:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:02.330   19:09:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:25:02.330  Malloc0
00:25:02.330   19:09:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:02.330   19:09:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:25:02.330   19:09:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:02.330   19:09:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:25:02.600   19:09:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:02.600   19:09:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:25:02.600   19:09:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:02.600   19:09:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:25:02.600   19:09:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:02.600   19:09:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:25:02.600   19:09:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:02.600   19:09:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:25:02.600  [2024-12-13 19:09:34.168794] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:25:02.600   19:09:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:02.600   19:09:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
00:25:02.600   19:09:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:02.600   19:09:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:25:02.600  [2024-12-13 19:09:34.180694] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 ***
00:25:02.600   19:09:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:02.600   19:09:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:25:02.600   19:09:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:02.600   19:09:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:25:02.600  Malloc1
00:25:02.600   19:09:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:02.600   19:09:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
00:25:02.600   19:09:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:02.600   19:09:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:25:02.600   19:09:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:02.600   19:09:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1
00:25:02.600   19:09:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:02.600   19:09:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:25:02.600   19:09:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:02.600   19:09:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420
00:25:02.600   19:09:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:02.600   19:09:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:25:02.600   19:09:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:02.600   19:09:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4421
00:25:02.600   19:09:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:02.600   19:09:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:25:02.600   19:09:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:02.600   19:09:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=107434
00:25:02.600   19:09:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f
00:25:02.600   19:09:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:25:02.600   19:09:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 107434 /var/tmp/bdevperf.sock
00:25:02.600   19:09:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 107434 ']'
00:25:02.600   19:09:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:25:02.600   19:09:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100
00:25:02.601   19:09:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:25:02.601  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:25:02.601   19:09:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable
00:25:02.601   19:09:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:25:03.614   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:25:03.614   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0
00:25:03.614   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1
00:25:03.614   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:03.614   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:25:03.614  NVMe0n1
00:25:03.614   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:03.614   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:25:03.614   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:03.614   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:25:03.614   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe
00:25:03.614   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:03.614  1
00:25:03.614   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001
00:25:03.614   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0
00:25:03.614   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001
00:25:03.614   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:25:03.615   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:25:03.615    19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:25:03.615   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:25:03.615   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001
00:25:03.615   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:03.615   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:25:03.873  2024/12/13 19:09:35 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 hostnqn:nqn.2021-09-7.io.spdk:00001 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path
00:25:03.873  request:
00:25:03.873  {
00:25:03.873  "method": "bdev_nvme_attach_controller",
00:25:03.873  "params": {
00:25:03.873  "name": "NVMe0",
00:25:03.873  "trtype": "tcp",
00:25:03.873  "traddr": "10.0.0.3",
00:25:03.873  "adrfam": "ipv4",
00:25:03.873  "trsvcid": "4420",
00:25:03.873  "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:25:03.873  "hostnqn": "nqn.2021-09-7.io.spdk:00001",
00:25:03.873  "hostaddr": "10.0.0.1",
00:25:03.873  "prchk_reftag": false,
00:25:03.873  "prchk_guard": false,
00:25:03.873  "hdgst": false,
00:25:03.873  "ddgst": false,
00:25:03.873  "allow_unrecognized_csi": false
00:25:03.873  }
00:25:03.873  }
00:25:03.873  Got JSON-RPC error response
00:25:03.873  GoRPCClient: error on JSON-RPC call
00:25:03.873   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:25:03.873   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1
00:25:03.873   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:25:03.873   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:25:03.873   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:25:03.873   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1
00:25:03.874   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0
00:25:03.874   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1
00:25:03.874   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:25:03.874   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:25:03.874    19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:25:03.874   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:25:03.874   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1
00:25:03.874   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:03.874   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:25:03.874  2024/12/13 19:09:35 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path
00:25:03.874  request:
00:25:03.874  {
00:25:03.874  "method": "bdev_nvme_attach_controller",
00:25:03.874  "params": {
00:25:03.874  "name": "NVMe0",
00:25:03.874  "trtype": "tcp",
00:25:03.874  "traddr": "10.0.0.3",
00:25:03.874  "adrfam": "ipv4",
00:25:03.874  "trsvcid": "4420",
00:25:03.874  "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:25:03.874  "hostaddr": "10.0.0.1",
00:25:03.874  "prchk_reftag": false,
00:25:03.874  "prchk_guard": false,
00:25:03.874  "hdgst": false,
00:25:03.874  "ddgst": false,
00:25:03.874  "allow_unrecognized_csi": false
00:25:03.874  }
00:25:03.874  }
00:25:03.874  Got JSON-RPC error response
00:25:03.874  GoRPCClient: error on JSON-RPC call
00:25:03.874   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:25:03.874   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1
00:25:03.874   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:25:03.874   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:25:03.874   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:25:03.874   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable
00:25:03.874   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0
00:25:03.874   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable
00:25:03.874   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:25:03.874   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:25:03.874    19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:25:03.874   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:25:03.874   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable
00:25:03.874   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:03.874   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:25:03.874  2024/12/13 19:09:35 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 multipath:disable name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled
00:25:03.874  request:
00:25:03.874  {
00:25:03.874  "method": "bdev_nvme_attach_controller",
00:25:03.874  "params": {
00:25:03.874  "name": "NVMe0",
00:25:03.874  "trtype": "tcp",
00:25:03.874  "traddr": "10.0.0.3",
00:25:03.874  "adrfam": "ipv4",
00:25:03.874  "trsvcid": "4420",
00:25:03.874  "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:25:03.874  "hostaddr": "10.0.0.1",
00:25:03.874  "prchk_reftag": false,
00:25:03.874  "prchk_guard": false,
00:25:03.874  "hdgst": false,
00:25:03.874  "ddgst": false,
00:25:03.874  "multipath": "disable",
00:25:03.874  "allow_unrecognized_csi": false
00:25:03.874  }
00:25:03.874  }
00:25:03.874  Got JSON-RPC error response
00:25:03.874  GoRPCClient: error on JSON-RPC call
00:25:03.874   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:25:03.874   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1
00:25:03.874   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:25:03.874   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:25:03.874   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:25:03.874   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover
00:25:03.874   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0
00:25:03.874   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover
00:25:03.874   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:25:03.874   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:25:03.874    19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:25:03.874   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:25:03.874   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover
00:25:03.874   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:03.874   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:25:03.874  2024/12/13 19:09:35 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 multipath:failover name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path
00:25:03.874  request:
00:25:03.874  {
00:25:03.874  "method": "bdev_nvme_attach_controller",
00:25:03.874  "params": {
00:25:03.874  "name": "NVMe0",
00:25:03.874  "trtype": "tcp",
00:25:03.874  "traddr": "10.0.0.3",
00:25:03.874  "adrfam": "ipv4",
00:25:03.874  "trsvcid": "4420",
00:25:03.874  "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:25:03.874  "hostaddr": "10.0.0.1",
00:25:03.874  "prchk_reftag": false,
00:25:03.874  "prchk_guard": false,
00:25:03.874  "hdgst": false,
00:25:03.874  "ddgst": false,
00:25:03.874  "multipath": "failover",
00:25:03.874  "allow_unrecognized_csi": false
00:25:03.874  }
00:25:03.874  }
00:25:03.874  Got JSON-RPC error response
00:25:03.874  GoRPCClient: error on JSON-RPC call
00:25:03.874   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:25:03.874   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1
00:25:03.874   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:25:03.874   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:25:03.874   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 ))
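Each of the four rejected calls above hits the same duplicate-name guard in bdev_nvme_attach_controller: reusing the controller name NVMe0 against the path that is already attached (10.0.0.3:4420, whether with a different hostnqn or with -x failover), against a different subsystem (cnode2), or with multipath disabled (-x disable) returns JSON-RPC error Code=-114, while the very next step shows the same name being accepted on a second listener port (4421). A minimal sketch of that pattern with the stock rpc.py client, flags and NQNs copied from the calls above (rpc.py path assumed from the repo layout):

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
  # Same name, same already-attached path -> rejected with Code=-114, as logged above.
  $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 || echo "rejected as expected"
  # Same name, same subsystem, new listener port -> accepted (creates NVMe0n1 below).
  $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1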
00:25:03.874   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:03.874   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:03.874   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:25:03.874  NVMe0n1
00:25:03.874   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:03.874   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:03.874   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:03.874   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:25:03.874   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:03.874   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1
00:25:03.874   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:03.874   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:25:03.874  
00:25:03.874   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:03.874    19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:25:03.874    19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:03.874    19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:25:03.874    19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe
00:25:03.874    19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:03.874   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']'
00:25:03.874   19:09:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:25:05.302  {
00:25:05.302    "results": [
00:25:05.302      {
00:25:05.302        "job": "NVMe0n1",
00:25:05.302        "core_mask": "0x1",
00:25:05.302        "workload": "write",
00:25:05.302        "status": "finished",
00:25:05.302        "queue_depth": 128,
00:25:05.302        "io_size": 4096,
00:25:05.302        "runtime": 1.009722,
00:25:05.302        "iops": 19073.56678372859,
00:25:05.302        "mibps": 74.50612024893981,
00:25:05.302        "io_failed": 0,
00:25:05.302        "io_timeout": 0,
00:25:05.302        "avg_latency_us": 6700.453072141006,
00:25:05.302        "min_latency_us": 2740.5963636363635,
00:25:05.302        "max_latency_us": 20971.52
00:25:05.302      }
00:25:05.302    ],
00:25:05.302    "core_count": 1
00:25:05.302  }
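The perform_tests summary above is internally consistent: the reported MiB/s is just IOPS times the 4096-byte I/O size. A quick re-derivation, assuming the JSON block was saved to results.json (hypothetical filename) and that jq and awk are available:

  # 19073.57 IOPS * 4096 B / 1048576 ≈ 74.51 MiB/s, matching the "mibps" field above.
  jq -r '.results[0] | "\(.iops) \(.io_size)"' results.json \
    | awk '{ printf "%.2f MiB/s\n", $1 * $2 / 1048576 }'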
00:25:05.302   19:09:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1
00:25:05.302   19:09:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:05.302   19:09:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:25:05.302   19:09:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:05.302   19:09:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n 10.0.0.2 ]]
00:25:05.302   19:09:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme1 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1
00:25:05.302   19:09:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:05.302   19:09:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:25:05.302  nvme1n1
00:25:05.302   19:09:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:05.302    19:09:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@106 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode2
00:25:05.302    19:09:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@106 -- # jq -r '.[].peer_address.traddr'
00:25:05.302    19:09:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:05.302    19:09:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:25:05.302    19:09:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:05.302   19:09:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@106 -- # [[ 10.0.0.1 == \1\0\.\0\.\0\.\1 ]]
00:25:05.302   19:09:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller nvme1
00:25:05.302   19:09:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:05.302   19:09:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:25:05.302   19:09:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:05.302   19:09:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@109 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme1 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2
00:25:05.302   19:09:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:05.302   19:09:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:25:05.302  nvme1n1
00:25:05.302   19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:05.302    19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@113 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode2
00:25:05.302    19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:05.302    19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@113 -- # jq -r '.[].peer_address.traddr'
00:25:05.302    19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:25:05.302    19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:05.302   19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@113 -- # [[ 10.0.0.2 == \1\0\.\0\.\0\.\2 ]]
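The two qpair checks above confirm which initiator address each attach actually used: 10.0.0.1 for the first nvme1 attach (-i 10.0.0.1), then 10.0.0.2 after the detach and re-attach with -i 10.0.0.2. The same query, condensed into one line against the target's default RPC socket (command and jq filter copied verbatim from the log):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode2 \
    | jq -r '.[].peer_address.traddr'    # expected to print 10.0.0.2 at this point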
00:25:05.302   19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 107434
00:25:05.302   19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 107434 ']'
00:25:05.302   19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 107434
00:25:05.302    19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname
00:25:05.561   19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:05.561    19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 107434
00:25:05.561  killing process with pid 107434
00:25:05.561   19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:25:05.561   19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:25:05.561   19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 107434'
00:25:05.561   19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 107434
00:25:05.561   19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 107434
00:25:05.561   19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:25:05.561   19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:05.561   19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:25:05.561   19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:05.561   19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:25:05.561   19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:05.561   19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:25:05.562   19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:05.562   19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT
00:25:05.562   19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:25:05.562   19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file
00:25:05.562    19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f
00:25:05.562    19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u
00:25:05.562   19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat
00:25:05.562  --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt ---
00:25:05.562  [2024-12-13 19:09:34.329138] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:25:05.562  [2024-12-13 19:09:34.329265] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107434 ]
00:25:05.562  [2024-12-13 19:09:34.485654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:05.562  [2024-12-13 19:09:34.539173] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:25:05.562  [2024-12-13 19:09:35.649739] bdev.c:4957:bdev_name_add: *ERROR*: Bdev name fd66a917-1a19-4e4c-a528-a25a80be3eea already exists
00:25:05.562  [2024-12-13 19:09:35.650167] bdev.c:8177:bdev_register: *ERROR*: Unable to add uuid:fd66a917-1a19-4e4c-a528-a25a80be3eea alias for bdev NVMe1n1
00:25:05.562  [2024-12-13 19:09:35.650199] bdev_nvme.c:4666:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed
00:25:05.562  Running I/O for 1 seconds...
00:25:05.562      19004.00 IOPS,    74.23 MiB/s
00:25:05.562                                                                                                  Latency(us)
00:25:05.562  
[2024-12-13T19:09:37.386Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:25:05.562  Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:25:05.562  	 NVMe0n1             :       1.01   19073.57      74.51       0.00     0.00    6700.45    2740.60   20971.52
00:25:05.562  
[2024-12-13T19:09:37.386Z]  ===================================================================================================================
00:25:05.562  
[2024-12-13T19:09:37.386Z]  Total                       :              19073.57      74.51       0.00     0.00    6700.45    2740.60   20971.52
00:25:05.562  Received shutdown signal, test time was about 1.000000 seconds
00:25:05.562  
00:25:05.562                                                                                                  Latency(us)
00:25:05.562  
[2024-12-13T19:09:37.386Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:25:05.562  
[2024-12-13T19:09:37.386Z]  ===================================================================================================================
00:25:05.562  
[2024-12-13T19:09:37.386Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:25:05.562  --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt ---
00:25:05.562   19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:25:05.562   19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file
00:25:05.562   19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini
00:25:05.562   19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup
00:25:05.562   19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync
00:25:05.821   19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:25:05.821   19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e
00:25:05.821   19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20}
00:25:05.821   19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:25:05.821  rmmod nvme_tcp
00:25:05.821  rmmod nvme_fabrics
00:25:05.821  rmmod nvme_keyring
00:25:05.821   19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:25:05.821   19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e
00:25:05.821   19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0
00:25:05.821   19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 107382 ']'
00:25:05.821   19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 107382
00:25:05.821   19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 107382 ']'
00:25:05.821   19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 107382
00:25:05.821    19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname
00:25:05.821   19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:05.821    19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 107382
00:25:05.821  killing process with pid 107382
00:25:05.821   19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:25:05.821   19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:25:05.821   19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 107382'
00:25:05.821   19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 107382
00:25:05.821   19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 107382
00:25:06.080   19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:25:06.080   19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:25:06.080   19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:25:06.080   19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr
00:25:06.080   19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:25:06.080   19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save
00:25:06.080   19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore
00:25:06.080   19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:25:06.080   19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:25:06.080   19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:25:06.080   19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:25:06.080   19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:25:06.080   19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:25:06.080   19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:25:06.080   19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:25:06.080   19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:25:06.080   19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:25:06.080   19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:25:06.080   19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:25:06.339   19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:25:06.339   19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:25:06.339   19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:25:06.339   19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@246 -- # remove_spdk_ns
00:25:06.339   19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:06.339   19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:25:06.339    19:09:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:06.339   19:09:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@300 -- # return 0
00:25:06.339  
00:25:06.339  real	0m5.734s
00:25:06.339  user	0m17.507s
00:25:06.339  sys	0m1.342s
00:25:06.339   19:09:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable
00:25:06.339   19:09:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:25:06.339  ************************************
00:25:06.339  END TEST nvmf_multicontroller
00:25:06.339  ************************************
00:25:06.339   19:09:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp
00:25:06.339   19:09:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:25:06.339   19:09:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:25:06.339   19:09:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:25:06.339  ************************************
00:25:06.339  START TEST nvmf_aer
00:25:06.339  ************************************
00:25:06.339   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp
00:25:06.339  * Looking for test storage...
00:25:06.339  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
00:25:06.339    19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:25:06.598     19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version
00:25:06.598     19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:25:06.598    19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:25:06.598    19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:25:06.598    19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l
00:25:06.598    19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l
00:25:06.598    19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-:
00:25:06.598    19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1
00:25:06.598    19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-:
00:25:06.598    19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2
00:25:06.598    19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<'
00:25:06.598    19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2
00:25:06.598    19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1
00:25:06.598    19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:25:06.598    19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in
00:25:06.598    19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1
00:25:06.598    19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 ))
00:25:06.598    19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:25:06.598     19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1
00:25:06.598     19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1
00:25:06.598     19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:25:06.598     19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1
00:25:06.598    19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1
00:25:06.598     19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2
00:25:06.598     19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2
00:25:06.598     19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:25:06.598     19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2
00:25:06.598    19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2
00:25:06.598    19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:25:06.598    19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:25:06.598    19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0
00:25:06.598    19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:25:06.598    19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:25:06.598  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:06.598  		--rc genhtml_branch_coverage=1
00:25:06.598  		--rc genhtml_function_coverage=1
00:25:06.598  		--rc genhtml_legend=1
00:25:06.598  		--rc geninfo_all_blocks=1
00:25:06.598  		--rc geninfo_unexecuted_blocks=1
00:25:06.598  		
00:25:06.598  		'
00:25:06.598    19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:25:06.598  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:06.598  		--rc genhtml_branch_coverage=1
00:25:06.598  		--rc genhtml_function_coverage=1
00:25:06.598  		--rc genhtml_legend=1
00:25:06.598  		--rc geninfo_all_blocks=1
00:25:06.598  		--rc geninfo_unexecuted_blocks=1
00:25:06.598  		
00:25:06.598  		'
00:25:06.598    19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:25:06.599  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:06.599  		--rc genhtml_branch_coverage=1
00:25:06.599  		--rc genhtml_function_coverage=1
00:25:06.599  		--rc genhtml_legend=1
00:25:06.599  		--rc geninfo_all_blocks=1
00:25:06.599  		--rc geninfo_unexecuted_blocks=1
00:25:06.599  		
00:25:06.599  		'
00:25:06.599    19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:25:06.599  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:06.599  		--rc genhtml_branch_coverage=1
00:25:06.599  		--rc genhtml_function_coverage=1
00:25:06.599  		--rc genhtml_legend=1
00:25:06.599  		--rc geninfo_all_blocks=1
00:25:06.599  		--rc geninfo_unexecuted_blocks=1
00:25:06.599  		
00:25:06.599  		'
00:25:06.599   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:25:06.599     19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s
00:25:06.599    19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:25:06.599    19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:25:06.599    19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:25:06.599    19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:25:06.599    19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:25:06.599    19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:25:06.599    19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:25:06.599    19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:25:06.599    19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:25:06.599     19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:25:06.599    19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:25:06.599    19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:25:06.599    19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:25:06.599    19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:25:06.599    19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:25:06.599    19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:25:06.599    19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:25:06.599     19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob
00:25:06.599     19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:25:06.599     19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:25:06.599     19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:25:06.599      19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:06.599      19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:06.599      19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:06.599      19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH
00:25:06.599      19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:06.599    19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0
00:25:06.599    19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:25:06.599    19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:25:06.599    19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:25:06.599    19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:25:06.599    19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:25:06.599    19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:25:06.599  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:25:06.599    19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:25:06.599    19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:25:06.599    19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0
00:25:06.599   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit
00:25:06.599   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:25:06.599   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:25:06.599   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs
00:25:06.599   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no
00:25:06.599   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns
00:25:06.599   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:06.599   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:25:06.599    19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:06.599   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:25:06.599   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:25:06.599   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:25:06.599   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:25:06.599   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:25:06.599   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@460 -- # nvmf_veth_init
00:25:06.599   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:25:06.599   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:25:06.599   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:25:06.599   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:25:06.599   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:25:06.599   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:25:06.599   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:25:06.599   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:25:06.599   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:25:06.599   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:25:06.599   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:25:06.599   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:25:06.599   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:25:06.599   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:25:06.599   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:25:06.599   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:25:06.599   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:25:06.599  Cannot find device "nvmf_init_br"
00:25:06.599   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@162 -- # true
00:25:06.599   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:25:06.599  Cannot find device "nvmf_init_br2"
00:25:06.599   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@163 -- # true
00:25:06.599   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:25:06.599  Cannot find device "nvmf_tgt_br"
00:25:06.599   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@164 -- # true
00:25:06.599   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:25:06.599  Cannot find device "nvmf_tgt_br2"
00:25:06.599   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@165 -- # true
00:25:06.599   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:25:06.599  Cannot find device "nvmf_init_br"
00:25:06.599   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@166 -- # true
00:25:06.599   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:25:06.599  Cannot find device "nvmf_init_br2"
00:25:06.599   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@167 -- # true
00:25:06.599   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:25:06.599  Cannot find device "nvmf_tgt_br"
00:25:06.599   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@168 -- # true
00:25:06.599   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:25:06.599  Cannot find device "nvmf_tgt_br2"
00:25:06.599   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@169 -- # true
00:25:06.599   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:25:06.859  Cannot find device "nvmf_br"
00:25:06.859   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@170 -- # true
00:25:06.859   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:25:06.859  Cannot find device "nvmf_init_if"
00:25:06.859   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@171 -- # true
00:25:06.859   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:25:06.859  Cannot find device "nvmf_init_if2"
00:25:06.859   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@172 -- # true
00:25:06.859   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:25:06.859  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:25:06.859   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@173 -- # true
00:25:06.859   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:25:06.859  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:25:06.859   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@174 -- # true
00:25:06.859   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:25:06.859   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:25:06.859   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:25:06.859   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:25:06.859   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:25:06.859   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:25:06.859   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:25:06.859   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:25:06.859   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:25:06.859   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:25:06.859   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:25:06.859   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:25:06.859   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:25:06.859   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:25:06.859   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:25:06.859   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:25:06.859   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:25:06.859   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:25:06.859   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:25:06.859   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:25:06.859   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:25:06.859   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:25:06.859   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:25:06.859   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:25:06.859   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:25:06.859   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:25:06.859   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:25:06.859   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:25:06.859   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:25:06.859   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:25:06.859   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:25:06.859   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:25:06.859   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:25:06.859  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:25:06.859  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms
00:25:06.859  
00:25:06.859  --- 10.0.0.3 ping statistics ---
00:25:06.859  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:06.859  rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms
00:25:06.859   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:25:06.859  PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:25:06.859  64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.092 ms
00:25:06.859  
00:25:06.859  --- 10.0.0.4 ping statistics ---
00:25:06.859  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:06.859  rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms
00:25:06.859   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:25:06.859  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:25:06.859  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms
00:25:06.859  
00:25:06.859  --- 10.0.0.1 ping statistics ---
00:25:06.859  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:06.859  rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms
00:25:06.859   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:25:06.859  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:25:06.859  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms
00:25:06.859  
00:25:06.859  --- 10.0.0.2 ping statistics ---
00:25:06.859  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:06.859  rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms
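The nvmf_veth_init block above builds the test fabric: two initiator veth legs (10.0.0.1 and 10.0.0.2) stay in the default namespace, two target legs (10.0.0.3 and 10.0.0.4) move into nvmf_tgt_ns_spdk, everything is joined through the nvmf_br bridge, iptables ACCEPT rules open port 4420, and the four pings verify both directions. Condensed to the first initiator/target pair as a standalone sketch (interface and namespace names as in the log; needs root; the second pair and the iptables rules follow the same pattern):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ping -c 1 10.0.0.3                                  # initiator -> target
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target -> initiator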
00:25:06.859   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:25:06.859   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@461 -- # return 0
00:25:06.859   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:25:06.859   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:25:06.859   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:25:06.859   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:25:06.859   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:25:06.859   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:25:06.859   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:25:07.118   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF
00:25:07.118   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:25:07.118   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable
00:25:07.118   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:25:07.118   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=107758
00:25:07.118   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:25:07.118   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 107758
00:25:07.118   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 107758 ']'
00:25:07.118   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:07.118   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100
00:25:07.118  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:07.118   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:07.118   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable
00:25:07.118   19:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:25:07.118  [2024-12-13 19:09:38.765850] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:25:07.118  [2024-12-13 19:09:38.765946] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:25:07.118  [2024-12-13 19:09:38.924088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:25:07.377  [2024-12-13 19:09:38.964384] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:25:07.377  [2024-12-13 19:09:38.964450] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:25:07.377  [2024-12-13 19:09:38.964464] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:25:07.377  [2024-12-13 19:09:38.964474] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:25:07.377  [2024-12-13 19:09:38.964484] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:25:07.377  [2024-12-13 19:09:38.965668] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:25:07.377  [2024-12-13 19:09:38.966554] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:25:07.377  [2024-12-13 19:09:38.966737] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:25:07.377  [2024-12-13 19:09:38.966746] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:25:07.377   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:25:07.377   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0
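nvmfappstart launches the target application inside the namespace and then blocks until its RPC socket is usable. A rough equivalent of the two traced steps; the polling loop is only a simplified stand-in for waitforlisten, which is assumed here to do no more than wait for the UNIX socket to appear:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # simplified stand-in for waitforlisten: poll for /var/tmp/spdk.sock
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done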
00:25:07.377   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:25:07.377   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable
00:25:07.377   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:25:07.377   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:25:07.377   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:25:07.378   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:07.378   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:25:07.378  [2024-12-13 19:09:39.149288] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:25:07.378   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:07.378   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0
00:25:07.378   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:07.378   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:25:07.378  Malloc0
00:25:07.378   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:07.378   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
00:25:07.378   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:07.378   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:25:07.637   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:07.637   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:25:07.637   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:07.637   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:25:07.637   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:07.637   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:25:07.637   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:07.637   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:25:07.637  [2024-12-13 19:09:39.213808] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:25:07.637   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
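Everything the AER test needs on the target side is configured with five RPCs, all visible in the trace above (rpc_cmd is the harness wrapper around SPDK's RPC client): a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, a subsystem capped at two namespaces, the namespace itself, and a TCP listener on 10.0.0.3:4420:

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd bdev_malloc_create 64 512 --name Malloc0
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420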
00:25:07.637   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems
00:25:07.637   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:07.637   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:25:07.637  [
00:25:07.637  {
00:25:07.637  "allow_any_host": true,
00:25:07.637  "hosts": [],
00:25:07.637  "listen_addresses": [],
00:25:07.637  "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:25:07.637  "subtype": "Discovery"
00:25:07.637  },
00:25:07.637  {
00:25:07.637  "allow_any_host": true,
00:25:07.637  "hosts": [],
00:25:07.637  "listen_addresses": [
00:25:07.637  {
00:25:07.637  "adrfam": "IPv4",
00:25:07.637  "traddr": "10.0.0.3",
00:25:07.637  "trsvcid": "4420",
00:25:07.637  "trtype": "TCP"
00:25:07.637  }
00:25:07.637  ],
00:25:07.637  "max_cntlid": 65519,
00:25:07.637  "max_namespaces": 2,
00:25:07.637  "min_cntlid": 1,
00:25:07.637  "model_number": "SPDK bdev Controller",
00:25:07.637  "namespaces": [
00:25:07.637  {
00:25:07.637  "bdev_name": "Malloc0",
00:25:07.637  "name": "Malloc0",
00:25:07.637  "nguid": "46094BC3C0824FB9808FD5146B501146",
00:25:07.637  "nsid": 1,
00:25:07.637  "uuid": "46094bc3-c082-4fb9-808f-d5146b501146"
00:25:07.637  }
00:25:07.637  ],
00:25:07.637  "nqn": "nqn.2016-06.io.spdk:cnode1",
00:25:07.637  "serial_number": "SPDK00000000000001",
00:25:07.637  "subtype": "NVMe"
00:25:07.637  }
00:25:07.637  ]
00:25:07.637   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:07.637   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file
00:25:07.637   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file
00:25:07.637   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=107793
00:25:07.637   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file
00:25:07.637   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0
00:25:07.637   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r '        trtype:tcp         adrfam:IPv4         traddr:10.0.0.3         trsvcid:4420         subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file
00:25:07.637   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:25:07.637   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']'
00:25:07.637   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1
00:25:07.637   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1
00:25:07.637   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:25:07.637   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']'
00:25:07.637   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2
00:25:07.637   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1
00:25:07.637   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:25:07.637   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:25:07.637   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0
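waitforfile is the synchronization point between the shell and the aer tool: the tool touches /tmp/aer_touch_file once its event callbacks are registered, and the script polls for the file (here it took two 0.1 s iterations). Reconstructed from the trace, the helper behaves roughly like this sketch:

    waitforfile() {
        local i=0
        # poll every 0.1 s, giving up after 200 tries (~20 s)
        while [ ! -e "$1" ] && [ "$i" -lt 200 ]; do
            i=$((i + 1))
            sleep 0.1
        done
        [ -e "$1" ]   # succeed only if the file actually appeared
    }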
00:25:07.637   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1
00:25:07.637   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:07.637   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:25:07.896  Malloc1
00:25:07.896   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:07.896   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
00:25:07.896   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:07.896   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:25:07.896   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:07.896   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems
00:25:07.896   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:07.896   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:25:07.896  [
00:25:07.896  {
00:25:07.896  "allow_any_host": true,
00:25:07.896  "hosts": [],
00:25:07.896  "listen_addresses": [],
00:25:07.896  "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:25:07.896  "subtype": "Discovery"
00:25:07.896  },
00:25:07.896  {
00:25:07.896  "allow_any_host": true,
00:25:07.896  "hosts": [],
00:25:07.896  "listen_addresses": [
00:25:07.896  {
00:25:07.896  "adrfam": "IPv4",
00:25:07.896  "traddr": "10.0.0.3",
00:25:07.896  "trsvcid": "4420",
00:25:07.896  "trtype": "TCP"
00:25:07.896  }
00:25:07.896  ],
00:25:07.896  "max_cntlid": 65519,
00:25:07.896  "max_namespaces": 2,
00:25:07.896  "min_cntlid": 1,
00:25:07.896  "model_number": "SPDK bdev Controller",
00:25:07.896  "namespaces": [
00:25:07.896  {
00:25:07.896  "bdev_name": "Malloc0",
00:25:07.896  "name": "Malloc0",
00:25:07.896  "nguid": "46094BC3C0824FB9808FD5146B501146",
00:25:07.896  "nsid": 1,
00:25:07.896  "uuid": "46094bc3-c082-4fb9-808f-d5146b501146"
00:25:07.896  },
00:25:07.896  {
00:25:07.896  "bdev_name": "Malloc1",
00:25:07.896  "name": "Malloc1",
00:25:07.896  "nguid": "1A489EFEBDA34385A94D2DBAFBEC74C0",
00:25:07.896  "nsid": 2,
00:25:07.896  "uuid": "1a489efe-bda3-4385-a94d-2dbafbec74c0"
00:25:07.896  }
00:25:07.896  ],
00:25:07.896  "nqn": "nqn.2016-06.io.spdk:cnode1",
00:25:07.896  "serial_number": "SPDK00000000000001",
00:25:07.896  "subtype": "NVMe"
00:25:07.896  }
00:25:07.896  ]
00:25:07.896  Asynchronous Event Request test
00:25:07.896  Attaching to 10.0.0.3
00:25:07.896  Attached to 10.0.0.3
00:25:07.896  Registering asynchronous event callbacks...
00:25:07.896  Starting namespace attribute notice tests for all controllers...
00:25:07.896  10.0.0.3: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00
00:25:07.896  aer_cb - Changed Namespace
00:25:07.896  Cleaning up...
00:25:07.896   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:07.896   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 107793
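With the aer tool armed, adding a second namespace is what actually fires the event: Malloc1 goes in as nsid 2 (the subsystem was created with -m 2), the target raises a namespace-changed notice (log page 4, as the aer_cb output above shows), and the tool exits once its callback has run, so the script simply reaps it. The trigger-and-wait portion, as traced:

    rpc_cmd bdev_malloc_create 64 4096 --name Malloc1
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
    wait "$aerpid"   # aer exits after handling the namespace-changed AEN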
00:25:07.896   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0
00:25:07.896   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:07.896   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:25:07.896   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:07.896   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1
00:25:07.896   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:07.896   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:25:07.896   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:07.896   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:25:07.896   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:07.896   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:25:07.896   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:07.896   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT
00:25:07.896   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini
00:25:07.896   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup
00:25:07.896   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync
00:25:07.896   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:25:07.896   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e
00:25:07.896   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20}
00:25:07.896   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:25:07.896  rmmod nvme_tcp
00:25:07.896  rmmod nvme_fabrics
00:25:07.896  rmmod nvme_keyring
00:25:07.896   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:25:07.896   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e
00:25:07.896   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0
00:25:07.896   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 107758 ']'
00:25:07.896   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 107758
00:25:07.896   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 107758 ']'
00:25:07.896   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 107758
00:25:08.155    19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname
00:25:08.155   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:08.155    19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 107758
00:25:08.155  killing process with pid 107758
00:25:08.155   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:25:08.155   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:25:08.155   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 107758'
00:25:08.155   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 107758
00:25:08.155   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 107758
00:25:08.155   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:25:08.155   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:25:08.155   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:25:08.155   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr
00:25:08.155   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save
00:25:08.155   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:25:08.155   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore
00:25:08.155   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:25:08.155   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:25:08.155   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:25:08.414   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:25:08.414   19:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:25:08.414   19:09:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:25:08.414   19:09:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:25:08.414   19:09:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:25:08.414   19:09:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:25:08.414   19:09:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:25:08.414   19:09:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:25:08.414   19:09:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:25:08.414   19:09:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:25:08.414   19:09:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:25:08.414   19:09:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:25:08.414   19:09:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@246 -- # remove_spdk_ns
00:25:08.414   19:09:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:08.414   19:09:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:25:08.414    19:09:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:08.414   19:09:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@300 -- # return 0
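nvmftestfini walks the setup back in reverse: unload the host NVMe/TCP modules, kill the nvmf_tgt process, strip the SPDK_NVMF iptables rules, and tear down the veth/bridge topology plus the namespace. Condensed from the trace; the final namespace deletion is an assumed equivalent of remove_spdk_ns, whose body is not echoed above:

    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"
    # drop only the rules tagged with the SPDK_NVMF comment
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk   # assumed equivalent of remove_spdk_ns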
00:25:08.414  
00:25:08.414  real	0m2.128s
00:25:08.414  user	0m4.197s
00:25:08.414  sys	0m0.713s
00:25:08.414  ************************************
00:25:08.414  END TEST nvmf_aer
00:25:08.414  ************************************
00:25:08.414   19:09:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable
00:25:08.414   19:09:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:25:08.673   19:09:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp
00:25:08.673   19:09:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:25:08.673   19:09:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:25:08.673   19:09:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:25:08.673  ************************************
00:25:08.673  START TEST nvmf_async_init
00:25:08.673  ************************************
00:25:08.673   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp
00:25:08.673  * Looking for test storage...
00:25:08.673  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
00:25:08.674    19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:25:08.674     19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:25:08.674     19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version
00:25:08.674    19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:25:08.674    19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:25:08.674    19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l
00:25:08.674    19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l
00:25:08.674    19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-:
00:25:08.674    19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1
00:25:08.674    19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-:
00:25:08.674    19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2
00:25:08.674    19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<'
00:25:08.674    19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2
00:25:08.674    19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1
00:25:08.674    19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:25:08.674    19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in
00:25:08.674    19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1
00:25:08.674    19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 ))
00:25:08.674    19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:25:08.674     19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1
00:25:08.674     19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1
00:25:08.674     19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:25:08.674     19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1
00:25:08.674    19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1
00:25:08.674     19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2
00:25:08.674     19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2
00:25:08.674     19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:25:08.674     19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2
00:25:08.674    19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2
00:25:08.674    19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:25:08.674    19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:25:08.674    19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0
00:25:08.674    19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:25:08.674    19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:25:08.674  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:08.674  		--rc genhtml_branch_coverage=1
00:25:08.674  		--rc genhtml_function_coverage=1
00:25:08.674  		--rc genhtml_legend=1
00:25:08.674  		--rc geninfo_all_blocks=1
00:25:08.674  		--rc geninfo_unexecuted_blocks=1
00:25:08.674  		
00:25:08.674  		'
00:25:08.674    19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:25:08.674  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:08.674  		--rc genhtml_branch_coverage=1
00:25:08.674  		--rc genhtml_function_coverage=1
00:25:08.674  		--rc genhtml_legend=1
00:25:08.674  		--rc geninfo_all_blocks=1
00:25:08.674  		--rc geninfo_unexecuted_blocks=1
00:25:08.674  		
00:25:08.674  		'
00:25:08.674    19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:25:08.674  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:08.674  		--rc genhtml_branch_coverage=1
00:25:08.674  		--rc genhtml_function_coverage=1
00:25:08.674  		--rc genhtml_legend=1
00:25:08.674  		--rc geninfo_all_blocks=1
00:25:08.674  		--rc geninfo_unexecuted_blocks=1
00:25:08.674  		
00:25:08.674  		'
00:25:08.674    19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:25:08.674  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:08.674  		--rc genhtml_branch_coverage=1
00:25:08.674  		--rc genhtml_function_coverage=1
00:25:08.674  		--rc genhtml_legend=1
00:25:08.674  		--rc geninfo_all_blocks=1
00:25:08.674  		--rc geninfo_unexecuted_blocks=1
00:25:08.674  		
00:25:08.674  		'
00:25:08.674   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:25:08.674     19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s
00:25:08.674    19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:25:08.674    19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:25:08.674    19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:25:08.674    19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:25:08.674    19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:25:08.674    19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:25:08.674    19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:25:08.674    19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:25:08.674    19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:25:08.674     19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:25:08.674    19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:25:08.674    19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:25:08.674    19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:25:08.674    19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:25:08.674    19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:25:08.674    19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:25:08.674    19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:25:08.674     19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob
00:25:08.674     19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:25:08.674     19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:25:08.674     19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:25:08.674      19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:08.674      19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:08.674      19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:08.674      19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH
00:25:08.674      19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:08.674    19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0
00:25:08.674    19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:25:08.674    19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:25:08.674    19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:25:08.674    19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:25:08.674    19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:25:08.674    19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:25:08.674  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:25:08.674    19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:25:08.674    19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:25:08.674    19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0
00:25:08.674   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024
00:25:08.674   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512
00:25:08.674   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0
00:25:08.674   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0
00:25:08.674    19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen
00:25:08.674    19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d -
00:25:08.674   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=5a8acf4d7ad24f8a8193c32db2a22870
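The namespace in this test gets an explicit NGUID rather than an auto-generated one: a fresh UUID with the hyphens stripped yields the 32-hex-character value used below. The two-command pipeline from the trace:

    nguid=$(uuidgen | tr -d -)   # e.g. 5a8acf4d7ad24f8a8193c32db2a22870
    echo "${#nguid}"             # 32 hex digits, i.e. the 16-byte NVMe NGUID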
00:25:08.674   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit
00:25:08.674   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:25:08.675   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:25:08.675   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs
00:25:08.675   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no
00:25:08.675   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns
00:25:08.675   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:08.675   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:25:08.675    19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:08.675   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:25:08.675   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:25:08.675   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:25:08.675   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:25:08.675   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:25:08.675   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@460 -- # nvmf_veth_init
00:25:08.675   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:25:08.675   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:25:08.675   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:25:08.675   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:25:08.675   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:25:08.675   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:25:08.675   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:25:08.675   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:25:08.675   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:25:08.675   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:25:08.675   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:25:08.675   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:25:08.675   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:25:08.675   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:25:08.675   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:25:08.675   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:25:08.675   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:25:08.934  Cannot find device "nvmf_init_br"
00:25:08.934   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@162 -- # true
00:25:08.934   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:25:08.934  Cannot find device "nvmf_init_br2"
00:25:08.934   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@163 -- # true
00:25:08.934   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:25:08.934  Cannot find device "nvmf_tgt_br"
00:25:08.934   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@164 -- # true
00:25:08.934   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:25:08.934  Cannot find device "nvmf_tgt_br2"
00:25:08.934   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@165 -- # true
00:25:08.934   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:25:08.934  Cannot find device "nvmf_init_br"
00:25:08.934   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@166 -- # true
00:25:08.934   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:25:08.934  Cannot find device "nvmf_init_br2"
00:25:08.934   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@167 -- # true
00:25:08.934   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:25:08.934  Cannot find device "nvmf_tgt_br"
00:25:08.934   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@168 -- # true
00:25:08.934   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:25:08.934  Cannot find device "nvmf_tgt_br2"
00:25:08.934   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@169 -- # true
00:25:08.934   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:25:08.934  Cannot find device "nvmf_br"
00:25:08.934   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@170 -- # true
00:25:08.934   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:25:08.934  Cannot find device "nvmf_init_if"
00:25:08.934   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@171 -- # true
00:25:08.934   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:25:08.934  Cannot find device "nvmf_init_if2"
00:25:08.934   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@172 -- # true
00:25:08.934   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:25:08.934  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:25:08.934   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@173 -- # true
00:25:08.934   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:25:08.934  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:25:08.934   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@174 -- # true
00:25:08.934   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:25:08.934   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:25:08.934   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:25:08.934   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:25:08.934   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:25:08.934   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:25:08.934   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:25:08.934   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:25:08.934   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:25:08.934   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:25:08.934   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:25:08.934   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:25:08.934   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:25:08.934   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:25:08.934   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:25:08.934   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:25:08.934   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:25:09.193   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:25:09.193   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:25:09.193   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:25:09.193   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:25:09.193   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:25:09.193   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:25:09.193   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:25:09.193   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:25:09.193   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:25:09.193   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:25:09.193   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:25:09.193   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:25:09.193   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:25:09.193   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:25:09.193   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
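The topology brought up here is two initiator-side veth pairs and two target-side pairs (their peers moved into nvmf_tgt_ns_spdk), all joined on the nvmf_br bridge, with iptables opened for the NVMe/TCP port. Condensed from the trace above; the loop grouping is only for brevity, and the non-bridge ends plus lo inside the namespace are brought up as well:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up && ip link set "$dev" master nvmf_br
    done
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT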
00:25:09.193   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:25:09.193  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:25:09.193  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms
00:25:09.193  
00:25:09.193  --- 10.0.0.3 ping statistics ---
00:25:09.193  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:09.193  rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms
00:25:09.193   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:25:09.193  PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:25:09.193  64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms
00:25:09.193  
00:25:09.193  --- 10.0.0.4 ping statistics ---
00:25:09.193  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:09.193  rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms
00:25:09.193   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:25:09.193  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:25:09.193  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms
00:25:09.193  
00:25:09.193  --- 10.0.0.1 ping statistics ---
00:25:09.193  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:09.193  rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms
00:25:09.193   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:25:09.193  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:25:09.193  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms
00:25:09.193  
00:25:09.193  --- 10.0.0.2 ping statistics ---
00:25:09.193  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:09.193  rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms
00:25:09.193   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:25:09.193   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@461 -- # return 0
00:25:09.193   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:25:09.193   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:25:09.193   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:25:09.193   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:25:09.193   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:25:09.193   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:25:09.193   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:25:09.193   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1
00:25:09.193   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:25:09.193   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable
00:25:09.193   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:25:09.193  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:09.193   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=108024
00:25:09.193   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 108024
00:25:09.193   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
00:25:09.193   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 108024 ']'
00:25:09.193   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:09.193   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100
00:25:09.193   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:09.193   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable
00:25:09.193   19:09:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:25:09.193  [2024-12-13 19:09:40.961629] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:25:09.193  [2024-12-13 19:09:40.961902] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:25:09.452  [2024-12-13 19:09:41.118048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:09.452  [2024-12-13 19:09:41.155415] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:25:09.452  [2024-12-13 19:09:41.155737] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:25:09.452  [2024-12-13 19:09:41.155899] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:25:09.452  [2024-12-13 19:09:41.156131] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:25:09.452  [2024-12-13 19:09:41.156146] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:25:09.452  [2024-12-13 19:09:41.156610] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:25:09.710   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:25:09.710   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0
00:25:09.710   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:25:09.710   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable
00:25:09.710   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:25:09.710   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:25:09.710   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:25:09.710   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:09.710   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:25:09.710  [2024-12-13 19:09:41.340625] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:25:09.710   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:09.710   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512
00:25:09.710   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:09.710   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:25:09.710  null0
00:25:09.710   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:09.710   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine
00:25:09.710   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:09.710   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:25:09.710   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:09.710   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
00:25:09.710   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:09.710   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:25:09.710   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:09.710   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 5a8acf4d7ad24f8a8193c32db2a22870
00:25:09.710   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:09.710   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:25:09.710   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:09.710   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
00:25:09.710   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:09.710   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:25:09.710  [2024-12-13 19:09:41.380768] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:25:09.710   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:09.710   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
00:25:09.710   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:09.710   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:25:09.969  nvme0n1
00:25:09.969   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
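For async_init the target side is a 1024 MiB null bdev exported under the pre-computed NGUID, and the host side then attaches with bdev_nvme_attach_controller, which surfaces the remote namespace as the local bdev nvme0n1. The RPC sequence, as traced:

    rpc_cmd nvmf_create_transport -t tcp -o
    rpc_cmd bdev_null_create null0 1024 512
    rpc_cmd bdev_wait_for_examine
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g "$nguid"
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -f ipv4 -s 4420 \
            -n nqn.2016-06.io.spdk:cnode0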
00:25:09.969   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1
00:25:09.969   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:09.969   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:25:09.969  [
00:25:09.969  {
00:25:09.969  "aliases": [
00:25:09.969  "5a8acf4d-7ad2-4f8a-8193-c32db2a22870"
00:25:09.969  ],
00:25:09.969  "assigned_rate_limits": {
00:25:09.969  "r_mbytes_per_sec": 0,
00:25:09.969  "rw_ios_per_sec": 0,
00:25:09.969  "rw_mbytes_per_sec": 0,
00:25:09.969  "w_mbytes_per_sec": 0
00:25:09.969  },
00:25:09.969  "block_size": 512,
00:25:09.969  "claimed": false,
00:25:09.969  "driver_specific": {
00:25:09.969  "mp_policy": "active_passive",
00:25:09.969  "nvme": [
00:25:09.969  {
00:25:09.969  "ctrlr_data": {
00:25:09.969  "ana_reporting": false,
00:25:09.969  "cntlid": 1,
00:25:09.969  "firmware_revision": "25.01",
00:25:09.969  "model_number": "SPDK bdev Controller",
00:25:09.969  "multi_ctrlr": true,
00:25:09.969  "oacs": {
00:25:09.969  "firmware": 0,
00:25:09.969  "format": 0,
00:25:09.969  "ns_manage": 0,
00:25:09.969  "security": 0
00:25:09.969  },
00:25:09.969  "serial_number": "00000000000000000000",
00:25:09.969  "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:25:09.969  "vendor_id": "0x8086"
00:25:09.969  },
00:25:09.969  "ns_data": {
00:25:09.969  "can_share": true,
00:25:09.969  "id": 1
00:25:09.969  },
00:25:09.969  "trid": {
00:25:09.969  "adrfam": "IPv4",
00:25:09.969  "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:25:09.969  "traddr": "10.0.0.3",
00:25:09.969  "trsvcid": "4420",
00:25:09.969  "trtype": "TCP"
00:25:09.969  },
00:25:09.969  "vs": {
00:25:09.969  "nvme_version": "1.3"
00:25:09.969  }
00:25:09.969  }
00:25:09.969  ]
00:25:09.969  },
00:25:09.969  "memory_domains": [
00:25:09.969  {
00:25:09.969  "dma_device_id": "system",
00:25:09.969  "dma_device_type": 1
00:25:09.969  }
00:25:09.969  ],
00:25:09.969  "name": "nvme0n1",
00:25:09.969  "num_blocks": 2097152,
00:25:09.969  "numa_id": -1,
00:25:09.969  "product_name": "NVMe disk",
00:25:09.969  "supported_io_types": {
00:25:09.969  "abort": true,
00:25:09.969  "compare": true,
00:25:09.969  "compare_and_write": true,
00:25:09.969  "copy": true,
00:25:09.969  "flush": true,
00:25:09.969  "get_zone_info": false,
00:25:09.969  "nvme_admin": true,
00:25:09.969  "nvme_io": true,
00:25:09.969  "nvme_io_md": false,
00:25:09.969  "nvme_iov_md": false,
00:25:09.969  "read": true,
00:25:09.969  "reset": true,
00:25:09.969  "seek_data": false,
00:25:09.969  "seek_hole": false,
00:25:09.969  "unmap": false,
00:25:09.969  "write": true,
00:25:09.969  "write_zeroes": true,
00:25:09.969  "zcopy": false,
00:25:09.969  "zone_append": false,
00:25:09.969  "zone_management": false
00:25:09.969  },
00:25:09.969  "uuid": "5a8acf4d-7ad2-4f8a-8193-c32db2a22870",
00:25:09.969  "zoned": false
00:25:09.969  }
00:25:09.969  ]
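For reference, the attach-and-inspect sequence traced above maps onto SPDK's rpc.py client; a minimal sketch, assuming a target already listening on 10.0.0.3:4420 and the default /var/tmp/spdk.sock RPC socket:

    # Attach the remote NVMe/TCP controller as bdev "nvme0", then dump the resulting namespace bdev.
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
    ./scripts/rpc.py bdev_get_bdevs -b nvme0n1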
00:25:09.969   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:09.969   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0
00:25:09.970   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:09.970   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:25:09.970  [2024-12-13 19:09:41.645062] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:25:09.970  [2024-12-13 19:09:41.645449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x715720 (9): Bad file descriptor
00:25:09.970  [2024-12-13 19:09:41.777345] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful.
00:25:09.970   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:09.970   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1
00:25:09.970   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:09.970   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:25:10.229  [
00:25:10.229  {
00:25:10.229  "aliases": [
00:25:10.229  "5a8acf4d-7ad2-4f8a-8193-c32db2a22870"
00:25:10.229  ],
00:25:10.229  "assigned_rate_limits": {
00:25:10.229  "r_mbytes_per_sec": 0,
00:25:10.229  "rw_ios_per_sec": 0,
00:25:10.229  "rw_mbytes_per_sec": 0,
00:25:10.229  "w_mbytes_per_sec": 0
00:25:10.229  },
00:25:10.229  "block_size": 512,
00:25:10.229  "claimed": false,
00:25:10.229  "driver_specific": {
00:25:10.229  "mp_policy": "active_passive",
00:25:10.229  "nvme": [
00:25:10.229  {
00:25:10.229  "ctrlr_data": {
00:25:10.229  "ana_reporting": false,
00:25:10.229  "cntlid": 2,
00:25:10.229  "firmware_revision": "25.01",
00:25:10.229  "model_number": "SPDK bdev Controller",
00:25:10.229  "multi_ctrlr": true,
00:25:10.229  "oacs": {
00:25:10.229  "firmware": 0,
00:25:10.229  "format": 0,
00:25:10.229  "ns_manage": 0,
00:25:10.229  "security": 0
00:25:10.229  },
00:25:10.229  "serial_number": "00000000000000000000",
00:25:10.229  "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:25:10.229  "vendor_id": "0x8086"
00:25:10.229  },
00:25:10.229  "ns_data": {
00:25:10.229  "can_share": true,
00:25:10.229  "id": 1
00:25:10.229  },
00:25:10.229  "trid": {
00:25:10.229  "adrfam": "IPv4",
00:25:10.229  "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:25:10.229  "traddr": "10.0.0.3",
00:25:10.229  "trsvcid": "4420",
00:25:10.229  "trtype": "TCP"
00:25:10.229  },
00:25:10.229  "vs": {
00:25:10.229  "nvme_version": "1.3"
00:25:10.229  }
00:25:10.229  }
00:25:10.229  ]
00:25:10.229  },
00:25:10.229  "memory_domains": [
00:25:10.229  {
00:25:10.229  "dma_device_id": "system",
00:25:10.229  "dma_device_type": 1
00:25:10.229  }
00:25:10.229  ],
00:25:10.229  "name": "nvme0n1",
00:25:10.229  "num_blocks": 2097152,
00:25:10.229  "numa_id": -1,
00:25:10.229  "product_name": "NVMe disk",
00:25:10.229  "supported_io_types": {
00:25:10.229  "abort": true,
00:25:10.229  "compare": true,
00:25:10.229  "compare_and_write": true,
00:25:10.229  "copy": true,
00:25:10.229  "flush": true,
00:25:10.229  "get_zone_info": false,
00:25:10.229  "nvme_admin": true,
00:25:10.229  "nvme_io": true,
00:25:10.229  "nvme_io_md": false,
00:25:10.229  "nvme_iov_md": false,
00:25:10.229  "read": true,
00:25:10.229  "reset": true,
00:25:10.229  "seek_data": false,
00:25:10.229  "seek_hole": false,
00:25:10.229  "unmap": false,
00:25:10.229  "write": true,
00:25:10.229  "write_zeroes": true,
00:25:10.229  "zcopy": false,
00:25:10.229  "zone_append": false,
00:25:10.229  "zone_management": false
00:25:10.229  },
00:25:10.229  "uuid": "5a8acf4d-7ad2-4f8a-8193-c32db2a22870",
00:25:10.229  "zoned": false
00:25:10.229  }
00:25:10.229  ]
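Note that the reset above re-established the controller association (cntlid moved from 1 to 2) while the namespace UUID stayed the same. A sketch of issuing and checking the reset by hand, assuming jq is available:

    # Reset the attached controller, then read back the new controller ID from the bdev dump.
    ./scripts/rpc.py bdev_nvme_reset_controller nvme0
    ./scripts/rpc.py bdev_get_bdevs -b nvme0n1 | jq '.[0].driver_specific.nvme[0].ctrlr_data.cntlid'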
00:25:10.229   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:10.229   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:10.229   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:10.229   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:25:10.229   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:10.229    19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp
00:25:10.229   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.YojrPnQ0ep
00:25:10.229   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
00:25:10.229   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.YojrPnQ0ep
00:25:10.229   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.YojrPnQ0ep
00:25:10.229   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:10.229   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:25:10.229   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:10.229   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
00:25:10.229   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:10.229   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:25:10.229   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:10.229   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 --secure-channel
00:25:10.229   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:10.229   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:25:10.229  [2024-12-13 19:09:41.865200] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:25:10.229  [2024-12-13 19:09:41.865359] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 ***
00:25:10.229   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:10.229   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
00:25:10.229   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:10.229   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:25:10.229   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:10.229   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0
00:25:10.229   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:10.229   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:25:10.229  [2024-12-13 19:09:41.885191] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:25:10.229  nvme0n1
00:25:10.229   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:10.229   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1
00:25:10.229   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:10.229   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:25:10.229  [
00:25:10.229  {
00:25:10.229  "aliases": [
00:25:10.229  "5a8acf4d-7ad2-4f8a-8193-c32db2a22870"
00:25:10.229  ],
00:25:10.229  "assigned_rate_limits": {
00:25:10.229  "r_mbytes_per_sec": 0,
00:25:10.229  "rw_ios_per_sec": 0,
00:25:10.229  "rw_mbytes_per_sec": 0,
00:25:10.229  "w_mbytes_per_sec": 0
00:25:10.229  },
00:25:10.229  "block_size": 512,
00:25:10.229  "claimed": false,
00:25:10.229  "driver_specific": {
00:25:10.229  "mp_policy": "active_passive",
00:25:10.229  "nvme": [
00:25:10.229  {
00:25:10.229  "ctrlr_data": {
00:25:10.229  "ana_reporting": false,
00:25:10.229  "cntlid": 3,
00:25:10.229  "firmware_revision": "25.01",
00:25:10.229  "model_number": "SPDK bdev Controller",
00:25:10.229  "multi_ctrlr": true,
00:25:10.229  "oacs": {
00:25:10.229  "firmware": 0,
00:25:10.229  "format": 0,
00:25:10.229  "ns_manage": 0,
00:25:10.229  "security": 0
00:25:10.229  },
00:25:10.229  "serial_number": "00000000000000000000",
00:25:10.229  "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:25:10.229  "vendor_id": "0x8086"
00:25:10.229  },
00:25:10.229  "ns_data": {
00:25:10.229  "can_share": true,
00:25:10.229  "id": 1
00:25:10.229  },
00:25:10.229  "trid": {
00:25:10.229  "adrfam": "IPv4",
00:25:10.229  "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:25:10.229  "traddr": "10.0.0.3",
00:25:10.229  "trsvcid": "4421",
00:25:10.229  "trtype": "TCP"
00:25:10.229  },
00:25:10.229  "vs": {
00:25:10.229  "nvme_version": "1.3"
00:25:10.229  }
00:25:10.229  }
00:25:10.229  ]
00:25:10.229  },
00:25:10.229  "memory_domains": [
00:25:10.229  {
00:25:10.229  "dma_device_id": "system",
00:25:10.229  "dma_device_type": 1
00:25:10.229  }
00:25:10.229  ],
00:25:10.229  "name": "nvme0n1",
00:25:10.229  "num_blocks": 2097152,
00:25:10.229  "numa_id": -1,
00:25:10.229  "product_name": "NVMe disk",
00:25:10.229  "supported_io_types": {
00:25:10.229  "abort": true,
00:25:10.229  "compare": true,
00:25:10.229  "compare_and_write": true,
00:25:10.229  "copy": true,
00:25:10.229  "flush": true,
00:25:10.229  "get_zone_info": false,
00:25:10.229  "nvme_admin": true,
00:25:10.229  "nvme_io": true,
00:25:10.230  "nvme_io_md": false,
00:25:10.230  "nvme_iov_md": false,
00:25:10.230  "read": true,
00:25:10.230  "reset": true,
00:25:10.230  "seek_data": false,
00:25:10.230  "seek_hole": false,
00:25:10.230  "unmap": false,
00:25:10.230  "write": true,
00:25:10.230  "write_zeroes": true,
00:25:10.230  "zcopy": false,
00:25:10.230  "zone_append": false,
00:25:10.230  "zone_management": false
00:25:10.230  },
00:25:10.230  "uuid": "5a8acf4d-7ad2-4f8a-8193-c32db2a22870",
00:25:10.230  "zoned": false
00:25:10.230  }
00:25:10.230  ]
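The secure-channel steps traced above amount to registering a PSK with the keyring and referencing it on both the subsystem and the initiator side; a rough sketch using the same RPCs (key material elided, paths assumed):

    # Register an interchange-format NVMe/TCP PSK and require it for host1 on the TLS listener (port 4421).
    key_path=$(mktemp)
    echo -n "NVMeTLSkey-1:01:..." > "$key_path"   # PSK value elided here
    chmod 0600 "$key_path"
    ./scripts/rpc.py keyring_file_add_key key0 "$key_path"
    ./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 --secure-channel
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0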
00:25:10.230   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:10.230   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:10.230   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:10.230   19:09:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:25:10.230   19:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:10.230   19:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.YojrPnQ0ep
00:25:10.230   19:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT
00:25:10.230   19:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini
00:25:10.230   19:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup
00:25:10.230   19:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync
00:25:10.489   19:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:25:10.489   19:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e
00:25:10.489   19:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20}
00:25:10.489   19:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:25:10.489  rmmod nvme_tcp
00:25:10.489  rmmod nvme_fabrics
00:25:10.489  rmmod nvme_keyring
00:25:10.489   19:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:25:10.489   19:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e
00:25:10.489   19:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0
00:25:10.489   19:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 108024 ']'
00:25:10.489   19:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 108024
00:25:10.489   19:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 108024 ']'
00:25:10.489   19:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 108024
00:25:10.489    19:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname
00:25:10.489   19:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:10.489    19:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108024
00:25:10.489  killing process with pid 108024
00:25:10.489   19:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:25:10.489   19:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:25:10.489   19:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 108024'
00:25:10.489   19:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 108024
00:25:10.489   19:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 108024
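The killprocess helper above checks that the PID is alive and is an SPDK reactor before terminating it; roughly equivalent by hand (pid taken from this run; wait only applies when the target is a child of the current shell):

    pid=108024
    ps --no-headers -o comm= "$pid"      # expect reactor_0 for an SPDK app
    kill "$pid" && wait "$pid" 2>/dev/null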
00:25:10.748   19:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:25:10.748   19:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:25:10.748   19:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:25:10.748   19:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr
00:25:10.748   19:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save
00:25:10.748   19:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:25:10.748   19:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore
00:25:10.748   19:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:25:10.748   19:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:25:10.748   19:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:25:10.748   19:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:25:10.748   19:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:25:10.748   19:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:25:10.748   19:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:25:10.748   19:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:25:10.748   19:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:25:10.748   19:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:25:10.748   19:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:25:10.748   19:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:25:10.748   19:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:25:10.748   19:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:25:10.748   19:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:25:10.748   19:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@246 -- # remove_spdk_ns
00:25:10.748   19:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:10.748   19:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:25:10.748    19:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:11.007   19:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@300 -- # return 0
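nvmftestfini's veth teardown mirrors the setup done by nvmf_veth_init; a condensed sketch of the cleanup path traced above (interface and namespace names from this run):

    # Drop SPDK-tagged iptables rules, tear down the bridge/veth pairs, and remove the target netns.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    for br in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$br" nomaster
        ip link set "$br" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk   # assumed equivalent of the harness's remove_spdk_ns step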
00:25:11.007  
00:25:11.007  real	0m2.323s
00:25:11.007  user	0m1.773s
00:25:11.007  sys	0m0.693s
00:25:11.007   19:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:25:11.007  ************************************
00:25:11.007  END TEST nvmf_async_init
00:25:11.007  ************************************
00:25:11.007   19:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:25:11.007   19:09:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp
00:25:11.007   19:09:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:25:11.007   19:09:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:25:11.007   19:09:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:25:11.007  ************************************
00:25:11.007  START TEST dma
00:25:11.007  ************************************
00:25:11.007   19:09:42 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp
00:25:11.007  * Looking for test storage...
00:25:11.007  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
00:25:11.007    19:09:42 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:25:11.007     19:09:42 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version
00:25:11.007     19:09:42 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:25:11.007    19:09:42 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:25:11.007    19:09:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:25:11.007    19:09:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l
00:25:11.007    19:09:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l
00:25:11.007    19:09:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-:
00:25:11.007    19:09:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1
00:25:11.007    19:09:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-:
00:25:11.007    19:09:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2
00:25:11.007    19:09:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<'
00:25:11.007    19:09:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2
00:25:11.007    19:09:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1
00:25:11.007    19:09:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:25:11.007    19:09:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in
00:25:11.007    19:09:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1
00:25:11.007    19:09:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 ))
00:25:11.008    19:09:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:25:11.008     19:09:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1
00:25:11.008     19:09:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1
00:25:11.008     19:09:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:25:11.008     19:09:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1
00:25:11.008    19:09:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1
00:25:11.008     19:09:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2
00:25:11.008     19:09:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2
00:25:11.008     19:09:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:25:11.008     19:09:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2
00:25:11.008    19:09:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2
00:25:11.008    19:09:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:25:11.008    19:09:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:25:11.008    19:09:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0
00:25:11.008    19:09:42 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:25:11.008    19:09:42 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:25:11.008  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:11.008  		--rc genhtml_branch_coverage=1
00:25:11.008  		--rc genhtml_function_coverage=1
00:25:11.008  		--rc genhtml_legend=1
00:25:11.008  		--rc geninfo_all_blocks=1
00:25:11.008  		--rc geninfo_unexecuted_blocks=1
00:25:11.008  		
00:25:11.008  		'
00:25:11.008    19:09:42 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:25:11.008  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:11.008  		--rc genhtml_branch_coverage=1
00:25:11.008  		--rc genhtml_function_coverage=1
00:25:11.008  		--rc genhtml_legend=1
00:25:11.008  		--rc geninfo_all_blocks=1
00:25:11.008  		--rc geninfo_unexecuted_blocks=1
00:25:11.008  		
00:25:11.008  		'
00:25:11.008    19:09:42 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:25:11.008  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:11.008  		--rc genhtml_branch_coverage=1
00:25:11.008  		--rc genhtml_function_coverage=1
00:25:11.008  		--rc genhtml_legend=1
00:25:11.008  		--rc geninfo_all_blocks=1
00:25:11.008  		--rc geninfo_unexecuted_blocks=1
00:25:11.008  		
00:25:11.008  		'
00:25:11.008    19:09:42 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:25:11.008  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:11.008  		--rc genhtml_branch_coverage=1
00:25:11.008  		--rc genhtml_function_coverage=1
00:25:11.008  		--rc genhtml_legend=1
00:25:11.008  		--rc geninfo_all_blocks=1
00:25:11.008  		--rc geninfo_unexecuted_blocks=1
00:25:11.008  		
00:25:11.008  		'
00:25:11.008   19:09:42 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:25:11.008     19:09:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s
00:25:11.008    19:09:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:25:11.008    19:09:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:25:11.008    19:09:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:25:11.008    19:09:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:25:11.008    19:09:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:25:11.008    19:09:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:25:11.008    19:09:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:25:11.008    19:09:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:25:11.008    19:09:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:25:11.008     19:09:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:25:11.267    19:09:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:25:11.267    19:09:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:25:11.267    19:09:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:25:11.267    19:09:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:25:11.267    19:09:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:25:11.267    19:09:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:25:11.267    19:09:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:25:11.267     19:09:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob
00:25:11.267     19:09:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:25:11.267     19:09:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:25:11.267     19:09:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:25:11.267      19:09:42 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:11.267      19:09:42 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:11.267      19:09:42 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:11.267      19:09:42 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH
00:25:11.267      19:09:42 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:11.267    19:09:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0
00:25:11.267    19:09:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:25:11.267    19:09:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:25:11.267    19:09:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:25:11.267    19:09:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:25:11.267    19:09:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:25:11.267    19:09:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:25:11.267  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:25:11.267    19:09:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:25:11.267    19:09:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:25:11.267    19:09:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0
00:25:11.268   19:09:42 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']'
00:25:11.268   19:09:42 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0
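host/dma.sh is RDMA-only, so with --transport=tcp the guard above short-circuits and the sub-test passes as a no-op. The guard, as inferred from the trace (variable name assumed):

    # dma.sh bails out early for non-RDMA transports.
    if [ "$TEST_TRANSPORT" != "rdma" ]; then
        exit 0
    fi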
00:25:11.268  
00:25:11.268  real	0m0.219s
00:25:11.268  user	0m0.131s
00:25:11.268  sys	0m0.092s
00:25:11.268  ************************************
00:25:11.268  END TEST dma
00:25:11.268  ************************************
00:25:11.268   19:09:42 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable
00:25:11.268   19:09:42 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x
00:25:11.268   19:09:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp
00:25:11.268   19:09:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:25:11.268   19:09:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:25:11.268   19:09:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:25:11.268  ************************************
00:25:11.268  START TEST nvmf_identify
00:25:11.268  ************************************
00:25:11.268   19:09:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp
00:25:11.268  * Looking for test storage...
00:25:11.268  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
00:25:11.268    19:09:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:25:11.268     19:09:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version
00:25:11.268     19:09:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:25:11.268    19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:25:11.268    19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:25:11.268    19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l
00:25:11.268    19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l
00:25:11.268    19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-:
00:25:11.268    19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1
00:25:11.268    19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-:
00:25:11.268    19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2
00:25:11.268    19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<'
00:25:11.268    19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2
00:25:11.268    19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1
00:25:11.268    19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:25:11.268    19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in
00:25:11.268    19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1
00:25:11.268    19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 ))
00:25:11.268    19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:25:11.268     19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1
00:25:11.268     19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1
00:25:11.268     19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:25:11.268     19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1
00:25:11.268    19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1
00:25:11.268     19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2
00:25:11.268     19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2
00:25:11.268     19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:25:11.268     19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2
00:25:11.528    19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2
00:25:11.528    19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:25:11.528    19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:25:11.528    19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0
00:25:11.528    19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:25:11.528    19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:25:11.528  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:11.528  		--rc genhtml_branch_coverage=1
00:25:11.528  		--rc genhtml_function_coverage=1
00:25:11.528  		--rc genhtml_legend=1
00:25:11.528  		--rc geninfo_all_blocks=1
00:25:11.528  		--rc geninfo_unexecuted_blocks=1
00:25:11.528  		
00:25:11.528  		'
00:25:11.528    19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:25:11.528  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:11.528  		--rc genhtml_branch_coverage=1
00:25:11.528  		--rc genhtml_function_coverage=1
00:25:11.528  		--rc genhtml_legend=1
00:25:11.528  		--rc geninfo_all_blocks=1
00:25:11.528  		--rc geninfo_unexecuted_blocks=1
00:25:11.528  		
00:25:11.528  		'
00:25:11.528    19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:25:11.528  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:11.528  		--rc genhtml_branch_coverage=1
00:25:11.528  		--rc genhtml_function_coverage=1
00:25:11.528  		--rc genhtml_legend=1
00:25:11.528  		--rc geninfo_all_blocks=1
00:25:11.528  		--rc geninfo_unexecuted_blocks=1
00:25:11.528  		
00:25:11.528  		'
00:25:11.528    19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:25:11.528  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:11.528  		--rc genhtml_branch_coverage=1
00:25:11.528  		--rc genhtml_function_coverage=1
00:25:11.528  		--rc genhtml_legend=1
00:25:11.528  		--rc geninfo_all_blocks=1
00:25:11.528  		--rc geninfo_unexecuted_blocks=1
00:25:11.528  		
00:25:11.528  		'
00:25:11.528   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:25:11.528     19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s
00:25:11.528    19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:25:11.528    19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:25:11.528    19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:25:11.528    19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:25:11.528    19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:25:11.528    19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:25:11.528    19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:25:11.528    19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:25:11.528    19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:25:11.528     19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:25:11.528    19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:25:11.528    19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:25:11.528    19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:25:11.528    19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:25:11.528    19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:25:11.528    19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:25:11.528    19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:25:11.528     19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob
00:25:11.528     19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:25:11.528     19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:25:11.528     19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:25:11.528      19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:11.528      19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:11.528      19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:11.528      19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH
00:25:11.528      19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:11.528    19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0
00:25:11.528    19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:25:11.528    19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:25:11.528    19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:25:11.528    19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:25:11.528    19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:25:11.528    19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:25:11.528  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:25:11.528    19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:25:11.528    19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:25:11.528    19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0
00:25:11.528   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64
00:25:11.528   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:25:11.528   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit
00:25:11.528   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:25:11.528   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:25:11.529   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs
00:25:11.529   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no
00:25:11.529   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns
00:25:11.529   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:11.529   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:25:11.529    19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:11.529   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:25:11.529   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:25:11.529   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:25:11.529   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:25:11.529   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:25:11.529   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@460 -- # nvmf_veth_init
00:25:11.529   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:25:11.529   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:25:11.529   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:25:11.529   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:25:11.529   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:25:11.529   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:25:11.529   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:25:11.529   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:25:11.529   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:25:11.529   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:25:11.529   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:25:11.529   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:25:11.529   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:25:11.529   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:25:11.529   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:25:11.529   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:25:11.529   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:25:11.529  Cannot find device "nvmf_init_br"
00:25:11.529   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true
00:25:11.529   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:25:11.529  Cannot find device "nvmf_init_br2"
00:25:11.529   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true
00:25:11.529   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:25:11.529  Cannot find device "nvmf_tgt_br"
00:25:11.529   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true
00:25:11.529   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:25:11.529  Cannot find device "nvmf_tgt_br2"
00:25:11.529   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true
00:25:11.529   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:25:11.529  Cannot find device "nvmf_init_br"
00:25:11.529   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true
00:25:11.529   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:25:11.529  Cannot find device "nvmf_init_br2"
00:25:11.529   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true
00:25:11.529   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:25:11.529  Cannot find device "nvmf_tgt_br"
00:25:11.529   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true
00:25:11.529   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:25:11.529  Cannot find device "nvmf_tgt_br2"
00:25:11.529   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true
00:25:11.529   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:25:11.529  Cannot find device "nvmf_br"
00:25:11.529   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true
00:25:11.529   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:25:11.529  Cannot find device "nvmf_init_if"
00:25:11.529   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true
00:25:11.529   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:25:11.529  Cannot find device "nvmf_init_if2"
00:25:11.529   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true
00:25:11.529   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:25:11.529  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:25:11.529   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true
00:25:11.529   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:25:11.529  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:25:11.529   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true
00:25:11.529   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:25:11.529   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:25:11.529   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:25:11.529   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:25:11.529   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:25:11.529   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:25:11.529   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:25:11.529   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:25:11.529   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:25:11.529   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:25:11.529   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:25:11.788   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:25:11.788   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:25:11.788   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:25:11.788   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:25:11.788   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:25:11.788   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:25:11.788   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:25:11.788   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:25:11.788   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:25:11.788   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:25:11.788   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:25:11.788   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:25:11.788   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:25:11.788   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:25:11.788   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:25:11.788   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:25:11.788   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:25:11.788   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:25:11.788   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:25:11.789   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:25:11.789   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:25:11.789   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:25:11.789  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:25:11.789  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms
00:25:11.789  
00:25:11.789  --- 10.0.0.3 ping statistics ---
00:25:11.789  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:11.789  rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms
00:25:11.789   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:25:11.789  PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:25:11.789  64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms
00:25:11.789  
00:25:11.789  --- 10.0.0.4 ping statistics ---
00:25:11.789  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:11.789  rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms
00:25:11.789   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:25:11.789  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:25:11.789  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms
00:25:11.789  
00:25:11.789  --- 10.0.0.1 ping statistics ---
00:25:11.789  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:11.789  rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms
00:25:11.789   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:25:11.789  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:25:11.789  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms
00:25:11.789  
00:25:11.789  --- 10.0.0.2 ping statistics ---
00:25:11.789  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:11.789  rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms
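The four single-packet pings above verify reachability in both directions before any NVMe traffic is attempted; condensed, the check amounts to:

  # host -> addresses owned by the target namespace
  for ip in 10.0.0.3 10.0.0.4; do ping -c 1 "$ip"; done
  # target namespace -> addresses owned by the host
  for ip in 10.0.0.1 10.0.0.2; do ip netns exec nvmf_tgt_ns_spdk ping -c 1 "$ip"; done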
00:25:11.789   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:25:11.789   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@461 -- # return 0
00:25:11.789   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:25:11.789   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:25:11.789   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:25:11.789   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:25:11.789   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:25:11.789   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:25:11.789   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:25:11.789   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt
00:25:11.789   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable
00:25:11.789   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:25:11.789  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:11.789   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=108334
00:25:11.789   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:25:11.789   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:25:11.789   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 108334
00:25:11.789   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 108334 ']'
00:25:11.789   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:11.789   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100
00:25:11.789   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:11.789   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable
00:25:11.789   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:25:11.789  [2024-12-13 19:09:43.600855] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:25:11.789  [2024-12-13 19:09:43.601123] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:25:12.048  [2024-12-13 19:09:43.757514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:25:12.048  [2024-12-13 19:09:43.797743] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:25:12.048  [2024-12-13 19:09:43.798046] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:25:12.048  [2024-12-13 19:09:43.798271] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:25:12.048  [2024-12-13 19:09:43.798292] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:25:12.048  [2024-12-13 19:09:43.798302] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:25:12.048  [2024-12-13 19:09:43.799527] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:25:12.048  [2024-12-13 19:09:43.799778] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:25:12.048  [2024-12-13 19:09:43.800357] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:25:12.048  [2024-12-13 19:09:43.800368] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
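host/identify.sh launches the target inside the namespace (identify.sh:18 in the trace) and blocks in waitforlisten until the app's RPC socket exists. A stripped-down sketch of that step, assuming the repo path shown in the trace; the real waitforlisten also bounds the number of retries and relies on the cleanup trap installed above:

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # core mask 0xF explains the four reactors reported on cores 0-3;
  # -e 0xFFFF is the tracepoint group mask noted by app_setup_trace
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # wait for the RPC socket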
00:25:12.307   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:25:12.307   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0
00:25:12.307   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:25:12.307   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:12.307   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:25:12.307  [2024-12-13 19:09:43.944411] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:25:12.307   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:12.307   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt
00:25:12.307   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable
00:25:12.307   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:25:12.307   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:25:12.307   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:12.307   19:09:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:25:12.307  Malloc0
00:25:12.307   19:09:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:12.307   19:09:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:25:12.307   19:09:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:12.307   19:09:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:25:12.307   19:09:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:12.307   19:09:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
00:25:12.307   19:09:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:12.307   19:09:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:25:12.307   19:09:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:12.307   19:09:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:25:12.307   19:09:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:12.307   19:09:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:25:12.307  [2024-12-13 19:09:44.053784] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:25:12.307   19:09:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:12.307   19:09:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
00:25:12.307   19:09:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:12.307   19:09:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:25:12.307   19:09:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:12.307   19:09:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems
00:25:12.307   19:09:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:12.307   19:09:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:25:12.307  [
00:25:12.307  {
00:25:12.307  "allow_any_host": true,
00:25:12.307  "hosts": [],
00:25:12.307  "listen_addresses": [
00:25:12.307  {
00:25:12.307  "adrfam": "IPv4",
00:25:12.307  "traddr": "10.0.0.3",
00:25:12.307  "trsvcid": "4420",
00:25:12.307  "trtype": "TCP"
00:25:12.307  }
00:25:12.307  ],
00:25:12.307  "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:25:12.307  "subtype": "Discovery"
00:25:12.307  },
00:25:12.307  {
00:25:12.307  "allow_any_host": true,
00:25:12.307  "hosts": [],
00:25:12.307  "listen_addresses": [
00:25:12.307  {
00:25:12.307  "adrfam": "IPv4",
00:25:12.307  "traddr": "10.0.0.3",
00:25:12.307  "trsvcid": "4420",
00:25:12.307  "trtype": "TCP"
00:25:12.307  }
00:25:12.307  ],
00:25:12.307  "max_cntlid": 65519,
00:25:12.307  "max_namespaces": 32,
00:25:12.307  "min_cntlid": 1,
00:25:12.307  "model_number": "SPDK bdev Controller",
00:25:12.307  "namespaces": [
00:25:12.307  {
00:25:12.307  "bdev_name": "Malloc0",
00:25:12.307  "eui64": "ABCDEF0123456789",
00:25:12.307  "name": "Malloc0",
00:25:12.307  "nguid": "ABCDEF0123456789ABCDEF0123456789",
00:25:12.307  "nsid": 1,
00:25:12.307  "uuid": "3dbfc942-3f85-4662-96a5-2b62e0a44ed3"
00:25:12.307  }
00:25:12.307  ],
00:25:12.307  "nqn": "nqn.2016-06.io.spdk:cnode1",
00:25:12.307  "serial_number": "SPDK00000000000001",
00:25:12.307  "subtype": "NVMe"
00:25:12.307  }
00:25:12.307  ]
00:25:12.307   19:09:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
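Everything in the JSON dump above was configured through rpc_cmd, which in the autotest harness forwards to SPDK's scripts/rpc.py over /var/tmp/spdk.sock (the rpc.py path below is assumed from the repo layout in the trace; every argument is copied from the rpc_cmd lines above):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
  $rpc nvmf_get_subsystems    # prints the two subsystems listed above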
00:25:12.307   19:09:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r '        trtype:tcp         adrfam:IPv4         traddr:10.0.0.3         trsvcid:4420         subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all
00:25:12.307  [2024-12-13 19:09:44.114557] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:25:12.307  [2024-12-13 19:09:44.114772] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108375 ]
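The debug trace that follows is spdk_nvme_identify connecting to the discovery subsystem (FABRIC CONNECT, property reads of VS/CAP/CC/CSTS, IDENTIFY, GET LOG PAGE) before printing the report further below. The same discovery data can also be fetched from the kernel initiator side with nvme-cli, which this test does not use; shown only as a hedged equivalent, assuming the nvme-tcp module loaded earlier:

  nvme discover -t tcp -a 10.0.0.3 -s 4420
  # or rerun the in-tree tool exactly as the test does:
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all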
00:25:12.569  [2024-12-13 19:09:44.274108] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout)
00:25:12.569  [2024-12-13 19:09:44.274183] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2
00:25:12.569  [2024-12-13 19:09:44.274189] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420
00:25:12.569  [2024-12-13 19:09:44.274200] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null)
00:25:12.569  [2024-12-13 19:09:44.274209] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix
00:25:12.569  [2024-12-13 19:09:44.274553] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout)
00:25:12.569  [2024-12-13 19:09:44.274625] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x8859b0 0
00:25:12.569  [2024-12-13 19:09:44.281241] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1
00:25:12.569  [2024-12-13 19:09:44.281268] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1
00:25:12.569  [2024-12-13 19:09:44.281290] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0
00:25:12.569  [2024-12-13 19:09:44.281294] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0
00:25:12.569  [2024-12-13 19:09:44.281325] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.569  [2024-12-13 19:09:44.281332] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.569  [2024-12-13 19:09:44.281337] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8859b0)
00:25:12.569  [2024-12-13 19:09:44.281350] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:25:12.569  [2024-12-13 19:09:44.281382] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8cbc00, cid 0, qid 0
00:25:12.569  [2024-12-13 19:09:44.291282] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.569  [2024-12-13 19:09:44.291299] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.569  [2024-12-13 19:09:44.291304] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.569  [2024-12-13 19:09:44.291308] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8cbc00) on tqpair=0x8859b0
00:25:12.569  [2024-12-13 19:09:44.291321] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001
00:25:12.569  [2024-12-13 19:09:44.291329] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout)
00:25:12.569  [2024-12-13 19:09:44.291334] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout)
00:25:12.569  [2024-12-13 19:09:44.291349] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.569  [2024-12-13 19:09:44.291354] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.569  [2024-12-13 19:09:44.291358] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8859b0)
00:25:12.569  [2024-12-13 19:09:44.291366] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.569  [2024-12-13 19:09:44.291392] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8cbc00, cid 0, qid 0
00:25:12.569  [2024-12-13 19:09:44.291457] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.569  [2024-12-13 19:09:44.291464] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.569  [2024-12-13 19:09:44.291467] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.569  [2024-12-13 19:09:44.291471] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8cbc00) on tqpair=0x8859b0
00:25:12.569  [2024-12-13 19:09:44.291492] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout)
00:25:12.569  [2024-12-13 19:09:44.291499] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout)
00:25:12.569  [2024-12-13 19:09:44.291523] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.569  [2024-12-13 19:09:44.291527] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.569  [2024-12-13 19:09:44.291531] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8859b0)
00:25:12.569  [2024-12-13 19:09:44.291538] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.569  [2024-12-13 19:09:44.291557] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8cbc00, cid 0, qid 0
00:25:12.569  [2024-12-13 19:09:44.291606] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.569  [2024-12-13 19:09:44.291613] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.569  [2024-12-13 19:09:44.291617] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.569  [2024-12-13 19:09:44.291621] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8cbc00) on tqpair=0x8859b0
00:25:12.569  [2024-12-13 19:09:44.291627] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout)
00:25:12.569  [2024-12-13 19:09:44.291635] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms)
00:25:12.569  [2024-12-13 19:09:44.291642] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.569  [2024-12-13 19:09:44.291646] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.569  [2024-12-13 19:09:44.291650] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8859b0)
00:25:12.569  [2024-12-13 19:09:44.291657] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.569  [2024-12-13 19:09:44.291674] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8cbc00, cid 0, qid 0
00:25:12.569  [2024-12-13 19:09:44.291722] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.569  [2024-12-13 19:09:44.291728] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.569  [2024-12-13 19:09:44.291732] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.569  [2024-12-13 19:09:44.291736] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8cbc00) on tqpair=0x8859b0
00:25:12.569  [2024-12-13 19:09:44.291742] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:25:12.569  [2024-12-13 19:09:44.291751] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.569  [2024-12-13 19:09:44.291756] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.569  [2024-12-13 19:09:44.291760] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8859b0)
00:25:12.569  [2024-12-13 19:09:44.291767] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.569  [2024-12-13 19:09:44.291783] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8cbc00, cid 0, qid 0
00:25:12.569  [2024-12-13 19:09:44.291833] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.569  [2024-12-13 19:09:44.291840] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.569  [2024-12-13 19:09:44.291843] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.569  [2024-12-13 19:09:44.291847] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8cbc00) on tqpair=0x8859b0
00:25:12.569  [2024-12-13 19:09:44.291852] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0
00:25:12.569  [2024-12-13 19:09:44.291858] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms)
00:25:12.569  [2024-12-13 19:09:44.291866] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:25:12.569  [2024-12-13 19:09:44.291976] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1
00:25:12.569  [2024-12-13 19:09:44.291982] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
00:25:12.569  [2024-12-13 19:09:44.291991] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.569  [2024-12-13 19:09:44.291996] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.569  [2024-12-13 19:09:44.292000] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8859b0)
00:25:12.569  [2024-12-13 19:09:44.292007] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.569  [2024-12-13 19:09:44.292025] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8cbc00, cid 0, qid 0
00:25:12.569  [2024-12-13 19:09:44.292080] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.569  [2024-12-13 19:09:44.292087] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.569  [2024-12-13 19:09:44.292090] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.569  [2024-12-13 19:09:44.292095] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8cbc00) on tqpair=0x8859b0
00:25:12.569  [2024-12-13 19:09:44.292100] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:25:12.569  [2024-12-13 19:09:44.292110] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.569  [2024-12-13 19:09:44.292114] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.569  [2024-12-13 19:09:44.292118] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8859b0)
00:25:12.569  [2024-12-13 19:09:44.292125] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.569  [2024-12-13 19:09:44.292142] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8cbc00, cid 0, qid 0
00:25:12.569  [2024-12-13 19:09:44.292193] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.569  [2024-12-13 19:09:44.292200] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.569  [2024-12-13 19:09:44.292204] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.569  [2024-12-13 19:09:44.292208] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8cbc00) on tqpair=0x8859b0
00:25:12.569  [2024-12-13 19:09:44.292213] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:25:12.569  [2024-12-13 19:09:44.292218] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms)
00:25:12.569  [2024-12-13 19:09:44.292226] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout)
00:25:12.569  [2024-12-13 19:09:44.292235] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms)
00:25:12.569  [2024-12-13 19:09:44.292245] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.569  [2024-12-13 19:09:44.292261] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8859b0)
00:25:12.569  [2024-12-13 19:09:44.292271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.570  [2024-12-13 19:09:44.292291] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8cbc00, cid 0, qid 0
00:25:12.570  [2024-12-13 19:09:44.292389] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:25:12.570  [2024-12-13 19:09:44.292397] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:25:12.570  [2024-12-13 19:09:44.292401] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:25:12.570  [2024-12-13 19:09:44.292405] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8859b0): datao=0, datal=4096, cccid=0
00:25:12.570  [2024-12-13 19:09:44.292410] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8cbc00) on tqpair(0x8859b0): expected_datao=0, payload_size=4096
00:25:12.570  [2024-12-13 19:09:44.292415] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.570  [2024-12-13 19:09:44.292423] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:25:12.570  [2024-12-13 19:09:44.292428] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:25:12.570  [2024-12-13 19:09:44.292437] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.570  [2024-12-13 19:09:44.292443] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.570  [2024-12-13 19:09:44.292446] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.570  [2024-12-13 19:09:44.292450] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8cbc00) on tqpair=0x8859b0
00:25:12.570  [2024-12-13 19:09:44.292459] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295
00:25:12.570  [2024-12-13 19:09:44.292464] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072
00:25:12.570  [2024-12-13 19:09:44.292468] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001
00:25:12.570  [2024-12-13 19:09:44.292474] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16
00:25:12.570  [2024-12-13 19:09:44.292478] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1
00:25:12.570  [2024-12-13 19:09:44.292483] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms)
00:25:12.570  [2024-12-13 19:09:44.292497] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms)
00:25:12.570  [2024-12-13 19:09:44.292508] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.570  [2024-12-13 19:09:44.292512] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.570  [2024-12-13 19:09:44.292516] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8859b0)
00:25:12.570  [2024-12-13 19:09:44.292523] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0
00:25:12.570  [2024-12-13 19:09:44.292544] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8cbc00, cid 0, qid 0
00:25:12.570  [2024-12-13 19:09:44.292606] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.570  [2024-12-13 19:09:44.292613] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.570  [2024-12-13 19:09:44.292616] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.570  [2024-12-13 19:09:44.292620] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8cbc00) on tqpair=0x8859b0
00:25:12.570  [2024-12-13 19:09:44.292628] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.570  [2024-12-13 19:09:44.292632] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.570  [2024-12-13 19:09:44.292636] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8859b0)
00:25:12.570  [2024-12-13 19:09:44.292643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:25:12.570  [2024-12-13 19:09:44.292649] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.570  [2024-12-13 19:09:44.292653] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.570  [2024-12-13 19:09:44.292657] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x8859b0)
00:25:12.570  [2024-12-13 19:09:44.292663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:25:12.570  [2024-12-13 19:09:44.292669] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.570  [2024-12-13 19:09:44.292672] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.570  [2024-12-13 19:09:44.292676] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x8859b0)
00:25:12.570  [2024-12-13 19:09:44.292682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:25:12.570  [2024-12-13 19:09:44.292688] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.570  [2024-12-13 19:09:44.292691] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.570  [2024-12-13 19:09:44.292695] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8859b0)
00:25:12.570  [2024-12-13 19:09:44.292701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:25:12.570  [2024-12-13 19:09:44.292706] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms)
00:25:12.570  [2024-12-13 19:09:44.292718] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:25:12.570  [2024-12-13 19:09:44.292725] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.570  [2024-12-13 19:09:44.292729] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8859b0)
00:25:12.570  [2024-12-13 19:09:44.292736] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.570  [2024-12-13 19:09:44.292755] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8cbc00, cid 0, qid 0
00:25:12.570  [2024-12-13 19:09:44.292762] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8cbd80, cid 1, qid 0
00:25:12.570  [2024-12-13 19:09:44.292767] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8cbf00, cid 2, qid 0
00:25:12.570  [2024-12-13 19:09:44.292772] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8cc080, cid 3, qid 0
00:25:12.570  [2024-12-13 19:09:44.292776] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8cc200, cid 4, qid 0
00:25:12.570  [2024-12-13 19:09:44.292866] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.570  [2024-12-13 19:09:44.292873] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.570  [2024-12-13 19:09:44.292877] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.570  [2024-12-13 19:09:44.292881] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8cc200) on tqpair=0x8859b0
00:25:12.570  [2024-12-13 19:09:44.292886] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us
00:25:12.570  [2024-12-13 19:09:44.292891] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout)
00:25:12.570  [2024-12-13 19:09:44.292902] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.570  [2024-12-13 19:09:44.292906] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8859b0)
00:25:12.570  [2024-12-13 19:09:44.292913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.570  [2024-12-13 19:09:44.292931] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8cc200, cid 4, qid 0
00:25:12.570  [2024-12-13 19:09:44.292992] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:25:12.570  [2024-12-13 19:09:44.292999] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:25:12.570  [2024-12-13 19:09:44.293003] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:25:12.570  [2024-12-13 19:09:44.293006] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8859b0): datao=0, datal=4096, cccid=4
00:25:12.570  [2024-12-13 19:09:44.293011] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8cc200) on tqpair(0x8859b0): expected_datao=0, payload_size=4096
00:25:12.570  [2024-12-13 19:09:44.293015] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.570  [2024-12-13 19:09:44.293022] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:25:12.570  [2024-12-13 19:09:44.293026] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:25:12.570  [2024-12-13 19:09:44.293034] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.570  [2024-12-13 19:09:44.293040] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.570  [2024-12-13 19:09:44.293044] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.570  [2024-12-13 19:09:44.293048] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8cc200) on tqpair=0x8859b0
00:25:12.570  [2024-12-13 19:09:44.293061] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state
00:25:12.570  [2024-12-13 19:09:44.293088] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.570  [2024-12-13 19:09:44.293094] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8859b0)
00:25:12.570  [2024-12-13 19:09:44.293101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.570  [2024-12-13 19:09:44.293108] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.570  [2024-12-13 19:09:44.293112] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.570  [2024-12-13 19:09:44.293116] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8859b0)
00:25:12.570  [2024-12-13 19:09:44.293122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 
00:25:12.570  [2024-12-13 19:09:44.293146] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8cc200, cid 4, qid 0
00:25:12.570  [2024-12-13 19:09:44.293153] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8cc380, cid 5, qid 0
00:25:12.570  [2024-12-13 19:09:44.293271] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:25:12.570  [2024-12-13 19:09:44.293280] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:25:12.570  [2024-12-13 19:09:44.293283] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:25:12.570  [2024-12-13 19:09:44.293288] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8859b0): datao=0, datal=1024, cccid=4
00:25:12.570  [2024-12-13 19:09:44.293292] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8cc200) on tqpair(0x8859b0): expected_datao=0, payload_size=1024
00:25:12.570  [2024-12-13 19:09:44.293297] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.570  [2024-12-13 19:09:44.293303] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:25:12.570  [2024-12-13 19:09:44.293307] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:25:12.570  [2024-12-13 19:09:44.293313] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.570  [2024-12-13 19:09:44.293319] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.570  [2024-12-13 19:09:44.293322] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.570  [2024-12-13 19:09:44.293326] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8cc380) on tqpair=0x8859b0
00:25:12.570  [2024-12-13 19:09:44.338284] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.570  [2024-12-13 19:09:44.338305] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.570  [2024-12-13 19:09:44.338326] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.570  [2024-12-13 19:09:44.338330] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8cc200) on tqpair=0x8859b0
00:25:12.570  [2024-12-13 19:09:44.338344] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.571  [2024-12-13 19:09:44.338349] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8859b0)
00:25:12.571  [2024-12-13 19:09:44.338357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.571  [2024-12-13 19:09:44.338387] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8cc200, cid 4, qid 0
00:25:12.571  [2024-12-13 19:09:44.338458] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:25:12.571  [2024-12-13 19:09:44.338464] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:25:12.571  [2024-12-13 19:09:44.338468] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:25:12.571  [2024-12-13 19:09:44.338472] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8859b0): datao=0, datal=3072, cccid=4
00:25:12.571  [2024-12-13 19:09:44.338476] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8cc200) on tqpair(0x8859b0): expected_datao=0, payload_size=3072
00:25:12.571  [2024-12-13 19:09:44.338480] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.571  [2024-12-13 19:09:44.338487] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:25:12.571  [2024-12-13 19:09:44.338491] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:25:12.571  [2024-12-13 19:09:44.338499] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.571  [2024-12-13 19:09:44.338505] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.571  [2024-12-13 19:09:44.338509] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.571  [2024-12-13 19:09:44.338512] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8cc200) on tqpair=0x8859b0
00:25:12.571  [2024-12-13 19:09:44.338537] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.571  [2024-12-13 19:09:44.338542] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8859b0)
00:25:12.571  [2024-12-13 19:09:44.338565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.571  [2024-12-13 19:09:44.338597] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8cc200, cid 4, qid 0
00:25:12.571  [2024-12-13 19:09:44.338667] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:25:12.571  [2024-12-13 19:09:44.338674] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:25:12.571  [2024-12-13 19:09:44.338678] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:25:12.571  [2024-12-13 19:09:44.338682] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8859b0): datao=0, datal=8, cccid=4
00:25:12.571  [2024-12-13 19:09:44.338687] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8cc200) on tqpair(0x8859b0): expected_datao=0, payload_size=8
00:25:12.571  [2024-12-13 19:09:44.338691] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.571  [2024-12-13 19:09:44.338698] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:25:12.571  [2024-12-13 19:09:44.338702] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:25:12.571  [2024-12-13 19:09:44.380282] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.571  [2024-12-13 19:09:44.380306] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.571  [2024-12-13 19:09:44.380326] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.571  [2024-12-13 19:09:44.380331] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8cc200) on tqpair=0x8859b0
00:25:12.571  =====================================================
00:25:12.571  NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery
00:25:12.571  =====================================================
00:25:12.571  Controller Capabilities/Features
00:25:12.571  ================================
00:25:12.571  Vendor ID:                             0000
00:25:12.571  Subsystem Vendor ID:                   0000
00:25:12.571  Serial Number:                         ....................
00:25:12.571  Model Number:                          ........................................
00:25:12.571  Firmware Version:                      25.01
00:25:12.571  Recommended Arb Burst:                 0
00:25:12.571  IEEE OUI Identifier:                   00 00 00
00:25:12.571  Multi-path I/O
00:25:12.571    May have multiple subsystem ports:   No
00:25:12.571    May have multiple controllers:       No
00:25:12.571    Associated with SR-IOV VF:           No
00:25:12.571  Max Data Transfer Size:                131072
00:25:12.571  Max Number of Namespaces:              0
00:25:12.571  Max Number of I/O Queues:              1024
00:25:12.571  NVMe Specification Version (VS):       1.3
00:25:12.571  NVMe Specification Version (Identify): 1.3
00:25:12.571  Maximum Queue Entries:                 128
00:25:12.571  Contiguous Queues Required:            Yes
00:25:12.571  Arbitration Mechanisms Supported
00:25:12.571    Weighted Round Robin:                Not Supported
00:25:12.571    Vendor Specific:                     Not Supported
00:25:12.571  Reset Timeout:                         15000 ms
00:25:12.571  Doorbell Stride:                       4 bytes
00:25:12.571  NVM Subsystem Reset:                   Not Supported
00:25:12.571  Command Sets Supported
00:25:12.571    NVM Command Set:                     Supported
00:25:12.571  Boot Partition:                        Not Supported
00:25:12.571  Memory Page Size Minimum:              4096 bytes
00:25:12.571  Memory Page Size Maximum:              4096 bytes
00:25:12.571  Persistent Memory Region:              Not Supported
00:25:12.571  Optional Asynchronous Events Supported
00:25:12.571    Namespace Attribute Notices:         Not Supported
00:25:12.571    Firmware Activation Notices:         Not Supported
00:25:12.571    ANA Change Notices:                  Not Supported
00:25:12.571    PLE Aggregate Log Change Notices:    Not Supported
00:25:12.571    LBA Status Info Alert Notices:       Not Supported
00:25:12.571    EGE Aggregate Log Change Notices:    Not Supported
00:25:12.571    Normal NVM Subsystem Shutdown event: Not Supported
00:25:12.571    Zone Descriptor Change Notices:      Not Supported
00:25:12.571    Discovery Log Change Notices:        Supported
00:25:12.571  Controller Attributes
00:25:12.571    128-bit Host Identifier:             Not Supported
00:25:12.571    Non-Operational Permissive Mode:     Not Supported
00:25:12.571    NVM Sets:                            Not Supported
00:25:12.571    Read Recovery Levels:                Not Supported
00:25:12.571    Endurance Groups:                    Not Supported
00:25:12.571    Predictable Latency Mode:            Not Supported
00:25:12.571    Traffic Based Keep Alive:            Not Supported
00:25:12.571    Namespace Granularity:               Not Supported
00:25:12.571    SQ Associations:                     Not Supported
00:25:12.571    UUID List:                           Not Supported
00:25:12.571    Multi-Domain Subsystem:              Not Supported
00:25:12.571    Fixed Capacity Management:           Not Supported
00:25:12.571    Variable Capacity Management:        Not Supported
00:25:12.571    Delete Endurance Group:              Not Supported
00:25:12.571    Delete NVM Set:                      Not Supported
00:25:12.571    Extended LBA Formats Supported:      Not Supported
00:25:12.571    Flexible Data Placement Supported:   Not Supported
00:25:12.571  
00:25:12.571  Controller Memory Buffer Support
00:25:12.571  ================================
00:25:12.571  Supported:                             No
00:25:12.571  
00:25:12.571  Persistent Memory Region Support
00:25:12.571  ================================
00:25:12.571  Supported:                             No
00:25:12.571  
00:25:12.571  Admin Command Set Attributes
00:25:12.571  ============================
00:25:12.571  Security Send/Receive:                 Not Supported
00:25:12.571  Format NVM:                            Not Supported
00:25:12.571  Firmware Activate/Download:            Not Supported
00:25:12.571  Namespace Management:                  Not Supported
00:25:12.571  Device Self-Test:                      Not Supported
00:25:12.571  Directives:                            Not Supported
00:25:12.571  NVMe-MI:                               Not Supported
00:25:12.571  Virtualization Management:             Not Supported
00:25:12.571  Doorbell Buffer Config:                Not Supported
00:25:12.571  Get LBA Status Capability:             Not Supported
00:25:12.571  Command & Feature Lockdown Capability: Not Supported
00:25:12.571  Abort Command Limit:                   1
00:25:12.571  Async Event Request Limit:             4
00:25:12.571  Number of Firmware Slots:              N/A
00:25:12.571  Firmware Slot 1 Read-Only:             N/A
00:25:12.571  Firmware Activation Without Reset:     N/A
00:25:12.571  Multiple Update Detection Support:     N/A
00:25:12.571  Firmware Update Granularity:           No Information Provided
00:25:12.571  Per-Namespace SMART Log:               No
00:25:12.571  Asymmetric Namespace Access Log Page:  Not Supported
00:25:12.571  Subsystem NQN:                         nqn.2014-08.org.nvmexpress.discovery
00:25:12.571  Command Effects Log Page:              Not Supported
00:25:12.571  Get Log Page Extended Data:            Supported
00:25:12.571  Telemetry Log Pages:                   Not Supported
00:25:12.571  Persistent Event Log Pages:            Not Supported
00:25:12.571  Supported Log Pages Log Page:          May Support
00:25:12.571  Commands Supported & Effects Log Page: Not Supported
00:25:12.571  Feature Identifiers & Effects Log Page: May Support
00:25:12.571  NVMe-MI Commands & Effects Log Page:   May Support
00:25:12.571  Data Area 4 for Telemetry Log:         Not Supported
00:25:12.571  Error Log Page Entries Supported:      128
00:25:12.571  Keep Alive:                            Not Supported
00:25:12.571  
00:25:12.571  NVM Command Set Attributes
00:25:12.571  ==========================
00:25:12.571  Submission Queue Entry Size
00:25:12.571    Max:                       1
00:25:12.571    Min:                       1
00:25:12.571  Completion Queue Entry Size
00:25:12.571    Max:                       1
00:25:12.571    Min:                       1
00:25:12.571  Number of Namespaces:        0
00:25:12.571  Compare Command:             Not Supported
00:25:12.571  Write Uncorrectable Command: Not Supported
00:25:12.571  Dataset Management Command:  Not Supported
00:25:12.571  Write Zeroes Command:        Not Supported
00:25:12.571  Set Features Save Field:     Not Supported
00:25:12.571  Reservations:                Not Supported
00:25:12.571  Timestamp:                   Not Supported
00:25:12.571  Copy:                        Not Supported
00:25:12.571  Volatile Write Cache:        Not Present
00:25:12.571  Atomic Write Unit (Normal):  1
00:25:12.571  Atomic Write Unit (PFail):   1
00:25:12.571  Atomic Compare & Write Unit: 1
00:25:12.571  Fused Compare & Write:       Supported
00:25:12.571  Scatter-Gather List
00:25:12.571    SGL Command Set:           Supported
00:25:12.571    SGL Keyed:                 Supported
00:25:12.571    SGL Bit Bucket Descriptor: Not Supported
00:25:12.571    SGL Metadata Pointer:      Not Supported
00:25:12.571    Oversized SGL:             Not Supported
00:25:12.571    SGL Metadata Address:      Not Supported
00:25:12.571    SGL Offset:                Supported
00:25:12.571    Transport SGL Data Block:  Not Supported
00:25:12.571  Replay Protected Memory Block:  Not Supported
00:25:12.571  
00:25:12.571  Firmware Slot Information
00:25:12.571  =========================
00:25:12.571  Active slot:                 0
00:25:12.571  
00:25:12.572  
00:25:12.572  Error Log
00:25:12.572  =========
00:25:12.572  
00:25:12.572  Active Namespaces
00:25:12.572  =================
00:25:12.572  Discovery Log Page
00:25:12.572  ==================
00:25:12.572  Generation Counter:                    2
00:25:12.572  Number of Records:                     2
00:25:12.572  Record Format:                         0
00:25:12.572  
00:25:12.572  Discovery Log Entry 0
00:25:12.572  ----------------------
00:25:12.572  Transport Type:                        3 (TCP)
00:25:12.572  Address Family:                        1 (IPv4)
00:25:12.572  Subsystem Type:                        3 (Current Discovery Subsystem)
00:25:12.572  Entry Flags:
00:25:12.572    Duplicate Returned Information:			1
00:25:12.572    Explicit Persistent Connection Support for Discovery: 1
00:25:12.572  Transport Requirements:
00:25:12.572    Secure Channel:                      Not Required
00:25:12.572  Port ID:                               0 (0x0000)
00:25:12.572  Controller ID:                         65535 (0xffff)
00:25:12.572  Admin Max SQ Size:                     128
00:25:12.572  Transport Service Identifier:          4420
00:25:12.572  NVM Subsystem Qualified Name:          nqn.2014-08.org.nvmexpress.discovery
00:25:12.572  Transport Address:                     10.0.0.3
00:25:12.572  Discovery Log Entry 1
00:25:12.572  ----------------------
00:25:12.572  Transport Type:                        3 (TCP)
00:25:12.572  Address Family:                        1 (IPv4)
00:25:12.572  Subsystem Type:                        2 (NVM Subsystem)
00:25:12.572  Entry Flags:
00:25:12.572    Duplicate Returned Information:			0
00:25:12.572    Explicit Persistent Connection Support for Discovery: 0
00:25:12.572  Transport Requirements:
00:25:12.572    Secure Channel:                      Not Required
00:25:12.572  Port ID:                               0 (0x0000)
00:25:12.572  Controller ID:                         65535 (0xffff)
00:25:12.572  Admin Max SQ Size:                     128
00:25:12.572  Transport Service Identifier:          4420
00:25:12.572  NVM Subsystem Qualified Name:          nqn.2016-06.io.spdk:cnode1
00:25:12.572  Transport Address:                     10.0.0.3
00:25:12.572  [2024-12-13 19:09:44.380430] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD
00:25:12.572  [2024-12-13 19:09:44.380443] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8cbc00) on tqpair=0x8859b0
00:25:12.572  [2024-12-13 19:09:44.380450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:12.572  [2024-12-13 19:09:44.380456] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8cbd80) on tqpair=0x8859b0
00:25:12.572  [2024-12-13 19:09:44.380461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:12.572  [2024-12-13 19:09:44.380466] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8cbf00) on tqpair=0x8859b0
00:25:12.572  [2024-12-13 19:09:44.380470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:12.572  [2024-12-13 19:09:44.380475] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8cc080) on tqpair=0x8859b0
00:25:12.572  [2024-12-13 19:09:44.380479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:12.572  [2024-12-13 19:09:44.380489] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.572  [2024-12-13 19:09:44.380493] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.572  [2024-12-13 19:09:44.380497] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8859b0)
00:25:12.572  [2024-12-13 19:09:44.380505] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.572  [2024-12-13 19:09:44.380552] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8cc080, cid 3, qid 0
00:25:12.572  [2024-12-13 19:09:44.380617] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.572  [2024-12-13 19:09:44.380624] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.572  [2024-12-13 19:09:44.380627] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.572  [2024-12-13 19:09:44.380632] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8cc080) on tqpair=0x8859b0
00:25:12.572  [2024-12-13 19:09:44.380640] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.572  [2024-12-13 19:09:44.380644] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.572  [2024-12-13 19:09:44.380648] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8859b0)
00:25:12.572  [2024-12-13 19:09:44.380655] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.572  [2024-12-13 19:09:44.380678] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8cc080, cid 3, qid 0
00:25:12.572  [2024-12-13 19:09:44.380747] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.572  [2024-12-13 19:09:44.380754] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.572  [2024-12-13 19:09:44.380758] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.572  [2024-12-13 19:09:44.380762] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8cc080) on tqpair=0x8859b0
00:25:12.572  [2024-12-13 19:09:44.380767] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us
00:25:12.572  [2024-12-13 19:09:44.380772] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms
00:25:12.572  [2024-12-13 19:09:44.380782] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.572  [2024-12-13 19:09:44.380786] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.572  [2024-12-13 19:09:44.380790] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8859b0)
00:25:12.572  [2024-12-13 19:09:44.380797] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.572  [2024-12-13 19:09:44.380815] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8cc080, cid 3, qid 0
00:25:12.572  [2024-12-13 19:09:44.380867] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.572  [2024-12-13 19:09:44.380874] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.572  [2024-12-13 19:09:44.380878] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.572  [2024-12-13 19:09:44.380882] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8cc080) on tqpair=0x8859b0
00:25:12.572  [2024-12-13 19:09:44.380893] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.572  [2024-12-13 19:09:44.380897] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.572  [2024-12-13 19:09:44.380901] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8859b0)
00:25:12.572  [2024-12-13 19:09:44.380908] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.572  [2024-12-13 19:09:44.380925] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8cc080, cid 3, qid 0
00:25:12.572  [2024-12-13 19:09:44.380974] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.572  [2024-12-13 19:09:44.380981] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.572  [2024-12-13 19:09:44.380984] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.572  [2024-12-13 19:09:44.380988] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8cc080) on tqpair=0x8859b0
00:25:12.572  [2024-12-13 19:09:44.380998] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.572  [2024-12-13 19:09:44.381003] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.572  [2024-12-13 19:09:44.381007] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8859b0)
00:25:12.572  [2024-12-13 19:09:44.381014] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.572  [2024-12-13 19:09:44.381031] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8cc080, cid 3, qid 0
00:25:12.572  [2024-12-13 19:09:44.381088] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.572  [2024-12-13 19:09:44.381095] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.572  [2024-12-13 19:09:44.381099] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.572  [2024-12-13 19:09:44.381103] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8cc080) on tqpair=0x8859b0
00:25:12.572  [2024-12-13 19:09:44.381113] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.572  [2024-12-13 19:09:44.381118] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.572  [2024-12-13 19:09:44.381122] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8859b0)
00:25:12.572  [2024-12-13 19:09:44.381129] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.572  [2024-12-13 19:09:44.381146] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8cc080, cid 3, qid 0
00:25:12.572  [2024-12-13 19:09:44.381194] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.572  [2024-12-13 19:09:44.381201] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.572  [2024-12-13 19:09:44.381204] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.572  [2024-12-13 19:09:44.381209] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8cc080) on tqpair=0x8859b0
00:25:12.572  [2024-12-13 19:09:44.381219] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.572  [2024-12-13 19:09:44.381223] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.572  [2024-12-13 19:09:44.381227] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8859b0)
00:25:12.572  [2024-12-13 19:09:44.381234] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.572  [2024-12-13 19:09:44.381251] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8cc080, cid 3, qid 0
00:25:12.572  [2024-12-13 19:09:44.385238] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.572  [2024-12-13 19:09:44.385259] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.572  [2024-12-13 19:09:44.385280] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.572  [2024-12-13 19:09:44.385284] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8cc080) on tqpair=0x8859b0
00:25:12.572  [2024-12-13 19:09:44.385297] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.572  [2024-12-13 19:09:44.385302] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.572  [2024-12-13 19:09:44.385305] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8859b0)
00:25:12.572  [2024-12-13 19:09:44.385313] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.572  [2024-12-13 19:09:44.385337] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8cc080, cid 3, qid 0
00:25:12.572  [2024-12-13 19:09:44.385400] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.572  [2024-12-13 19:09:44.385406] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.572  [2024-12-13 19:09:44.385410] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.573  [2024-12-13 19:09:44.385414] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8cc080) on tqpair=0x8859b0
00:25:12.573  [2024-12-13 19:09:44.385422] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 4 milliseconds
00:25:12.836  
00:25:12.836   19:09:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r '        trtype:tcp         adrfam:IPv4         traddr:10.0.0.3         trsvcid:4420         subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:25:12.836  [2024-12-13 19:09:44.423254] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:25:12.836  [2024-12-13 19:09:44.423321] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108377 ]
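The spdk_nvme_identify invocation above connects to nqn.2016-06.io.spdk:cnode1 over TCP at 10.0.0.3:4420 and prints the Identify Controller data that appears further below; the *DEBUG* lines that follow are the transport-level trace of that work. A minimal sketch of the same flow through the SPDK public API (an illustration only, not part of this test; error handling, namespace listing, and log-page queries are omitted):

/* identify_sketch.c - parse the transport ID, connect, read Identify Controller data, detach. */
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* Same transport ID string as the -r argument in the log above. */
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* Drives the admin-queue bring-up traced below (icreq, FABRIC CONNECT, property get/set, identify). */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	/* Cached Identify Controller data, the source of the dump printed later in this log. */
	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("Serial Number: %.20s\n", (const char *)cdata->sn);
	printf("Model Number:  %.40s\n", (const char *)cdata->mn);

	spdk_nvme_detach(ctrlr);
	return 0;
}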
00:25:12.836  [2024-12-13 19:09:44.579039] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout)
00:25:12.836  [2024-12-13 19:09:44.579108] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2
00:25:12.836  [2024-12-13 19:09:44.579115] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420
00:25:12.836  [2024-12-13 19:09:44.579125] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null)
00:25:12.836  [2024-12-13 19:09:44.579132] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix
00:25:12.836  [2024-12-13 19:09:44.579424] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout)
00:25:12.836  [2024-12-13 19:09:44.579483] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xdb29b0 0
00:25:12.836  [2024-12-13 19:09:44.591337] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1
00:25:12.836  [2024-12-13 19:09:44.591364] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1
00:25:12.836  [2024-12-13 19:09:44.591371] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0
00:25:12.837  [2024-12-13 19:09:44.591375] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0
00:25:12.837  [2024-12-13 19:09:44.591400] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.837  [2024-12-13 19:09:44.591407] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.837  [2024-12-13 19:09:44.591411] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdb29b0)
00:25:12.837  [2024-12-13 19:09:44.591422] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:25:12.837  [2024-12-13 19:09:44.591453] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf8c00, cid 0, qid 0
00:25:12.837  [2024-12-13 19:09:44.599271] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.837  [2024-12-13 19:09:44.599289] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.837  [2024-12-13 19:09:44.599294] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.837  [2024-12-13 19:09:44.599298] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf8c00) on tqpair=0xdb29b0
00:25:12.837  [2024-12-13 19:09:44.599311] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001
00:25:12.837  [2024-12-13 19:09:44.599319] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout)
00:25:12.837  [2024-12-13 19:09:44.599325] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout)
00:25:12.837  [2024-12-13 19:09:44.599339] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.837  [2024-12-13 19:09:44.599344] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.837  [2024-12-13 19:09:44.599348] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdb29b0)
00:25:12.837  [2024-12-13 19:09:44.599357] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.837  [2024-12-13 19:09:44.599384] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf8c00, cid 0, qid 0
00:25:12.837  [2024-12-13 19:09:44.599453] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.837  [2024-12-13 19:09:44.599460] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.837  [2024-12-13 19:09:44.599463] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.837  [2024-12-13 19:09:44.599468] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf8c00) on tqpair=0xdb29b0
00:25:12.837  [2024-12-13 19:09:44.599474] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout)
00:25:12.837  [2024-12-13 19:09:44.599482] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout)
00:25:12.837  [2024-12-13 19:09:44.599490] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.837  [2024-12-13 19:09:44.599494] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.837  [2024-12-13 19:09:44.599498] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdb29b0)
00:25:12.837  [2024-12-13 19:09:44.599506] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.837  [2024-12-13 19:09:44.599524] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf8c00, cid 0, qid 0
00:25:12.837  [2024-12-13 19:09:44.599577] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.837  [2024-12-13 19:09:44.599598] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.837  [2024-12-13 19:09:44.599602] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.837  [2024-12-13 19:09:44.599606] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf8c00) on tqpair=0xdb29b0
00:25:12.837  [2024-12-13 19:09:44.599612] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout)
00:25:12.837  [2024-12-13 19:09:44.599620] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms)
00:25:12.837  [2024-12-13 19:09:44.599627] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.837  [2024-12-13 19:09:44.599631] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.837  [2024-12-13 19:09:44.599635] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdb29b0)
00:25:12.837  [2024-12-13 19:09:44.599642] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.837  [2024-12-13 19:09:44.599660] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf8c00, cid 0, qid 0
00:25:12.837  [2024-12-13 19:09:44.599717] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.837  [2024-12-13 19:09:44.599725] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.837  [2024-12-13 19:09:44.599728] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.837  [2024-12-13 19:09:44.599733] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf8c00) on tqpair=0xdb29b0
00:25:12.837  [2024-12-13 19:09:44.599738] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:25:12.837  [2024-12-13 19:09:44.599749] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.837  [2024-12-13 19:09:44.599753] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.837  [2024-12-13 19:09:44.599757] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdb29b0)
00:25:12.837  [2024-12-13 19:09:44.599764] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.837  [2024-12-13 19:09:44.599782] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf8c00, cid 0, qid 0
00:25:12.837  [2024-12-13 19:09:44.599831] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.837  [2024-12-13 19:09:44.599837] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.837  [2024-12-13 19:09:44.599841] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.837  [2024-12-13 19:09:44.599845] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf8c00) on tqpair=0xdb29b0
00:25:12.837  [2024-12-13 19:09:44.599850] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0
00:25:12.837  [2024-12-13 19:09:44.599855] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms)
00:25:12.837  [2024-12-13 19:09:44.599863] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:25:12.837  [2024-12-13 19:09:44.599973] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1
00:25:12.837  [2024-12-13 19:09:44.599979] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
00:25:12.837  [2024-12-13 19:09:44.599988] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.837  [2024-12-13 19:09:44.599992] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.837  [2024-12-13 19:09:44.599996] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdb29b0)
00:25:12.837  [2024-12-13 19:09:44.600003] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.837  [2024-12-13 19:09:44.600023] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf8c00, cid 0, qid 0
00:25:12.837  [2024-12-13 19:09:44.600072] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.837  [2024-12-13 19:09:44.600078] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.837  [2024-12-13 19:09:44.600082] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.837  [2024-12-13 19:09:44.600086] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf8c00) on tqpair=0xdb29b0
00:25:12.837  [2024-12-13 19:09:44.600091] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:25:12.837  [2024-12-13 19:09:44.600101] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.837  [2024-12-13 19:09:44.600106] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.837  [2024-12-13 19:09:44.600110] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdb29b0)
00:25:12.837  [2024-12-13 19:09:44.600117] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.837  [2024-12-13 19:09:44.600134] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf8c00, cid 0, qid 0
00:25:12.837  [2024-12-13 19:09:44.600183] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.837  [2024-12-13 19:09:44.600190] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.837  [2024-12-13 19:09:44.600194] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.837  [2024-12-13 19:09:44.600198] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf8c00) on tqpair=0xdb29b0
00:25:12.837  [2024-12-13 19:09:44.600203] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:25:12.837  [2024-12-13 19:09:44.600208] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms)
00:25:12.837  [2024-12-13 19:09:44.600216] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout)
00:25:12.837  [2024-12-13 19:09:44.600242] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms)
00:25:12.837  [2024-12-13 19:09:44.600271] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.837  [2024-12-13 19:09:44.600295] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdb29b0)
00:25:12.837  [2024-12-13 19:09:44.600302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.837  [2024-12-13 19:09:44.600326] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf8c00, cid 0, qid 0
00:25:12.837  [2024-12-13 19:09:44.600424] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:25:12.837  [2024-12-13 19:09:44.600431] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:25:12.837  [2024-12-13 19:09:44.600435] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:25:12.837  [2024-12-13 19:09:44.600439] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdb29b0): datao=0, datal=4096, cccid=0
00:25:12.837  [2024-12-13 19:09:44.600444] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdf8c00) on tqpair(0xdb29b0): expected_datao=0, payload_size=4096
00:25:12.837  [2024-12-13 19:09:44.600449] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.837  [2024-12-13 19:09:44.600457] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:25:12.837  [2024-12-13 19:09:44.600461] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:25:12.837  [2024-12-13 19:09:44.600469] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.837  [2024-12-13 19:09:44.600476] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.837  [2024-12-13 19:09:44.600479] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.837  [2024-12-13 19:09:44.600484] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf8c00) on tqpair=0xdb29b0
00:25:12.837  [2024-12-13 19:09:44.600492] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295
00:25:12.837  [2024-12-13 19:09:44.600498] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072
00:25:12.837  [2024-12-13 19:09:44.600502] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001
00:25:12.837  [2024-12-13 19:09:44.600507] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16
00:25:12.838  [2024-12-13 19:09:44.600512] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1
00:25:12.838  [2024-12-13 19:09:44.600517] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms)
00:25:12.838  [2024-12-13 19:09:44.600531] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms)
00:25:12.838  [2024-12-13 19:09:44.600542] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.838  [2024-12-13 19:09:44.600547] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.838  [2024-12-13 19:09:44.600551] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdb29b0)
00:25:12.838  [2024-12-13 19:09:44.600558] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0
00:25:12.838  [2024-12-13 19:09:44.600580] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf8c00, cid 0, qid 0
00:25:12.838  [2024-12-13 19:09:44.600635] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.838  [2024-12-13 19:09:44.600642] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.838  [2024-12-13 19:09:44.600645] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.838  [2024-12-13 19:09:44.600650] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf8c00) on tqpair=0xdb29b0
00:25:12.838  [2024-12-13 19:09:44.600657] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.838  [2024-12-13 19:09:44.600662] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.838  [2024-12-13 19:09:44.600665] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdb29b0)
00:25:12.838  [2024-12-13 19:09:44.600672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:25:12.838  [2024-12-13 19:09:44.600679] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.838  [2024-12-13 19:09:44.600683] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.838  [2024-12-13 19:09:44.600687] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xdb29b0)
00:25:12.838  [2024-12-13 19:09:44.600693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:25:12.838  [2024-12-13 19:09:44.600700] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.838  [2024-12-13 19:09:44.600704] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.838  [2024-12-13 19:09:44.600707] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xdb29b0)
00:25:12.838  [2024-12-13 19:09:44.600713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:25:12.838  [2024-12-13 19:09:44.600720] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.838  [2024-12-13 19:09:44.600724] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.838  [2024-12-13 19:09:44.600728] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdb29b0)
00:25:12.838  [2024-12-13 19:09:44.600734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:25:12.838  [2024-12-13 19:09:44.600739] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms)
00:25:12.838  [2024-12-13 19:09:44.600757] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:25:12.838  [2024-12-13 19:09:44.600765] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.838  [2024-12-13 19:09:44.600769] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xdb29b0)
00:25:12.838  [2024-12-13 19:09:44.600776] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.838  [2024-12-13 19:09:44.600798] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf8c00, cid 0, qid 0
00:25:12.838  [2024-12-13 19:09:44.600805] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf8d80, cid 1, qid 0
00:25:12.838  [2024-12-13 19:09:44.600810] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf8f00, cid 2, qid 0
00:25:12.838  [2024-12-13 19:09:44.600815] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf9080, cid 3, qid 0
00:25:12.838  [2024-12-13 19:09:44.600820] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf9200, cid 4, qid 0
00:25:12.838  [2024-12-13 19:09:44.600908] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.838  [2024-12-13 19:09:44.600915] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.838  [2024-12-13 19:09:44.600919] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.838  [2024-12-13 19:09:44.600923] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf9200) on tqpair=0xdb29b0
00:25:12.838  [2024-12-13 19:09:44.600929] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us
00:25:12.838  [2024-12-13 19:09:44.600934] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms)
00:25:12.838  [2024-12-13 19:09:44.600947] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms)
00:25:12.838  [2024-12-13 19:09:44.600957] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms)
00:25:12.838  [2024-12-13 19:09:44.600965] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.838  [2024-12-13 19:09:44.600969] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.838  [2024-12-13 19:09:44.600973] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xdb29b0)
00:25:12.838  [2024-12-13 19:09:44.600981] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:25:12.838  [2024-12-13 19:09:44.601000] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf9200, cid 4, qid 0
00:25:12.838  [2024-12-13 19:09:44.601055] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.838  [2024-12-13 19:09:44.601063] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.838  [2024-12-13 19:09:44.601067] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.838  [2024-12-13 19:09:44.601071] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf9200) on tqpair=0xdb29b0
00:25:12.838  [2024-12-13 19:09:44.601133] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms)
00:25:12.838  [2024-12-13 19:09:44.601144] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms)
00:25:12.838  [2024-12-13 19:09:44.601153] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.838  [2024-12-13 19:09:44.601157] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xdb29b0)
00:25:12.838  [2024-12-13 19:09:44.601165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.838  [2024-12-13 19:09:44.601183] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf9200, cid 4, qid 0
00:25:12.838  [2024-12-13 19:09:44.601260] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:25:12.838  [2024-12-13 19:09:44.601269] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:25:12.838  [2024-12-13 19:09:44.601273] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:25:12.838  [2024-12-13 19:09:44.601277] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdb29b0): datao=0, datal=4096, cccid=4
00:25:12.838  [2024-12-13 19:09:44.601282] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdf9200) on tqpair(0xdb29b0): expected_datao=0, payload_size=4096
00:25:12.838  [2024-12-13 19:09:44.601287] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.838  [2024-12-13 19:09:44.601295] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:25:12.838  [2024-12-13 19:09:44.601299] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:25:12.838  [2024-12-13 19:09:44.601307] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.838  [2024-12-13 19:09:44.601314] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.838  [2024-12-13 19:09:44.601317] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.838  [2024-12-13 19:09:44.601321] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf9200) on tqpair=0xdb29b0
00:25:12.838  [2024-12-13 19:09:44.601336] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added
00:25:12.838  [2024-12-13 19:09:44.601351] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms)
00:25:12.838  [2024-12-13 19:09:44.601363] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms)
00:25:12.838  [2024-12-13 19:09:44.601371] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.838  [2024-12-13 19:09:44.601376] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xdb29b0)
00:25:12.838  [2024-12-13 19:09:44.601383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.838  [2024-12-13 19:09:44.601406] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf9200, cid 4, qid 0
00:25:12.838  [2024-12-13 19:09:44.601495] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:25:12.838  [2024-12-13 19:09:44.601502] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:25:12.838  [2024-12-13 19:09:44.601505] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:25:12.838  [2024-12-13 19:09:44.601509] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdb29b0): datao=0, datal=4096, cccid=4
00:25:12.838  [2024-12-13 19:09:44.601514] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdf9200) on tqpair(0xdb29b0): expected_datao=0, payload_size=4096
00:25:12.838  [2024-12-13 19:09:44.601519] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.838  [2024-12-13 19:09:44.601526] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:25:12.838  [2024-12-13 19:09:44.601530] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:25:12.838  [2024-12-13 19:09:44.601538] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.838  [2024-12-13 19:09:44.601544] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.838  [2024-12-13 19:09:44.601548] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.838  [2024-12-13 19:09:44.601552] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf9200) on tqpair=0xdb29b0
00:25:12.838  [2024-12-13 19:09:44.601567] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms)
00:25:12.838  [2024-12-13 19:09:44.601590] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms)
00:25:12.838  [2024-12-13 19:09:44.601599] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.838  [2024-12-13 19:09:44.601603] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xdb29b0)
00:25:12.838  [2024-12-13 19:09:44.601610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.838  [2024-12-13 19:09:44.601630] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf9200, cid 4, qid 0
00:25:12.838  [2024-12-13 19:09:44.601719] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:25:12.838  [2024-12-13 19:09:44.601727] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:25:12.838  [2024-12-13 19:09:44.601731] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:25:12.839  [2024-12-13 19:09:44.601735] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdb29b0): datao=0, datal=4096, cccid=4
00:25:12.839  [2024-12-13 19:09:44.601740] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdf9200) on tqpair(0xdb29b0): expected_datao=0, payload_size=4096
00:25:12.839  [2024-12-13 19:09:44.601744] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.839  [2024-12-13 19:09:44.601752] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:25:12.839  [2024-12-13 19:09:44.601756] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:25:12.839  [2024-12-13 19:09:44.601764] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.839  [2024-12-13 19:09:44.601770] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.839  [2024-12-13 19:09:44.601774] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.839  [2024-12-13 19:09:44.601778] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf9200) on tqpair=0xdb29b0
00:25:12.839  [2024-12-13 19:09:44.601787] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms)
00:25:12.839  [2024-12-13 19:09:44.601797] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms)
00:25:12.839  [2024-12-13 19:09:44.601811] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms)
00:25:12.839  [2024-12-13 19:09:44.601823] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms)
00:25:12.839  [2024-12-13 19:09:44.601828] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms)
00:25:12.839  [2024-12-13 19:09:44.601834] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms)
00:25:12.839  [2024-12-13 19:09:44.601840] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID
00:25:12.839  [2024-12-13 19:09:44.601845] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms)
00:25:12.839  [2024-12-13 19:09:44.601851] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout)
00:25:12.839  [2024-12-13 19:09:44.601866] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.839  [2024-12-13 19:09:44.601871] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xdb29b0)
00:25:12.839  [2024-12-13 19:09:44.601878] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.839  [2024-12-13 19:09:44.601885] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.839  [2024-12-13 19:09:44.601890] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.839  [2024-12-13 19:09:44.601893] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xdb29b0)
00:25:12.839  [2024-12-13 19:09:44.601900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 
00:25:12.839  [2024-12-13 19:09:44.601926] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf9200, cid 4, qid 0
00:25:12.839  [2024-12-13 19:09:44.601934] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf9380, cid 5, qid 0
00:25:12.839  [2024-12-13 19:09:44.602004] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.839  [2024-12-13 19:09:44.602011] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.839  [2024-12-13 19:09:44.602015] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.839  [2024-12-13 19:09:44.602019] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf9200) on tqpair=0xdb29b0
00:25:12.839  [2024-12-13 19:09:44.602026] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.839  [2024-12-13 19:09:44.602032] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.839  [2024-12-13 19:09:44.602036] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.839  [2024-12-13 19:09:44.602040] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf9380) on tqpair=0xdb29b0
00:25:12.839  [2024-12-13 19:09:44.602066] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.839  [2024-12-13 19:09:44.602070] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xdb29b0)
00:25:12.839  [2024-12-13 19:09:44.602077] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.839  [2024-12-13 19:09:44.602095] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf9380, cid 5, qid 0
00:25:12.839  =====================================================
00:25:12.839  NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:25:12.839  =====================================================
00:25:12.839  Controller Capabilities/Features
00:25:12.839  ================================
00:25:12.839  Vendor ID:                             8086
00:25:12.839  Subsystem Vendor ID:                   8086
00:25:12.839  Serial Number:                         SPDK00000000000001
00:25:12.839  Model Number:                          SPDK bdev Controller
00:25:12.839  Firmware Version:                      25.01
00:25:12.839  Recommended Arb Burst:                 6
00:25:12.839  IEEE OUI Identifier:                   e4 d2 5c
00:25:12.839  Multi-path I/O
00:25:12.839    May have multiple subsystem ports:   Yes
00:25:12.839    May have multiple controllers:       Yes
00:25:12.839    Associated with SR-IOV VF:           No
00:25:12.839  Max Data Transfer Size:                131072
00:25:12.839  Max Number of Namespaces:              32
00:25:12.839  Max Number of I/O Queues:              127
00:25:12.839  NVMe Specification Version (VS):       1.3
00:25:12.839  NVMe Specification Version (Identify): 1.3
00:25:12.839  Maximum Queue Entries:                 128
00:25:12.839  Contiguous Queues Required:            Yes
00:25:12.839  Arbitration Mechanisms Supported
00:25:12.839    Weighted Round Robin:                Not Supported
00:25:12.839    Vendor Specific:                     Not Supported
00:25:12.839  Reset Timeout:                         15000 ms
00:25:12.839  Doorbell Stride:                       4 bytes
00:25:12.839  NVM Subsystem Reset:                   Not Supported
00:25:12.839  Command Sets Supported
00:25:12.839    NVM Command Set:                     Supported
00:25:12.839  Boot Partition:                        Not Supported
00:25:12.839  Memory Page Size Minimum:              4096 bytes
00:25:12.839  Memory Page Size Maximum:              4096 bytes
00:25:12.839  Persistent Memory Region:              Not Supported
00:25:12.839  Optional Asynchronous Events Supported
00:25:12.839    Namespace Attribute Notices:         Supported
00:25:12.839    Firmware Activation Notices:         Not Supported
00:25:12.839    ANA Change Notices:                  Not Supported
00:25:12.839    PLE Aggregate Log Change Notices:    Not Supported
00:25:12.839    LBA Status Info Alert Notices:       Not Supported
00:25:12.839    EGE Aggregate Log Change Notices:    Not Supported
00:25:12.839    Normal NVM Subsystem Shutdown event: Not Supported
00:25:12.839    Zone Descriptor Change Notices:      Not Supported
00:25:12.839    Discovery Log Change Notices:        Not Supported
00:25:12.839  Controller Attributes
00:25:12.839    128-bit Host Identifier:             Supported
00:25:12.839    Non-Operational Permissive Mode:     Not Supported
00:25:12.839    NVM Sets:                            Not Supported
00:25:12.839    Read Recovery Levels:                Not Supported
00:25:12.839    Endurance Groups:                    Not Supported
00:25:12.839    Predictable Latency Mode:            Not Supported
00:25:12.839    Traffic Based Keep Alive:            Not Supported
00:25:12.839    Namespace Granularity:               Not Supported
00:25:12.839    SQ Associations:                     Not Supported
00:25:12.839    UUID List:                           Not Supported
00:25:12.839    Multi-Domain Subsystem:              Not Supported
00:25:12.839    Fixed Capacity Management:           Not Supported
00:25:12.839    Variable Capacity Management:        Not Supported
00:25:12.839    Delete Endurance Group:              Not Supported
00:25:12.839    Delete NVM Set:                      Not Supported
00:25:12.839    Extended LBA Formats Supported:      Not Supported
00:25:12.839    Flexible Data Placement Supported:   Not Supported
00:25:12.839  
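In the capabilities dump above, "Max Data Transfer Size: 131072" is the byte value derived from the Identify Controller MDTS field and the controller's minimum memory page size, and it matches the earlier "MDTS max_xfer_size 131072" debug line. A small sketch of that derivation per the NVMe base specification (the helper name is hypothetical):

#include <stdint.h>
#include <stdio.h>

/* Hypothetical helper: MDTS is a power-of-two multiple of the minimum memory
 * page size (2^(12 + CAP.MPSMIN)); MDTS == 0 means no limit is reported. */
static uint64_t mdts_to_bytes(uint8_t mdts, uint8_t mpsmin)
{
	uint64_t min_page = 1ull << (12 + mpsmin);

	return mdts == 0 ? UINT64_MAX : min_page << mdts;
}

int main(void)
{
	/* 4096-byte minimum pages with MDTS = 5 -> 131072 bytes, matching the log. */
	printf("%llu\n", (unsigned long long)mdts_to_bytes(5, 0));
	return 0;
}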
00:25:12.839  Controller Memory Buffer Support
00:25:12.839  ================================
00:25:12.839  Supported:                             No
00:25:12.839  
00:25:12.839  Persistent Memory Region Support
00:25:12.839  ================================
00:25:12.839  Supported:                             No
00:25:12.839  
00:25:12.839  Admin Command Set Attributes
00:25:12.839  ============================
00:25:12.839  Security Send/Receive:                 Not Supported
00:25:12.839  Format NVM:                            Not Supported
00:25:12.839  Firmware Activate/Download:            Not Supported
00:25:12.839  Namespace Management:                  Not Supported
00:25:12.839  Device Self-Test:                      Not Supported
00:25:12.839  Directives:                            Not Supported
00:25:12.839  NVMe-MI:                               Not Supported
00:25:12.839  Virtualization Management:             Not Supported
00:25:12.839  Doorbell Buffer Config:                Not Supported
00:25:12.839  Get LBA Status Capability:             Not Supported
00:25:12.839  Command & Feature Lockdown Capability: Not Supported
00:25:12.839  Abort Command Limit:                   4
00:25:12.839  Async Event Request Limit:             4
00:25:12.839  Number of Firmware Slots:              N/A
00:25:12.839  Firmware Slot 1 Read-Only:             N/A
00:25:12.839  Firmware Activation Without Reset:     N/A
00:25:12.839  Multiple Update Detection Support:     N/A
00:25:12.839  Firmware Update Granularity:           No Information Provided
00:25:12.839  Per-Namespace SMART Log:               No
00:25:12.839  Asymmetric Namespace Access Log Page:  Not Supported
00:25:12.839  Subsystem NQN:                         nqn.2016-06.io.spdk:cnode1
00:25:12.839  Command Effects Log Page:              Supported
00:25:12.839  Get Log Page Extended Data:            Supported
00:25:12.839  Telemetry Log Pages:                   Not Supported
00:25:12.839  Persistent Event Log Pages:            Not Supported
00:25:12.839  Supported Log Pages Log Page:          May Support
00:25:12.839  Commands Supported & Effects Log Page: Not Supported
00:25:12.839  Feature Identifiers & Effects Log Page: May Support
00:25:12.839  NVMe-MI Commands & Effects Log Page:   May Support
00:25:12.839  Data Area 4 for Telemetry Log:         Not Supported
00:25:12.839  Error Log Page Entries Supported:      128
00:25:12.839  Keep Alive:                            Supported
00:25:12.839  Keep Alive Granularity:                10000 ms
00:25:12.839  
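The "Sending keep alive every 5000000 us" debug message earlier in this run works out to half of a 10000 ms keep-alive timeout, which lines up with the keep-alive figures reported above. A tiny illustration of that arithmetic (the halving policy is an assumption for illustration, not taken from the SPDK source):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t kato_ms = 10000;                                 /* keep-alive timeout, as reported above */
	uint64_t send_interval_us = (uint64_t)kato_ms * 1000 / 2; /* send at half the timeout */

	printf("send keep alive every %llu us\n", (unsigned long long)send_interval_us);
	return 0;
}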
00:25:12.839  NVM Command Set Attributes
00:25:12.839  ==========================
00:25:12.839  Submission Queue Entry Size
00:25:12.839    Max:                       64
00:25:12.839    Min:                       64
00:25:12.839  Completion Queue Entry Size
00:25:12.839    Max:                       16
00:25:12.839    Min:                       16
00:25:12.839  Number of Namespaces:        32
00:25:12.839  Compare Command:             Supported
00:25:12.839  Write Uncorrectable Command: Not Supported
00:25:12.839  Dataset Management Command:  Supported
00:25:12.839  Write Zeroes Command:        Supported
00:25:12.840  Set Features Save Field:     Not Supported
00:25:12.840  Reservations:                Supported
00:25:12.840  Timestamp:                   Not Supported
00:25:12.840  Copy:                        Supported
00:25:12.840  Volatile Write Cache:        Present
00:25:12.840  Atomic Write Unit (Normal):  1
00:25:12.840  Atomic Write Unit (PFail):   1
00:25:12.840  Atomic Compare & Write Unit: 1
00:25:12.840  Fused Compare & Write:       Supported
00:25:12.840  Scatter-Gather List
00:25:12.840    SGL Command Set:           Supported
00:25:12.840    SGL Keyed:                 Supported
00:25:12.840    SGL Bit Bucket Descriptor: Not Supported
00:25:12.840    SGL Metadata Pointer:      Not Supported
00:25:12.840    Oversized SGL:             Not Supported
00:25:12.840    SGL Metadata Address:      Not Supported
00:25:12.840    SGL Offset:                Supported
00:25:12.840    Transport SGL Data Block:  Not Supported
00:25:12.840  Replay Protected Memory Block:  Not Supported
00:25:12.840  
00:25:12.840  Firmware Slot Information
00:25:12.840  =========================
00:25:12.840  Active slot:                 1
00:25:12.840  Slot 1 Firmware Revision:    25.01
00:25:12.840  
00:25:12.840  
00:25:12.840  Commands Supported and Effects
00:25:12.840  ==============================
00:25:12.840  Admin Commands
00:25:12.840  --------------
00:25:12.840                    Get Log Page (02h): Supported 
00:25:12.840                        Identify (06h): Supported 
00:25:12.840                           Abort (08h): Supported 
00:25:12.840                    Set Features (09h): Supported 
00:25:12.840                    Get Features (0Ah): Supported 
00:25:12.840      Asynchronous Event Request (0Ch): Supported 
00:25:12.840                      Keep Alive (18h): Supported 
00:25:12.840  I/O Commands
00:25:12.840  ------------
00:25:12.840                           Flush (00h): Supported LBA-Change 
00:25:12.840                           Write (01h): Supported LBA-Change 
00:25:12.840                            Read (02h): Supported 
00:25:12.840                         Compare (05h): Supported 
00:25:12.840                    Write Zeroes (08h): Supported LBA-Change 
00:25:12.840              Dataset Management (09h): Supported LBA-Change 
00:25:12.840                            Copy (19h): Supported LBA-Change 
00:25:12.840  
00:25:12.840  Error Log
00:25:12.840  =========
00:25:12.840  
00:25:12.840  Arbitration
00:25:12.840  ===========
00:25:12.840  Arbitration Burst:           1
00:25:12.840  
00:25:12.840  Power Management
00:25:12.840  ================
00:25:12.840  Number of Power States:          1
00:25:12.840  Current Power State:             Power State #0
00:25:12.840  Power State #0:
00:25:12.840    Max Power:                      0.00 W
00:25:12.840    Non-Operational State:         Operational
00:25:12.840    Entry Latency:                 Not Reported
00:25:12.840    Exit Latency:                  Not Reported
00:25:12.840    Relative Read Throughput:      0
00:25:12.840    Relative Read Latency:         0
00:25:12.840    Relative Write Throughput:     0
00:25:12.840    Relative Write Latency:        0
00:25:12.840    Idle Power:                     Not Reported
00:25:12.840    Active Power:                   Not Reported
00:25:12.840  Non-Operational Permissive Mode: Not Supported
00:25:12.840  
00:25:12.840  Health Information
00:25:12.840  ==================
00:25:12.840  Critical Warnings:
00:25:12.840    Available Spare Space:     OK
00:25:12.840    Temperature:               OK
00:25:12.840    Device Reliability:        OK
00:25:12.840    Read Only:                 No
00:25:12.840    Volatile Memory Backup:    OK
00:25:12.840  Current Temperature:         0 Kelvin (-273 Celsius)
00:25:12.840  Temperature Threshold:       0 Kelvin (-273 Celsius)
00:25:12.840  Available Spare:             0%
00:25:12.840  Available Spare Threshold:   0%
00:25:12.840  Life Percentage Used:
00:25:12.840  [2024-12-13 19:09:44.602149] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.840  [2024-12-13 19:09:44.602161] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.840  [2024-12-13 19:09:44.602166] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.840  [2024-12-13 19:09:44.602170] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf9380) on tqpair=0xdb29b0
00:25:12.840  [2024-12-13 19:09:44.602181] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.840  [2024-12-13 19:09:44.602186] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xdb29b0)
00:25:12.840  [2024-12-13 19:09:44.602193] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.840  [2024-12-13 19:09:44.602211] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf9380, cid 5, qid 0
00:25:12.840  [2024-12-13 19:09:44.602291] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.840  [2024-12-13 19:09:44.602300] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.840  [2024-12-13 19:09:44.602304] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.840  [2024-12-13 19:09:44.602309] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf9380) on tqpair=0xdb29b0
00:25:12.840  [2024-12-13 19:09:44.602320] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.840  [2024-12-13 19:09:44.602325] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xdb29b0)
00:25:12.840  [2024-12-13 19:09:44.602332] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.840  [2024-12-13 19:09:44.602352] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf9380, cid 5, qid 0
00:25:12.840  [2024-12-13 19:09:44.602406] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.840  [2024-12-13 19:09:44.602413] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.840  [2024-12-13 19:09:44.602417] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.840  [2024-12-13 19:09:44.602421] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf9380) on tqpair=0xdb29b0
00:25:12.840  [2024-12-13 19:09:44.602440] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.840  [2024-12-13 19:09:44.602445] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xdb29b0)
00:25:12.840  [2024-12-13 19:09:44.602453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.840  [2024-12-13 19:09:44.602461] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.840  [2024-12-13 19:09:44.602465] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xdb29b0)
00:25:12.840  [2024-12-13 19:09:44.602471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.840  [2024-12-13 19:09:44.602479] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.840  [2024-12-13 19:09:44.602484] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xdb29b0)
00:25:12.840  [2024-12-13 19:09:44.602490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.840  [2024-12-13 19:09:44.602498] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.840  [2024-12-13 19:09:44.602502] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xdb29b0)
00:25:12.840  [2024-12-13 19:09:44.602509] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.840  [2024-12-13 19:09:44.602529] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf9380, cid 5, qid 0
00:25:12.840  [2024-12-13 19:09:44.602537] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf9200, cid 4, qid 0
00:25:12.840  [2024-12-13 19:09:44.602543] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf9500, cid 6, qid 0
00:25:12.840  [2024-12-13 19:09:44.602548] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf9680, cid 7, qid 0
00:25:12.840  [2024-12-13 19:09:44.602710] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:25:12.840  [2024-12-13 19:09:44.602716] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:25:12.840  [2024-12-13 19:09:44.602720] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:25:12.840  [2024-12-13 19:09:44.602724] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdb29b0): datao=0, datal=8192, cccid=5
00:25:12.840  [2024-12-13 19:09:44.602729] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdf9380) on tqpair(0xdb29b0): expected_datao=0, payload_size=8192
00:25:12.840  [2024-12-13 19:09:44.602733] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.840  [2024-12-13 19:09:44.602749] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:25:12.840  [2024-12-13 19:09:44.602754] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:25:12.840  [2024-12-13 19:09:44.602760] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:25:12.840  [2024-12-13 19:09:44.602765] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:25:12.840  [2024-12-13 19:09:44.602769] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:25:12.840  [2024-12-13 19:09:44.602772] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdb29b0): datao=0, datal=512, cccid=4
00:25:12.840  [2024-12-13 19:09:44.602777] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdf9200) on tqpair(0xdb29b0): expected_datao=0, payload_size=512
00:25:12.840  [2024-12-13 19:09:44.602781] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.840  [2024-12-13 19:09:44.602787] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:25:12.840  [2024-12-13 19:09:44.602791] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:25:12.840  [2024-12-13 19:09:44.602796] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:25:12.840  [2024-12-13 19:09:44.602802] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:25:12.840  [2024-12-13 19:09:44.602805] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:25:12.840  [2024-12-13 19:09:44.602809] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdb29b0): datao=0, datal=512, cccid=6
00:25:12.840  [2024-12-13 19:09:44.602813] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdf9500) on tqpair(0xdb29b0): expected_datao=0, payload_size=512
00:25:12.840  [2024-12-13 19:09:44.602817] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.840  [2024-12-13 19:09:44.602823] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:25:12.840  [2024-12-13 19:09:44.602827] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:25:12.840  [2024-12-13 19:09:44.602832] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:25:12.840  [2024-12-13 19:09:44.602837] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:25:12.840  [2024-12-13 19:09:44.602841] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:25:12.840  [2024-12-13 19:09:44.602844] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdb29b0): datao=0, datal=4096, cccid=7
00:25:12.840  [2024-12-13 19:09:44.602849] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdf9680) on tqpair(0xdb29b0): expected_datao=0, payload_size=4096
00:25:12.840  [2024-12-13 19:09:44.602853] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.840  [2024-12-13 19:09:44.602859] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:25:12.840  [2024-12-13 19:09:44.602863] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:25:12.841  [2024-12-13 19:09:44.602870] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.841  [2024-12-13 19:09:44.602876] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.841  [2024-12-13 19:09:44.602880] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.841  [2024-12-13 19:09:44.602884] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf9380) on tqpair=0xdb29b0
00:25:12.841  [2024-12-13 19:09:44.602899] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.841  [2024-12-13 19:09:44.602906] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.841  [2024-12-13 19:09:44.602909] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.841  [2024-12-13 19:09:44.602913] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf9200) on tqpair=0xdb29b0
00:25:12.841  [2024-12-13 19:09:44.602925] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.841  [2024-12-13 19:09:44.602931] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.841  [2024-12-13 19:09:44.602935] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.841  [2024-12-13 19:09:44.602939] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf9500) on tqpair=0xdb29b0
00:25:12.841  [2024-12-13 19:09:44.602946] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.841  [2024-12-13 19:09:44.602952] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.841  [2024-12-13 19:09:44.602955] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.841  [2024-12-13 19:09:44.602959] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf9680) on tqpair=0xdb29b0
00:25:12.841  [2024-12-13 19:09:44.603060] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.841  [2024-12-13 19:09:44.603067] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xdb29b0)
00:25:12.841  [2024-12-13 19:09:44.603075] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.841  [2024-12-13 19:09:44.603098] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf9680, cid 7, qid 0
00:25:12.841  [2024-12-13 19:09:44.603163] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.841  [2024-12-13 19:09:44.603170] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.841  [2024-12-13 19:09:44.603174] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.841  [2024-12-13 19:09:44.603178] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf9680) on tqpair=0xdb29b0
00:25:12.841  [2024-12-13 19:09:44.603220] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD
00:25:12.841  [2024-12-13 19:09:44.603249] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf8c00) on tqpair=0xdb29b0
00:25:12.841  [2024-12-13 19:09:44.603256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:12.841  [2024-12-13 19:09:44.603262] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf8d80) on tqpair=0xdb29b0
00:25:12.841  [2024-12-13 19:09:44.606346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:12.841  [2024-12-13 19:09:44.606356] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf8f00) on tqpair=0xdb29b0
00:25:12.841  [2024-12-13 19:09:44.606361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:12.841  [2024-12-13 19:09:44.606366] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf9080) on tqpair=0xdb29b0
00:25:12.841  [2024-12-13 19:09:44.606371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:12.841  [2024-12-13 19:09:44.606382] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.841  [2024-12-13 19:09:44.606387] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.841  [2024-12-13 19:09:44.606391] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdb29b0)
00:25:12.841  [2024-12-13 19:09:44.606400] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.841  [2024-12-13 19:09:44.606429] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf9080, cid 3, qid 0
00:25:12.841  [2024-12-13 19:09:44.606488] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.841  [2024-12-13 19:09:44.606495] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.841  [2024-12-13 19:09:44.606499] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.841  [2024-12-13 19:09:44.606503] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf9080) on tqpair=0xdb29b0
00:25:12.841  [2024-12-13 19:09:44.606512] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.841  [2024-12-13 19:09:44.606516] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.841  [2024-12-13 19:09:44.606520] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdb29b0)
00:25:12.841  [2024-12-13 19:09:44.606527] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.841  [2024-12-13 19:09:44.606549] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf9080, cid 3, qid 0
00:25:12.841  [2024-12-13 19:09:44.606618] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.841  [2024-12-13 19:09:44.606625] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.841  [2024-12-13 19:09:44.606628] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.841  [2024-12-13 19:09:44.606633] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf9080) on tqpair=0xdb29b0
00:25:12.841  [2024-12-13 19:09:44.606638] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us
00:25:12.841  [2024-12-13 19:09:44.606643] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms
00:25:12.841  [2024-12-13 19:09:44.606653] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.841  [2024-12-13 19:09:44.606658] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.841  [2024-12-13 19:09:44.606662] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdb29b0)
00:25:12.841  [2024-12-13 19:09:44.606670] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.841  [2024-12-13 19:09:44.606687] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf9080, cid 3, qid 0
00:25:12.841  [2024-12-13 19:09:44.606758] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.841  [2024-12-13 19:09:44.606764] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.841  [2024-12-13 19:09:44.606768] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.841  [2024-12-13 19:09:44.606772] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf9080) on tqpair=0xdb29b0
00:25:12.841  [2024-12-13 19:09:44.606783] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.841  [2024-12-13 19:09:44.606788] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.841  [2024-12-13 19:09:44.606792] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdb29b0)
00:25:12.841  [2024-12-13 19:09:44.606799] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.841  [2024-12-13 19:09:44.606816] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf9080, cid 3, qid 0
00:25:12.841  [2024-12-13 19:09:44.606865] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.841  [2024-12-13 19:09:44.606872] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.841  [2024-12-13 19:09:44.606875] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.841  [2024-12-13 19:09:44.606879] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf9080) on tqpair=0xdb29b0
00:25:12.841  [2024-12-13 19:09:44.606889] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.841  [2024-12-13 19:09:44.606894] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.841  [2024-12-13 19:09:44.606898] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdb29b0)
00:25:12.841  [2024-12-13 19:09:44.606905] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.841  [2024-12-13 19:09:44.606922] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf9080, cid 3, qid 0
00:25:12.841  [2024-12-13 19:09:44.606969] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.841  [2024-12-13 19:09:44.606975] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.841  [2024-12-13 19:09:44.606979] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.841  [2024-12-13 19:09:44.606983] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf9080) on tqpair=0xdb29b0
00:25:12.841  [2024-12-13 19:09:44.606993] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.841  [2024-12-13 19:09:44.606998] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.841  [2024-12-13 19:09:44.607002] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdb29b0)
00:25:12.841  [2024-12-13 19:09:44.607009] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.841  [2024-12-13 19:09:44.607026] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf9080, cid 3, qid 0
00:25:12.841  [2024-12-13 19:09:44.607075] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.841  [2024-12-13 19:09:44.607081] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.841  [2024-12-13 19:09:44.607085] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.841  [2024-12-13 19:09:44.607089] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf9080) on tqpair=0xdb29b0
00:25:12.842  [2024-12-13 19:09:44.607099] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.842  [2024-12-13 19:09:44.607104] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.842  [2024-12-13 19:09:44.607108] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdb29b0)
00:25:12.842  [2024-12-13 19:09:44.607115] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.842  [2024-12-13 19:09:44.607132] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf9080, cid 3, qid 0
00:25:12.842  [2024-12-13 19:09:44.607180] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.842  [2024-12-13 19:09:44.607186] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.842  [2024-12-13 19:09:44.607190] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.842  [2024-12-13 19:09:44.607194] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf9080) on tqpair=0xdb29b0
00:25:12.842  [2024-12-13 19:09:44.607204] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.842  [2024-12-13 19:09:44.607209] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.842  [2024-12-13 19:09:44.607213] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdb29b0)
00:25:12.842  [2024-12-13 19:09:44.607220] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.842  [2024-12-13 19:09:44.607265] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf9080, cid 3, qid 0
00:25:12.842  [2024-12-13 19:09:44.607324] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.842  [2024-12-13 19:09:44.607331] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.842  [2024-12-13 19:09:44.607335] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.842  [2024-12-13 19:09:44.607339] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf9080) on tqpair=0xdb29b0
00:25:12.842  [2024-12-13 19:09:44.607350] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.842  [2024-12-13 19:09:44.607355] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.842  [2024-12-13 19:09:44.607359] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdb29b0)
00:25:12.842  [2024-12-13 19:09:44.607367] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.842  [2024-12-13 19:09:44.607386] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf9080, cid 3, qid 0
00:25:12.842  [2024-12-13 19:09:44.607438] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.842  [2024-12-13 19:09:44.607445] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.842  [2024-12-13 19:09:44.607449] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.842  [2024-12-13 19:09:44.607453] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf9080) on tqpair=0xdb29b0
00:25:12.842  [2024-12-13 19:09:44.607463] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.842  [2024-12-13 19:09:44.607468] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.842  [2024-12-13 19:09:44.607472] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdb29b0)
00:25:12.842  [2024-12-13 19:09:44.607480] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.842  [2024-12-13 19:09:44.607497] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf9080, cid 3, qid 0
00:25:12.842  [2024-12-13 19:09:44.607545] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.842  [2024-12-13 19:09:44.607552] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.842  [2024-12-13 19:09:44.607556] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.842  [2024-12-13 19:09:44.607560] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf9080) on tqpair=0xdb29b0
00:25:12.842  [2024-12-13 19:09:44.607571] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.842  [2024-12-13 19:09:44.607575] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.842  [2024-12-13 19:09:44.607579] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdb29b0)
00:25:12.842  [2024-12-13 19:09:44.607587] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.842  [2024-12-13 19:09:44.607604] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf9080, cid 3, qid 0
00:25:12.842  [2024-12-13 19:09:44.607656] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.842  [2024-12-13 19:09:44.607663] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.842  [2024-12-13 19:09:44.607667] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.842  [2024-12-13 19:09:44.607671] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf9080) on tqpair=0xdb29b0
00:25:12.842  [2024-12-13 19:09:44.607682] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.842  [2024-12-13 19:09:44.607702] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.842  [2024-12-13 19:09:44.607706] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdb29b0)
00:25:12.842  [2024-12-13 19:09:44.607713] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.842  [2024-12-13 19:09:44.607730] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf9080, cid 3, qid 0
00:25:12.842  [2024-12-13 19:09:44.607778] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.842  [2024-12-13 19:09:44.607785] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.842  [2024-12-13 19:09:44.607789] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.842  [2024-12-13 19:09:44.607793] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf9080) on tqpair=0xdb29b0
00:25:12.842  [2024-12-13 19:09:44.607803] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.842  [2024-12-13 19:09:44.607808] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.842  [2024-12-13 19:09:44.607812] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdb29b0)
00:25:12.842  [2024-12-13 19:09:44.607819] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.842  [2024-12-13 19:09:44.607836] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf9080, cid 3, qid 0
00:25:12.842  [2024-12-13 19:09:44.607890] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.842  [2024-12-13 19:09:44.607896] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.842  [2024-12-13 19:09:44.607900] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.842  [2024-12-13 19:09:44.607904] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf9080) on tqpair=0xdb29b0
00:25:12.842  [2024-12-13 19:09:44.607914] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.842  [2024-12-13 19:09:44.607919] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.842  [2024-12-13 19:09:44.607923] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdb29b0)
00:25:12.842  [2024-12-13 19:09:44.607930] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.842  [2024-12-13 19:09:44.607947] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf9080, cid 3, qid 0
00:25:12.842  [2024-12-13 19:09:44.607995] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.842  [2024-12-13 19:09:44.608002] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.842  [2024-12-13 19:09:44.608005] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.842  [2024-12-13 19:09:44.608009] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf9080) on tqpair=0xdb29b0
00:25:12.842  [2024-12-13 19:09:44.608020] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.842  [2024-12-13 19:09:44.608024] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.842  [2024-12-13 19:09:44.608028] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdb29b0)
00:25:12.842  [2024-12-13 19:09:44.608035] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.842  [2024-12-13 19:09:44.608052] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf9080, cid 3, qid 0
00:25:12.842  [2024-12-13 19:09:44.608104] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.842  [2024-12-13 19:09:44.608110] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.842  [2024-12-13 19:09:44.608114] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.842  [2024-12-13 19:09:44.608118] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf9080) on tqpair=0xdb29b0
00:25:12.842  [2024-12-13 19:09:44.608128] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.842  [2024-12-13 19:09:44.608133] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.842  [2024-12-13 19:09:44.608137] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdb29b0)
00:25:12.842  [2024-12-13 19:09:44.608144] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.842  [2024-12-13 19:09:44.608161] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf9080, cid 3, qid 0
00:25:12.842  [2024-12-13 19:09:44.608214] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.842  [2024-12-13 19:09:44.608221] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.842  [2024-12-13 19:09:44.608240] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.842  [2024-12-13 19:09:44.608244] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf9080) on tqpair=0xdb29b0
00:25:12.842  [2024-12-13 19:09:44.608266] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.842  [2024-12-13 19:09:44.608273] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.842  [2024-12-13 19:09:44.608277] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdb29b0)
00:25:12.842  [2024-12-13 19:09:44.608284] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.842  [2024-12-13 19:09:44.608304] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf9080, cid 3, qid 0
00:25:12.842  [2024-12-13 19:09:44.608360] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.842  [2024-12-13 19:09:44.608367] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.842  [2024-12-13 19:09:44.608371] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.842  [2024-12-13 19:09:44.608375] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf9080) on tqpair=0xdb29b0
00:25:12.842  [2024-12-13 19:09:44.608386] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.842  [2024-12-13 19:09:44.608391] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.842  [2024-12-13 19:09:44.608395] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdb29b0)
00:25:12.842  [2024-12-13 19:09:44.608402] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.842  [2024-12-13 19:09:44.608420] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf9080, cid 3, qid 0
00:25:12.842  [2024-12-13 19:09:44.608471] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.842  [2024-12-13 19:09:44.608478] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.842  [2024-12-13 19:09:44.608482] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.842  [2024-12-13 19:09:44.608486] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf9080) on tqpair=0xdb29b0
00:25:12.842  [2024-12-13 19:09:44.608497] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.842  [2024-12-13 19:09:44.608502] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.842  [2024-12-13 19:09:44.608505] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdb29b0)
00:25:12.843  [2024-12-13 19:09:44.608513] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.843  [2024-12-13 19:09:44.608530] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf9080, cid 3, qid 0
00:25:12.843  [2024-12-13 19:09:44.608583] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.843  [2024-12-13 19:09:44.608589] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.843  [2024-12-13 19:09:44.608593] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.843  [2024-12-13 19:09:44.608597] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf9080) on tqpair=0xdb29b0
00:25:12.843  [2024-12-13 19:09:44.608608] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.843  [2024-12-13 19:09:44.608628] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.843  [2024-12-13 19:09:44.608632] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdb29b0)
00:25:12.843  [2024-12-13 19:09:44.608639] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.843  [2024-12-13 19:09:44.608656] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf9080, cid 3, qid 0
00:25:12.843  [2024-12-13 19:09:44.608704] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.843  [2024-12-13 19:09:44.608710] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.843  [2024-12-13 19:09:44.608714] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.843  [2024-12-13 19:09:44.608718] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf9080) on tqpair=0xdb29b0
00:25:12.843  [2024-12-13 19:09:44.608728] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.843  [2024-12-13 19:09:44.608733] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.843  [2024-12-13 19:09:44.608737] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdb29b0)
00:25:12.843  [2024-12-13 19:09:44.608744] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.843  [2024-12-13 19:09:44.608761] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf9080, cid 3, qid 0
00:25:12.843  [2024-12-13 19:09:44.608815] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.843  [2024-12-13 19:09:44.608821] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.843  [2024-12-13 19:09:44.608825] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.843  [2024-12-13 19:09:44.608829] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf9080) on tqpair=0xdb29b0
00:25:12.843  [2024-12-13 19:09:44.608839] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.843  [2024-12-13 19:09:44.608844] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.843  [2024-12-13 19:09:44.608848] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdb29b0)
00:25:12.843  [2024-12-13 19:09:44.608855] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.843  [2024-12-13 19:09:44.608872] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf9080, cid 3, qid 0
00:25:12.843  [2024-12-13 19:09:44.608923] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.843  [2024-12-13 19:09:44.608929] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.843  [2024-12-13 19:09:44.608933] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.843  [2024-12-13 19:09:44.608937] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf9080) on tqpair=0xdb29b0
00:25:12.843  [2024-12-13 19:09:44.608947] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.843  [2024-12-13 19:09:44.608952] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.843  [2024-12-13 19:09:44.608956] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdb29b0)
00:25:12.843  [2024-12-13 19:09:44.608963] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.843  [2024-12-13 19:09:44.608980] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf9080, cid 3, qid 0
00:25:12.843  [2024-12-13 19:09:44.609031] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.843  [2024-12-13 19:09:44.609037] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.843  [2024-12-13 19:09:44.609041] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.843  [2024-12-13 19:09:44.609045] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf9080) on tqpair=0xdb29b0
00:25:12.843  [2024-12-13 19:09:44.609055] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.843  [2024-12-13 19:09:44.609060] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.843  [2024-12-13 19:09:44.609064] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdb29b0)
00:25:12.843  [2024-12-13 19:09:44.609071] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.843  [2024-12-13 19:09:44.609088] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf9080, cid 3, qid 0
00:25:12.843  [2024-12-13 19:09:44.609139] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.843  [2024-12-13 19:09:44.609146] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.843  [2024-12-13 19:09:44.609149] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.843  [2024-12-13 19:09:44.609153] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf9080) on tqpair=0xdb29b0
00:25:12.843  [2024-12-13 19:09:44.609163] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.843  [2024-12-13 19:09:44.609168] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.843  [2024-12-13 19:09:44.609172] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdb29b0)
00:25:12.843  [2024-12-13 19:09:44.609179] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.843  [2024-12-13 19:09:44.609196] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf9080, cid 3, qid 0
00:25:12.843  [2024-12-13 19:09:44.609278] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.843  [2024-12-13 19:09:44.609287] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.843  [2024-12-13 19:09:44.609291] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.843  [2024-12-13 19:09:44.609295] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf9080) on tqpair=0xdb29b0
00:25:12.843  [2024-12-13 19:09:44.609306] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.843  [2024-12-13 19:09:44.609311] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.843  [2024-12-13 19:09:44.609315] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdb29b0)
00:25:12.843  [2024-12-13 19:09:44.609322] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.843  [2024-12-13 19:09:44.609341] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf9080, cid 3, qid 0
00:25:12.843  [2024-12-13 19:09:44.609391] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.843  [2024-12-13 19:09:44.609403] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.843  [2024-12-13 19:09:44.609407] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.843  [2024-12-13 19:09:44.609412] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf9080) on tqpair=0xdb29b0
00:25:12.843  [2024-12-13 19:09:44.609423] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.843  [2024-12-13 19:09:44.609428] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.843  [2024-12-13 19:09:44.609432] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdb29b0)
00:25:12.843  [2024-12-13 19:09:44.609439] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.843  [2024-12-13 19:09:44.609458] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf9080, cid 3, qid 0
00:25:12.843  [2024-12-13 19:09:44.609511] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.843  [2024-12-13 19:09:44.609518] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.843  [2024-12-13 19:09:44.609521] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.843  [2024-12-13 19:09:44.609526] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf9080) on tqpair=0xdb29b0
00:25:12.843  [2024-12-13 19:09:44.609536] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.843  [2024-12-13 19:09:44.609542] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.843  [2024-12-13 19:09:44.609546] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdb29b0)
00:25:12.843  [2024-12-13 19:09:44.609553] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.843  [2024-12-13 19:09:44.609570] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf9080, cid 3, qid 0
00:25:12.843  [2024-12-13 19:09:44.609632] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.843  [2024-12-13 19:09:44.609639] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.843  [2024-12-13 19:09:44.609643] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.843  [2024-12-13 19:09:44.609647] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf9080) on tqpair=0xdb29b0
00:25:12.843  [2024-12-13 19:09:44.609669] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.843  [2024-12-13 19:09:44.609673] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.843  [2024-12-13 19:09:44.609677] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdb29b0)
00:25:12.843  [2024-12-13 19:09:44.609684] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.843  [2024-12-13 19:09:44.609732] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf9080, cid 3, qid 0
00:25:12.843  [2024-12-13 19:09:44.609784] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.843  [2024-12-13 19:09:44.609791] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.843  [2024-12-13 19:09:44.609795] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.843  [2024-12-13 19:09:44.609799] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf9080) on tqpair=0xdb29b0
00:25:12.843  [2024-12-13 19:09:44.609810] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.843  [2024-12-13 19:09:44.609815] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.843  [2024-12-13 19:09:44.609819] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdb29b0)
00:25:12.843  [2024-12-13 19:09:44.609826] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.843  [2024-12-13 19:09:44.609844] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf9080, cid 3, qid 0
00:25:12.843  [2024-12-13 19:09:44.609895] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.843  [2024-12-13 19:09:44.609902] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.843  [2024-12-13 19:09:44.609906] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.843  [2024-12-13 19:09:44.609910] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf9080) on tqpair=0xdb29b0
00:25:12.843  [2024-12-13 19:09:44.609920] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.843  [2024-12-13 19:09:44.609925] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.843  [2024-12-13 19:09:44.609930] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdb29b0)
00:25:12.843  [2024-12-13 19:09:44.609937] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.843  [2024-12-13 19:09:44.609955] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf9080, cid 3, qid 0
00:25:12.844  [2024-12-13 19:09:44.610005] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.844  [2024-12-13 19:09:44.610031] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.844  [2024-12-13 19:09:44.610036] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.844  [2024-12-13 19:09:44.610041] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf9080) on tqpair=0xdb29b0
00:25:12.844  [2024-12-13 19:09:44.610051] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.844  [2024-12-13 19:09:44.610056] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.844  [2024-12-13 19:09:44.610060] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdb29b0)
00:25:12.844  [2024-12-13 19:09:44.610068] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.844  [2024-12-13 19:09:44.610086] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf9080, cid 3, qid 0
00:25:12.844  [2024-12-13 19:09:44.610135] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.844  [2024-12-13 19:09:44.610146] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.844  [2024-12-13 19:09:44.610150] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.844  [2024-12-13 19:09:44.610154] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf9080) on tqpair=0xdb29b0
00:25:12.844  [2024-12-13 19:09:44.610165] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.844  [2024-12-13 19:09:44.610170] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.844  [2024-12-13 19:09:44.610174] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdb29b0)
00:25:12.844  [2024-12-13 19:09:44.610181] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.844  [2024-12-13 19:09:44.610199] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf9080, cid 3, qid 0
00:25:12.844  [2024-12-13 19:09:44.613269] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.844  [2024-12-13 19:09:44.613288] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.844  [2024-12-13 19:09:44.613293] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.844  [2024-12-13 19:09:44.613298] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf9080) on tqpair=0xdb29b0
00:25:12.844  [2024-12-13 19:09:44.613311] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.844  [2024-12-13 19:09:44.613317] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.844  [2024-12-13 19:09:44.613320] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdb29b0)
00:25:12.844  [2024-12-13 19:09:44.613329] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.844  [2024-12-13 19:09:44.613353] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdf9080, cid 3, qid 0
00:25:12.844  [2024-12-13 19:09:44.613410] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.844  [2024-12-13 19:09:44.613417] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.844  [2024-12-13 19:09:44.613421] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.844  [2024-12-13 19:09:44.613425] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdf9080) on tqpair=0xdb29b0
00:25:12.844  [2024-12-13 19:09:44.613434] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 6 milliseconds
00:25:12.844  Data Units Read:             0
00:25:12.844  Data Units Written:          0
00:25:12.844  Host Read Commands:          0
00:25:12.844  Host Write Commands:         0
00:25:12.844  Controller Busy Time:        0 minutes
00:25:12.844  Power Cycles:                0
00:25:12.844  Power On Hours:              0 hours
00:25:12.844  Unsafe Shutdowns:            0
00:25:12.844  Unrecoverable Media Errors:  0
00:25:12.844  Lifetime Error Log Entries:  0
00:25:12.844  Warning Temperature Time:    0 minutes
00:25:12.844  Critical Temperature Time:   0 minutes
00:25:12.844  
00:25:12.844  Number of Queues
00:25:12.844  ================
00:25:12.844  Number of I/O Submission Queues:      127
00:25:12.844  Number of I/O Completion Queues:      127
00:25:12.844  
00:25:12.844  Active Namespaces
00:25:12.844  =================
00:25:12.844  Namespace ID:1
00:25:12.844  Error Recovery Timeout:                Unlimited
00:25:12.844  Command Set Identifier:                NVM (00h)
00:25:12.844  Deallocate:                            Supported
00:25:12.844  Deallocated/Unwritten Error:           Not Supported
00:25:12.844  Deallocated Read Value:                Unknown
00:25:12.844  Deallocate in Write Zeroes:            Not Supported
00:25:12.844  Deallocated Guard Field:               0xFFFF
00:25:12.844  Flush:                                 Supported
00:25:12.844  Reservation:                           Supported
00:25:12.844  Namespace Sharing Capabilities:        Multiple Controllers
00:25:12.844  Size (in LBAs):                        131072 (0GiB)
00:25:12.844  Capacity (in LBAs):                    131072 (0GiB)
00:25:12.844  Utilization (in LBAs):                 131072 (0GiB)
00:25:12.844  NGUID:                                 ABCDEF0123456789ABCDEF0123456789
00:25:12.844  EUI64:                                 ABCDEF0123456789
00:25:12.844  UUID:                                  3dbfc942-3f85-4662-96a5-2b62e0a44ed3
00:25:12.844  Thin Provisioning:                     Not Supported
00:25:12.844  Per-NS Atomic Units:                   Yes
00:25:12.844    Atomic Boundary Size (Normal):       0
00:25:12.844    Atomic Boundary Size (PFail):        0
00:25:12.844    Atomic Boundary Offset:              0
00:25:12.844  Maximum Single Source Range Length:    65535
00:25:12.844  Maximum Copy Length:                   65535
00:25:12.844  Maximum Source Range Count:            1
00:25:12.844  NGUID/EUI64 Never Reused:              No
00:25:12.844  Namespace Write Protected:             No
00:25:12.844  Number of LBA Formats:                 1
00:25:12.844  Current LBA Format:                    LBA Format #00
00:25:12.844  LBA Format #00: Data Size:   512  Metadata Size:     0
00:25:12.844  
00:25:12.844   19:09:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync
00:25:13.103   19:09:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:25:13.103   19:09:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:13.103   19:09:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:25:13.103   19:09:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:13.103   19:09:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT
00:25:13.103   19:09:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini
00:25:13.103   19:09:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup
00:25:13.103   19:09:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync
00:25:13.103   19:09:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:25:13.103   19:09:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e
00:25:13.103   19:09:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20}
00:25:13.103   19:09:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:25:13.103  rmmod nvme_tcp
00:25:13.103  rmmod nvme_fabrics
00:25:13.103  rmmod nvme_keyring
00:25:13.103   19:09:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:25:13.103   19:09:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e
00:25:13.103   19:09:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0
00:25:13.103   19:09:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 108334 ']'
00:25:13.103   19:09:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 108334
00:25:13.103   19:09:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 108334 ']'
00:25:13.103   19:09:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 108334
00:25:13.103    19:09:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname
00:25:13.103   19:09:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:13.103    19:09:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108334
00:25:13.103   19:09:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:25:13.103   19:09:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:25:13.103  killing process with pid 108334
00:25:13.103   19:09:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 108334'
00:25:13.103   19:09:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 108334
00:25:13.103   19:09:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 108334
00:25:13.362   19:09:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:25:13.362   19:09:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:25:13.362   19:09:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:25:13.362   19:09:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr
00:25:13.362   19:09:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save
00:25:13.362   19:09:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:25:13.362   19:09:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore
00:25:13.362   19:09:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:25:13.362   19:09:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:25:13.362   19:09:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:25:13.362   19:09:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:25:13.362   19:09:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:25:13.362   19:09:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:25:13.362   19:09:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:25:13.362   19:09:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:25:13.362   19:09:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:25:13.362   19:09:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:25:13.362   19:09:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:25:13.362   19:09:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:25:13.362   19:09:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:25:13.362   19:09:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:25:13.620   19:09:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:25:13.620   19:09:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns
00:25:13.620   19:09:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:13.620   19:09:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:25:13.620    19:09:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:13.620   19:09:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0
00:25:13.620  
00:25:13.620  real	0m2.345s
00:25:13.620  user	0m5.076s
00:25:13.620  sys	0m0.756s
00:25:13.620   19:09:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:25:13.620   19:09:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:25:13.620  ************************************
00:25:13.620  END TEST nvmf_identify
00:25:13.620  ************************************
00:25:13.620   19:09:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp
00:25:13.620   19:09:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:25:13.620   19:09:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:25:13.620   19:09:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:25:13.620  ************************************
00:25:13.620  START TEST nvmf_perf
00:25:13.620  ************************************
00:25:13.620   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp
00:25:13.620  * Looking for test storage...
00:25:13.620  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
00:25:13.620    19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:25:13.620     19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version
00:25:13.620     19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:25:13.880    19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:25:13.880    19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:25:13.880    19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l
00:25:13.880    19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l
00:25:13.880    19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-:
00:25:13.880    19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1
00:25:13.880    19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-:
00:25:13.880    19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2
00:25:13.880    19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<'
00:25:13.880    19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2
00:25:13.880    19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1
00:25:13.880    19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:25:13.880    19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in
00:25:13.880    19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1
00:25:13.880    19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 ))
00:25:13.880    19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:25:13.880     19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1
00:25:13.880     19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1
00:25:13.880     19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:25:13.880     19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1
00:25:13.880    19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1
00:25:13.880     19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2
00:25:13.880     19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2
00:25:13.880     19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:25:13.880     19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2
00:25:13.880    19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2
00:25:13.880    19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:25:13.880    19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:25:13.880    19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0
00:25:13.880    19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:25:13.880    19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:25:13.880  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:13.880  		--rc genhtml_branch_coverage=1
00:25:13.880  		--rc genhtml_function_coverage=1
00:25:13.880  		--rc genhtml_legend=1
00:25:13.880  		--rc geninfo_all_blocks=1
00:25:13.880  		--rc geninfo_unexecuted_blocks=1
00:25:13.880  		
00:25:13.880  		'
00:25:13.880    19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:25:13.880  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:13.880  		--rc genhtml_branch_coverage=1
00:25:13.880  		--rc genhtml_function_coverage=1
00:25:13.880  		--rc genhtml_legend=1
00:25:13.880  		--rc geninfo_all_blocks=1
00:25:13.880  		--rc geninfo_unexecuted_blocks=1
00:25:13.880  		
00:25:13.880  		'
00:25:13.880    19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:25:13.880  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:13.880  		--rc genhtml_branch_coverage=1
00:25:13.880  		--rc genhtml_function_coverage=1
00:25:13.880  		--rc genhtml_legend=1
00:25:13.880  		--rc geninfo_all_blocks=1
00:25:13.880  		--rc geninfo_unexecuted_blocks=1
00:25:13.880  		
00:25:13.880  		'
00:25:13.880    19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:25:13.880  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:13.880  		--rc genhtml_branch_coverage=1
00:25:13.880  		--rc genhtml_function_coverage=1
00:25:13.880  		--rc genhtml_legend=1
00:25:13.880  		--rc geninfo_all_blocks=1
00:25:13.880  		--rc geninfo_unexecuted_blocks=1
00:25:13.880  		
00:25:13.880  		'
00:25:13.880   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:25:13.880     19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s
00:25:13.880    19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:25:13.880    19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:25:13.880    19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:25:13.880    19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:25:13.880    19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:25:13.880    19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:25:13.880    19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:25:13.880    19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:25:13.880    19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:25:13.881     19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:25:13.881    19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:25:13.881    19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:25:13.881    19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:25:13.881    19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:25:13.881    19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:25:13.881    19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:25:13.881    19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:25:13.881     19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob
00:25:13.881     19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:25:13.881     19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:25:13.881     19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:25:13.881      19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:13.881      19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:13.881      19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:13.881      19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH
00:25:13.881      19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:13.881    19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0
00:25:13.881    19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:25:13.881    19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:25:13.881    19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:25:13.881    19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:25:13.881    19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:25:13.881    19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:25:13.881  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:25:13.881    19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:25:13.881    19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:25:13.881    19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0
00:25:13.881   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64
00:25:13.881   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512
00:25:13.881   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:25:13.881   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit
00:25:13.881   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:25:13.881   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:25:13.881   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs
00:25:13.881   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no
00:25:13.881   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns
00:25:13.881   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:13.881   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:25:13.881    19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:13.881   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:25:13.881   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:25:13.881   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:25:13.881   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:25:13.881   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:25:13.881   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@460 -- # nvmf_veth_init
00:25:13.881   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:25:13.881   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:25:13.881   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:25:13.881   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:25:13.881   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:25:13.881   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:25:13.881   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:25:13.881   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:25:13.881   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:25:13.881   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:25:13.881   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:25:13.881   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:25:13.881   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:25:13.881   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:25:13.881   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:25:13.881   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:25:13.881   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:25:13.881  Cannot find device "nvmf_init_br"
00:25:13.881   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true
00:25:13.881   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:25:13.881  Cannot find device "nvmf_init_br2"
00:25:13.881   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true
00:25:13.881   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:25:13.881  Cannot find device "nvmf_tgt_br"
00:25:13.881   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true
00:25:13.881   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:25:13.881  Cannot find device "nvmf_tgt_br2"
00:25:13.881   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true
00:25:13.881   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:25:13.881  Cannot find device "nvmf_init_br"
00:25:13.881   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true
00:25:13.881   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:25:13.881  Cannot find device "nvmf_init_br2"
00:25:13.881   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true
00:25:13.881   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:25:13.881  Cannot find device "nvmf_tgt_br"
00:25:13.881   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true
00:25:13.881   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:25:13.881  Cannot find device "nvmf_tgt_br2"
00:25:13.881   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true
00:25:13.881   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:25:13.881  Cannot find device "nvmf_br"
00:25:13.881   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true
00:25:13.881   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:25:13.881  Cannot find device "nvmf_init_if"
00:25:13.881   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true
00:25:13.881   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:25:13.881  Cannot find device "nvmf_init_if2"
00:25:13.881   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true
00:25:13.881   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:25:13.881  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:25:13.881   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true
00:25:13.881   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:25:13.881  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:25:13.881   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true
00:25:13.881   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:25:13.881   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:25:13.881   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:25:13.881   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:25:13.881   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:25:13.881   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:25:13.881   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:25:13.881   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:25:13.881   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:25:13.882   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:25:13.882   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:25:13.882   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:25:14.140   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:25:14.140   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:25:14.140   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:25:14.140   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:25:14.140   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:25:14.140   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:25:14.140   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:25:14.140   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:25:14.140   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:25:14.140   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:25:14.140   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:25:14.140   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:25:14.140   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:25:14.140   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:25:14.140   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:25:14.140   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:25:14.140   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:25:14.140   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:25:14.140   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:25:14.141   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:25:14.141   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:25:14.141  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:25:14.141  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms
00:25:14.141  
00:25:14.141  --- 10.0.0.3 ping statistics ---
00:25:14.141  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:14.141  rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms
00:25:14.141   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:25:14.141  PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:25:14.141  64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms
00:25:14.141  
00:25:14.141  --- 10.0.0.4 ping statistics ---
00:25:14.141  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:14.141  rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms
00:25:14.141   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:25:14.141  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:25:14.141  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms
00:25:14.141  
00:25:14.141  --- 10.0.0.1 ping statistics ---
00:25:14.141  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:14.141  rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms
00:25:14.141   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:25:14.141  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:25:14.141  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms
00:25:14.141  
00:25:14.141  --- 10.0.0.2 ping statistics ---
00:25:14.141  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:14.141  rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms
00:25:14.141   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:25:14.141   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@461 -- # return 0
00:25:14.141   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:25:14.141   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:25:14.141   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:25:14.141   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:25:14.141   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:25:14.141   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:25:14.141   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:25:14.141   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF
00:25:14.141   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:25:14.141   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable
00:25:14.141   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:25:14.141   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=108596
00:25:14.141   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 108596
00:25:14.141   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 108596 ']'
00:25:14.141   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:14.141   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:25:14.141   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100
00:25:14.141   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:14.141  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:14.141   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable
00:25:14.141   19:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:25:14.141  [2024-12-13 19:09:45.928623] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:25:14.141  [2024-12-13 19:09:45.928718] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:25:14.399  [2024-12-13 19:09:46.078948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:25:14.399  [2024-12-13 19:09:46.116652] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:25:14.399  [2024-12-13 19:09:46.116736] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:25:14.399  [2024-12-13 19:09:46.116762] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:25:14.399  [2024-12-13 19:09:46.116770] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:25:14.399  [2024-12-13 19:09:46.116776] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:25:14.399  [2024-12-13 19:09:46.117979] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:25:14.400  [2024-12-13 19:09:46.118089] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:25:14.400  [2024-12-13 19:09:46.118232] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:25:14.400  [2024-12-13 19:09:46.118248] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:25:14.658   19:09:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:25:14.658   19:09:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0
00:25:14.658   19:09:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:25:14.658   19:09:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable
00:25:14.658   19:09:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:25:14.658   19:09:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:25:14.658   19:09:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config
00:25:14.658   19:09:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:25:15.226    19:09:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev
00:25:15.226    19:09:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr'
00:25:15.484   19:09:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0
00:25:15.484    19:09:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:25:15.743   19:09:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0'
00:25:15.743   19:09:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']'
00:25:15.743   19:09:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1'
00:25:15.743   19:09:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']'
00:25:15.743   19:09:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:25:16.002  [2024-12-13 19:09:47.643661] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:25:16.002   19:09:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:25:16.260   19:09:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:25:16.260   19:09:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:25:16.519   19:09:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:25:16.519   19:09:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
00:25:16.777   19:09:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:25:17.036  [2024-12-13 19:09:48.760969] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:25:17.036   19:09:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
00:25:17.294   19:09:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']'
00:25:17.295   19:09:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0'
00:25:17.295   19:09:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']'
00:25:17.295   19:09:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0'
00:25:18.671  Initializing NVMe Controllers
00:25:18.671  Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:25:18.671  Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:25:18.671  Initialization complete. Launching workers.
00:25:18.671  ========================================================
00:25:18.671                                                                             Latency(us)
00:25:18.671  Device Information                     :       IOPS      MiB/s    Average        min        max
00:25:18.671  PCIE (0000:00:10.0) NSID 1 from core  0:   22446.05      87.68    1425.21     295.95    8932.60
00:25:18.671  ========================================================
00:25:18.671  Total                                  :   22446.05      87.68    1425.21     295.95    8932.60
00:25:18.671  
00:25:18.671   19:09:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
00:25:19.608  Initializing NVMe Controllers
00:25:19.608  Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:25:19.608  Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:25:19.608  Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:25:19.608  Initialization complete. Launching workers.
00:25:19.608  ========================================================
00:25:19.608                                                                                                               Latency(us)
00:25:19.608  Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:25:19.608  TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  0:    3847.55      15.03     259.59     100.49    7168.43
00:25:19.608  TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core  0:     121.64       0.48    8285.85    5946.23   14957.29
00:25:19.608  ========================================================
00:25:19.608  Total                                                                    :    3969.19      15.50     505.56     100.49   14957.29
00:25:19.608  
00:25:19.867   19:09:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
00:25:21.245  Initializing NVMe Controllers
00:25:21.245  Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:25:21.245  Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:25:21.245  Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:25:21.245  Initialization complete. Launching workers.
00:25:21.245  ========================================================
00:25:21.245                                                                                                               Latency(us)
00:25:21.245  Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:25:21.245  TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  0:    9803.00      38.29    3265.21     672.99    7130.88
00:25:21.245  TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core  0:    2676.00      10.45   12059.01    6769.64   20046.12
00:25:21.245  ========================================================
00:25:21.246  Total                                                                    :   12479.00      48.75    5150.96     672.99   20046.12
00:25:21.246  
00:25:21.246   19:09:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]]
00:25:21.246   19:09:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
00:25:23.776  Initializing NVMe Controllers
00:25:23.776  Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:25:23.776  Controller IO queue size 128, less than required.
00:25:23.776  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:23.776  Controller IO queue size 128, less than required.
00:25:23.776  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:23.776  Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:25:23.776  Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:25:23.776  Initialization complete. Launching workers.
00:25:23.776  ========================================================
00:25:23.776                                                                                                               Latency(us)
00:25:23.776  Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:25:23.776  TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  0:    1775.18     443.80   73317.97   45471.41  134869.41
00:25:23.776  TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core  0:     634.17     158.54  212272.70   71347.31  303255.73
00:25:23.776  ========================================================
00:25:23.776  Total                                                                    :    2409.36     602.34  109892.58   45471.41  303255.73
00:25:23.776  
00:25:23.776   19:09:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4
00:25:24.035  Initializing NVMe Controllers
00:25:24.035  Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:25:24.035  Controller IO queue size 128, less than required.
00:25:24.035  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:24.035  WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:25:24.035  Controller IO queue size 128, less than required.
00:25:24.035  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:24.035  WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test
00:25:24.035  WARNING: Some requested NVMe devices were skipped
00:25:24.035  No valid NVMe controllers or AIO or URING devices found
00:25:24.035   19:09:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat
00:25:26.565  Initializing NVMe Controllers
00:25:26.565  Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:25:26.565  Controller IO queue size 128, less than required.
00:25:26.565  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:26.565  Controller IO queue size 128, less than required.
00:25:26.565  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:26.565  Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:25:26.565  Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:25:26.565  Initialization complete. Launching workers.
00:25:26.565  
00:25:26.565  ====================
00:25:26.565  lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:25:26.565  TCP transport:
00:25:26.565  	polls:              10456
00:25:26.565  	idle_polls:         5788
00:25:26.565  	sock_completions:   4668
00:25:26.565  	nvme_completions:   5021
00:25:26.565  	submitted_requests: 7566
00:25:26.565  	queued_requests:    1
00:25:26.565  
00:25:26.565  ====================
00:25:26.565  lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:25:26.565  TCP transport:
00:25:26.565  	polls:              10848
00:25:26.565  	idle_polls:         7656
00:25:26.565  	sock_completions:   3192
00:25:26.565  	nvme_completions:   6101
00:25:26.565  	submitted_requests: 9090
00:25:26.565  	queued_requests:    1
00:25:26.565  ========================================================
00:25:26.565                                                                                                               Latency(us)
00:25:26.565  Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:25:26.565  TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  0:    1252.62     313.15  105382.25   45557.95  222170.42
00:25:26.565  TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core  0:    1522.10     380.53   84483.66   44507.24  345933.56
00:25:26.565  ========================================================
00:25:26.565  Total                                                                    :    2774.72     693.68   93918.09   44507.24  345933.56
00:25:26.565  
00:25:26.565   19:09:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync
00:25:26.565   19:09:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:25:26.824   19:09:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']'
00:25:26.824   19:09:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:00:10.0 ']'
00:25:26.824    19:09:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0
00:25:27.083   19:09:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=4228dd7e-086b-431b-81b7-d2aefccac8df
00:25:27.083   19:09:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 4228dd7e-086b-431b-81b7-d2aefccac8df
00:25:27.083   19:09:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=4228dd7e-086b-431b-81b7-d2aefccac8df
00:25:27.083   19:09:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info
00:25:27.083   19:09:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc
00:25:27.083   19:09:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs
00:25:27.083    19:09:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:25:27.341   19:09:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[
00:25:27.341    {
00:25:27.341      "base_bdev": "Nvme0n1",
00:25:27.341      "block_size": 4096,
00:25:27.341      "cluster_size": 4194304,
00:25:27.341      "free_clusters": 1278,
00:25:27.341      "name": "lvs_0",
00:25:27.341      "total_data_clusters": 1278,
00:25:27.341      "uuid": "4228dd7e-086b-431b-81b7-d2aefccac8df"
00:25:27.341    }
00:25:27.341  ]'
00:25:27.341    19:09:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="4228dd7e-086b-431b-81b7-d2aefccac8df") .free_clusters'
00:25:27.599   19:09:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=1278
00:25:27.599    19:09:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="4228dd7e-086b-431b-81b7-d2aefccac8df") .cluster_size'
00:25:27.599  5112
00:25:27.600   19:09:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304
00:25:27.600   19:09:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=5112
00:25:27.600   19:09:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 5112
00:25:27.600   19:09:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']'
00:25:27.600    19:09:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 4228dd7e-086b-431b-81b7-d2aefccac8df lbd_0 5112
00:25:27.858   19:09:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=27c02d46-4f99-4189-be55-c6d33d0e6221
00:25:27.858    19:09:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore 27c02d46-4f99-4189-be55-c6d33d0e6221 lvs_n_0
00:25:28.117   19:09:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=6f155f8d-b07e-4846-91d7-367a89830034
00:25:28.117   19:09:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 6f155f8d-b07e-4846-91d7-367a89830034
00:25:28.117   19:09:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=6f155f8d-b07e-4846-91d7-367a89830034
00:25:28.117   19:09:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info
00:25:28.117   19:09:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc
00:25:28.117   19:09:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs
00:25:28.117    19:09:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:25:28.375   19:10:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[
00:25:28.375    {
00:25:28.375      "base_bdev": "Nvme0n1",
00:25:28.375      "block_size": 4096,
00:25:28.375      "cluster_size": 4194304,
00:25:28.375      "free_clusters": 0,
00:25:28.375      "name": "lvs_0",
00:25:28.375      "total_data_clusters": 1278,
00:25:28.375      "uuid": "4228dd7e-086b-431b-81b7-d2aefccac8df"
00:25:28.375    },
00:25:28.375    {
00:25:28.375      "base_bdev": "27c02d46-4f99-4189-be55-c6d33d0e6221",
00:25:28.375      "block_size": 4096,
00:25:28.375      "cluster_size": 4194304,
00:25:28.375      "free_clusters": 1276,
00:25:28.375      "name": "lvs_n_0",
00:25:28.375      "total_data_clusters": 1276,
00:25:28.375      "uuid": "6f155f8d-b07e-4846-91d7-367a89830034"
00:25:28.375    }
00:25:28.375  ]'
00:25:28.375    19:10:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="6f155f8d-b07e-4846-91d7-367a89830034") .free_clusters'
00:25:28.375   19:10:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=1276
00:25:28.375    19:10:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="6f155f8d-b07e-4846-91d7-367a89830034") .cluster_size'
00:25:28.635  5104
00:25:28.635   19:10:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304
00:25:28.635   19:10:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=5104
00:25:28.635   19:10:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 5104
00:25:28.635   19:10:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']'
00:25:28.635    19:10:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6f155f8d-b07e-4846-91d7-367a89830034 lbd_nest_0 5104
00:25:28.897   19:10:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=cccaf65f-4b3d-4399-acbb-61f488f6b6df
00:25:28.897   19:10:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:25:29.155   19:10:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid
00:25:29.155   19:10:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 cccaf65f-4b3d-4399-acbb-61f488f6b6df
00:25:29.414   19:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:25:29.672   19:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128")
00:25:29.672   19:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072")
00:25:29.672   19:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}"
00:25:29.672   19:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:25:29.672   19:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
00:25:29.930  Initializing NVMe Controllers
00:25:29.930  Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:25:29.930  WARNING: controller SPDK bdev Controller (SPDK00000000000001  ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512
00:25:29.930  WARNING: Some requested NVMe devices were skipped
00:25:29.930  No valid NVMe controllers or AIO or URING devices found
00:25:29.930   19:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:25:29.930   19:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
00:25:42.133  Initializing NVMe Controllers
00:25:42.133  Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:25:42.133  Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:25:42.133  Initialization complete. Launching workers.
00:25:42.133  ========================================================
00:25:42.133                                                                                                               Latency(us)
00:25:42.133  Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:25:42.133  TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  0:     933.21     116.65    1071.24     344.83    8315.33
00:25:42.133  ========================================================
00:25:42.133  Total                                                                    :     933.21     116.65    1071.24     344.83    8315.33
00:25:42.133  
00:25:42.133   19:10:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}"
00:25:42.133   19:10:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:25:42.133   19:10:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
00:25:42.133  Initializing NVMe Controllers
00:25:42.133  Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:25:42.133  WARNING: controller SPDK bdev Controller (SPDK00000000000001  ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512
00:25:42.133  WARNING: Some requested NVMe devices were skipped
00:25:42.133  No valid NVMe controllers or AIO or URING devices found
00:25:42.133   19:10:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:25:42.133   19:10:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
00:25:52.104  Initializing NVMe Controllers
00:25:52.104  Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:25:52.104  Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:25:52.104  Initialization complete. Launching workers.
00:25:52.104  ========================================================
00:25:52.104                                                                                                               Latency(us)
00:25:52.104  Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:25:52.104  TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  0:    1115.40     139.42   28748.27    8050.31  284263.82
00:25:52.104  ========================================================
00:25:52.104  Total                                                                    :    1115.40     139.42   28748.27    8050.31  284263.82
00:25:52.104  
00:25:52.104   19:10:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}"
00:25:52.104   19:10:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:25:52.104   19:10:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
00:25:52.104  Initializing NVMe Controllers
00:25:52.104  Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:25:52.104  WARNING: controller SPDK bdev Controller (SPDK00000000000001  ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512
00:25:52.104  WARNING: Some requested NVMe devices were skipped
00:25:52.104  No valid NVMe controllers or AIO or URING devices found
00:25:52.104   19:10:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:25:52.104   19:10:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
00:26:02.078  Initializing NVMe Controllers
00:26:02.078  Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:26:02.078  Controller IO queue size 128, less than required.
00:26:02.078  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:02.078  Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:26:02.078  Initialization complete. Launching workers.
00:26:02.078  ========================================================
00:26:02.078                                                                                                               Latency(us)
00:26:02.078  Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:26:02.078  TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  0:    4160.79     520.10   30776.56   10721.34   71530.88
00:26:02.078  ========================================================
00:26:02.078  Total                                                                    :    4160.79     520.10   30776.56   10721.34   71530.88
00:26:02.078  
00:26:02.078   19:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:26:02.078   19:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete cccaf65f-4b3d-4399-acbb-61f488f6b6df
00:26:02.078   19:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0
00:26:02.336   19:10:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 27c02d46-4f99-4189-be55-c6d33d0e6221
00:26:02.593   19:10:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0
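Cleanup runs in reverse creation order: subsystem first, then the lvol carved from the nested store lvs_n_0, then that store, then the base lvol and lvs_0. The same sequence gathered from the scattered xtrace lines above (UUIDs are the ones printed in this run):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    $rpc bdev_lvol_delete cccaf65f-4b3d-4399-acbb-61f488f6b6df   # lvol on the nested store
    $rpc bdev_lvol_delete_lvstore -l lvs_n_0                     # nested lvstore
    $rpc bdev_lvol_delete 27c02d46-4f99-4189-be55-c6d33d0e6221   # base lvol
    $rpc bdev_lvol_delete_lvstore -l lvs_0                       # base lvstore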
00:26:02.851   19:10:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:26:02.851   19:10:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:26:02.851   19:10:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup
00:26:02.851   19:10:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync
00:26:02.851   19:10:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:26:02.851   19:10:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e
00:26:02.851   19:10:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20}
00:26:02.851   19:10:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:26:02.851  rmmod nvme_tcp
00:26:02.851  rmmod nvme_fabrics
00:26:03.110  rmmod nvme_keyring
00:26:03.110   19:10:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:26:03.110   19:10:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e
00:26:03.110   19:10:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0
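nvmfcleanup disables errexit and retries the nvme-tcp / nvme-fabrics unload (up to 20 attempts per the loop traced above), since the modules can still hold references briefly after the target exits. A minimal sketch of that pattern; the back-off sleep is an assumption, not taken from the log:

    set +e                               # rmmod can fail while references drain
    for i in {1..20}; do
      modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
      sleep 1                            # assumed back-off between attempts
    done
    set -e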
00:26:03.110   19:10:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 108596 ']'
00:26:03.110   19:10:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 108596
00:26:03.110   19:10:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 108596 ']'
00:26:03.110   19:10:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 108596
00:26:03.110    19:10:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname
00:26:03.110   19:10:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:03.110    19:10:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108596
00:26:03.110  killing process with pid 108596
00:26:03.110   19:10:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:26:03.110   19:10:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:26:03.110   19:10:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 108596'
00:26:03.110   19:10:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 108596
00:26:03.110   19:10:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 108596
00:26:04.485   19:10:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:26:04.485   19:10:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:26:04.485   19:10:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:26:04.485   19:10:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr
00:26:04.485   19:10:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save
00:26:04.485   19:10:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:26:04.485   19:10:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore
00:26:04.485   19:10:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:26:04.485   19:10:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:26:04.485   19:10:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:26:04.485   19:10:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:26:04.485   19:10:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:26:04.485   19:10:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:26:04.485   19:10:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:26:04.485   19:10:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:26:04.485   19:10:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:26:04.485   19:10:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:26:04.485   19:10:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:26:04.485   19:10:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:26:04.485   19:10:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:26:04.744   19:10:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:26:04.744   19:10:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:26:04.744   19:10:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns
00:26:04.744   19:10:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:04.744   19:10:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:04.744    19:10:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:04.744   19:10:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0
00:26:04.744  
00:26:04.744  real	0m51.084s
00:26:04.744  user	3m11.862s
00:26:04.744  sys	0m10.302s
00:26:04.744   19:10:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:26:04.744   19:10:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:26:04.744  ************************************
00:26:04.744  END TEST nvmf_perf
00:26:04.744  ************************************
00:26:04.744   19:10:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp
00:26:04.744   19:10:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:26:04.744   19:10:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:26:04.744   19:10:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:26:04.744  ************************************
00:26:04.744  START TEST nvmf_fio_host
00:26:04.744  ************************************
00:26:04.744   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp
00:26:04.744  * Looking for test storage...
00:26:04.744  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
00:26:04.744    19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:26:04.744     19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version
00:26:04.744     19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:26:05.004    19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:26:05.004    19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:26:05.004    19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l
00:26:05.004    19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l
00:26:05.004    19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-:
00:26:05.004    19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1
00:26:05.004    19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-:
00:26:05.004    19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2
00:26:05.004    19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<'
00:26:05.004    19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2
00:26:05.004    19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1
00:26:05.004    19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:26:05.004    19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in
00:26:05.004    19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1
00:26:05.004    19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 ))
00:26:05.004    19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:26:05.004     19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1
00:26:05.004     19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1
00:26:05.004     19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:26:05.004     19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1
00:26:05.004    19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1
00:26:05.004     19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2
00:26:05.004     19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2
00:26:05.004     19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:26:05.004     19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2
00:26:05.004    19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2
00:26:05.004    19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:26:05.004    19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:26:05.004    19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0
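The lt 1.15 2 trace above splits each version string on ".", "-" and ":" and compares component by component; 1 < 2 in the first position, so the branch/function coverage options get exported below. A simplified sketch of that comparison, not the scripts/common.sh implementation verbatim:

    lt_version() {                       # succeeds when $1 < $2
      local IFS='.-:' a b v
      read -ra a <<< "$1"; read -ra b <<< "$2"
      for ((v = 0; v < ${#a[@]} || v < ${#b[@]}; v++)); do
        (( ${a[v]:-0} < ${b[v]:-0} )) && return 0
        (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
      done
      return 1                           # equal
    }
    lt_version 1.15 2 && echo "1.15 < 2, keep the lcov coverage flags"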
00:26:05.004    19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:26:05.004    19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:26:05.004  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:05.004  		--rc genhtml_branch_coverage=1
00:26:05.004  		--rc genhtml_function_coverage=1
00:26:05.004  		--rc genhtml_legend=1
00:26:05.004  		--rc geninfo_all_blocks=1
00:26:05.004  		--rc geninfo_unexecuted_blocks=1
00:26:05.004  		
00:26:05.004  		'
00:26:05.004    19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:26:05.004  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:05.004  		--rc genhtml_branch_coverage=1
00:26:05.004  		--rc genhtml_function_coverage=1
00:26:05.004  		--rc genhtml_legend=1
00:26:05.004  		--rc geninfo_all_blocks=1
00:26:05.004  		--rc geninfo_unexecuted_blocks=1
00:26:05.004  		
00:26:05.004  		'
00:26:05.004    19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:26:05.004  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:05.004  		--rc genhtml_branch_coverage=1
00:26:05.004  		--rc genhtml_function_coverage=1
00:26:05.004  		--rc genhtml_legend=1
00:26:05.004  		--rc geninfo_all_blocks=1
00:26:05.004  		--rc geninfo_unexecuted_blocks=1
00:26:05.004  		
00:26:05.004  		'
00:26:05.004    19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:26:05.004  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:05.004  		--rc genhtml_branch_coverage=1
00:26:05.004  		--rc genhtml_function_coverage=1
00:26:05.004  		--rc genhtml_legend=1
00:26:05.004  		--rc geninfo_all_blocks=1
00:26:05.004  		--rc geninfo_unexecuted_blocks=1
00:26:05.004  		
00:26:05.004  		'
00:26:05.004   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:26:05.004    19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob
00:26:05.004    19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:26:05.004    19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:26:05.004    19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:26:05.004     19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:05.004     19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:05.004     19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:05.004     19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH
00:26:05.004     19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:05.004   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:26:05.004     19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s
00:26:05.004    19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:26:05.004    19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:26:05.004    19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:26:05.004    19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:26:05.004    19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:26:05.004    19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:26:05.004    19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:26:05.004    19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:26:05.004    19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:26:05.004     19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:26:05.004    19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:26:05.004    19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:26:05.004    19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:26:05.004    19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:26:05.004    19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:26:05.004    19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:26:05.004    19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:26:05.004     19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob
00:26:05.004     19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:26:05.004     19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:26:05.005     19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:26:05.005      19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:05.005      19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:05.005      19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:05.005      19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH
00:26:05.005      19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:05.005    19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0
00:26:05.005    19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:26:05.005    19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:26:05.005    19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:26:05.005    19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:26:05.005    19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:26:05.005    19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:26:05.005  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:26:05.005    19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:26:05.005    19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:26:05.005    19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0
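The "integer expression expected" message above comes from common.sh line 33 testing an empty value with -eq; under set +e the script simply carries on. A hedged illustration of the usual way to keep such a test quiet (the variable and option names here are placeholders, not the ones common.sh uses):

    SOME_FLAG=""
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then   # ':-0' keeps the operand numeric when empty
      NVMF_APP+=(--some-flag)              # placeholder option for illustration
    fi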
00:26:05.005   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:26:05.005   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit
00:26:05.005   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:26:05.005   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:26:05.005   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs
00:26:05.005   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no
00:26:05.005   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns
00:26:05.005   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:05.005   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:05.005    19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:05.005   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:26:05.005   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:26:05.005   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:26:05.005   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:26:05.005   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:26:05.005   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@460 -- # nvmf_veth_init
00:26:05.005   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:26:05.005   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:26:05.005   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:26:05.005   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:26:05.005   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:26:05.005   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:26:05.005   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:26:05.005   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:26:05.005   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:26:05.005   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:26:05.005   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:26:05.005   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:26:05.005   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:26:05.005   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:26:05.005   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:26:05.005   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:26:05.005   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:26:05.005  Cannot find device "nvmf_init_br"
00:26:05.005   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true
00:26:05.005   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:26:05.005  Cannot find device "nvmf_init_br2"
00:26:05.005   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true
00:26:05.005   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:26:05.005  Cannot find device "nvmf_tgt_br"
00:26:05.005   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true
00:26:05.005   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:26:05.005  Cannot find device "nvmf_tgt_br2"
00:26:05.005   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true
00:26:05.005   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:26:05.005  Cannot find device "nvmf_init_br"
00:26:05.005   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true
00:26:05.005   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:26:05.005  Cannot find device "nvmf_init_br2"
00:26:05.005   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true
00:26:05.005   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:26:05.005  Cannot find device "nvmf_tgt_br"
00:26:05.005   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true
00:26:05.005   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:26:05.005  Cannot find device "nvmf_tgt_br2"
00:26:05.005   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true
00:26:05.005   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:26:05.005  Cannot find device "nvmf_br"
00:26:05.005   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true
00:26:05.005   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:26:05.005  Cannot find device "nvmf_init_if"
00:26:05.005   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true
00:26:05.005   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:26:05.005  Cannot find device "nvmf_init_if2"
00:26:05.005   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true
00:26:05.005   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:26:05.005  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:26:05.005   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true
00:26:05.005   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:26:05.005  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:26:05.005   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true
00:26:05.005   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:26:05.005   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:26:05.005   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:26:05.005   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:26:05.005   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:26:05.005   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:26:05.264   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:26:05.264   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:26:05.264   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:26:05.264   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:26:05.264   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:26:05.264   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:26:05.264   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:26:05.264   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:26:05.265   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:26:05.265   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:26:05.265   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:26:05.265   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:26:05.265   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:26:05.265   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:26:05.265   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:26:05.265   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:26:05.265   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:26:05.265   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:26:05.265   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:26:05.265   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:26:05.265   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:26:05.265   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:26:05.265   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:26:05.265   19:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:26:05.265   19:10:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:26:05.265   19:10:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
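Each firewall rule above is added through an ipts helper that appends a SPDK_NVMF:-prefixed comment, which is what lets the iptr cleanup earlier in this log (iptables-save | grep -v SPDK_NVMF | iptables-restore) strip exactly these rules on teardown. A sketch of that tagging idea, with the helpers' argument handling simplified:

    ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
    iptr() { iptables-save | grep -v SPDK_NVMF | iptables-restore; }
    ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # tagged rule
    iptr                                                            # later: drop only the tagged rules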
00:26:05.265   19:10:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:26:05.265  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:26:05.265  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms
00:26:05.265  
00:26:05.265  --- 10.0.0.3 ping statistics ---
00:26:05.265  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:05.265  rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms
00:26:05.265   19:10:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:26:05.265  PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:26:05.265  64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.069 ms
00:26:05.265  
00:26:05.265  --- 10.0.0.4 ping statistics ---
00:26:05.265  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:05.265  rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms
00:26:05.265   19:10:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:26:05.265  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:26:05.265  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms
00:26:05.265  
00:26:05.265  --- 10.0.0.1 ping statistics ---
00:26:05.265  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:05.265  rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms
00:26:05.265   19:10:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:26:05.265  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:26:05.265  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms
00:26:05.265  
00:26:05.265  --- 10.0.0.2 ping statistics ---
00:26:05.265  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:05.265  rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms
00:26:05.265   19:10:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:26:05.265   19:10:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@461 -- # return 0
00:26:05.265   19:10:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:26:05.265   19:10:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:26:05.265   19:10:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:26:05.265   19:10:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:26:05.265   19:10:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:26:05.265   19:10:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:26:05.265   19:10:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:26:05.265   19:10:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]]
00:26:05.265   19:10:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt
00:26:05.265   19:10:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable
00:26:05.265   19:10:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:26:05.265   19:10:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:26:05.265   19:10:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=109586
00:26:05.265   19:10:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:26:05.265   19:10:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 109586
00:26:05.265   19:10:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 109586 ']'
00:26:05.265   19:10:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:05.265   19:10:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100
00:26:05.265   19:10:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:05.265  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:05.265   19:10:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable
00:26:05.265   19:10:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:26:05.524  [2024-12-13 19:10:37.109511] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:26:05.524  [2024-12-13 19:10:37.109632] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:05.524  [2024-12-13 19:10:37.259933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:26:05.524  [2024-12-13 19:10:37.306258] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:26:05.524  [2024-12-13 19:10:37.306330] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:05.524  [2024-12-13 19:10:37.306346] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:26:05.524  [2024-12-13 19:10:37.306358] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:26:05.524  [2024-12-13 19:10:37.306368] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:26:05.524  [2024-12-13 19:10:37.307692] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:26:05.524  [2024-12-13 19:10:37.307845] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:26:05.524  [2024-12-13 19:10:37.307984] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:26:05.524  [2024-12-13 19:10:37.307994] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:26:05.782   19:10:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:05.782   19:10:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0
00:26:05.782   19:10:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:26:06.048  [2024-12-13 19:10:37.736043] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
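The target is started inside the nvmf_tgt_ns_spdk namespace, the script waits for its RPC socket, then creates the TCP transport. A hedged sketch of that bring-up; the readiness poll here is an assumption, the real waitforlisten helper in autotest_common.sh is more involved:

    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until ./scripts/rpc.py rpc_get_methods &>/dev/null; do sleep 0.5; done
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192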
00:26:06.048   19:10:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt
00:26:06.048   19:10:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable
00:26:06.048   19:10:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:26:06.048   19:10:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
00:26:06.325  Malloc1
00:26:06.325   19:10:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:26:06.584   19:10:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:26:07.149   19:10:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:26:07.149  [2024-12-13 19:10:38.903994] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:26:07.149   19:10:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
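The lines above bring the target to a servable state: a 64 MB Malloc1 bdev, subsystem cnode1, the bdev attached as a namespace, and data plus discovery listeners on 10.0.0.3:4420. The same RPC sequence, gathered from the xtrace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 64 512 -b Malloc1                        # 64 MB, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420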
00:26:07.407   19:10:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme
00:26:07.407   19:10:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096
00:26:07.407   19:10:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096
00:26:07.407   19:10:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:26:07.407   19:10:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:26:07.407   19:10:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers
00:26:07.407   19:10:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:26:07.407   19:10:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift
00:26:07.407   19:10:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib=
00:26:07.407   19:10:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:26:07.407    19:10:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:26:07.407    19:10:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan
00:26:07.407    19:10:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:26:07.407   19:10:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=
00:26:07.407   19:10:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:26:07.407   19:10:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:26:07.407    19:10:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:26:07.407    19:10:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:26:07.407    19:10:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:26:07.407   19:10:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=
00:26:07.407   19:10:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:26:07.407   19:10:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme'
00:26:07.407   19:10:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096
00:26:07.664  test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:26:07.664  fio-3.35
00:26:07.664  Starting 1 thread
00:26:10.194  
00:26:10.194  test: (groupid=0, jobs=1): err= 0: pid=109698: Fri Dec 13 19:10:41 2024
00:26:10.194    read: IOPS=7540, BW=29.5MiB/s (30.9MB/s)(59.1MiB/2006msec)
00:26:10.194      slat (nsec): min=1812, max=962177, avg=2497.87, stdev=8582.64
00:26:10.194      clat (usec): min=3136, max=13015, avg=8896.34, stdev=1039.82
00:26:10.194       lat (usec): min=3193, max=13017, avg=8898.84, stdev=1039.90
00:26:10.194      clat percentiles (usec):
00:26:10.194       |  1.00th=[ 6456],  5.00th=[ 6915], 10.00th=[ 7242], 20.00th=[ 8160],
00:26:10.194       | 30.00th=[ 8586], 40.00th=[ 8848], 50.00th=[ 8979], 60.00th=[ 9241],
00:26:10.194       | 70.00th=[ 9372], 80.00th=[ 9634], 90.00th=[10028], 95.00th=[10290],
00:26:10.194       | 99.00th=[11207], 99.50th=[11863], 99.90th=[12780], 99.95th=[12911],
00:26:10.194       | 99.99th=[13042]
00:26:10.194     bw (  KiB/s): min=27952, max=32928, per=99.82%, avg=30110.00, stdev=2076.45, samples=4
00:26:10.194     iops        : min= 6988, max= 8232, avg=7527.50, stdev=519.11, samples=4
00:26:10.194    write: IOPS=7526, BW=29.4MiB/s (30.8MB/s)(59.0MiB/2006msec); 0 zone resets
00:26:10.194      slat (nsec): min=1845, max=235697, avg=2472.92, stdev=2573.43
00:26:10.194      clat (usec): min=2296, max=12281, avg=8016.16, stdev=921.02
00:26:10.194       lat (usec): min=2308, max=12283, avg=8018.63, stdev=920.93
00:26:10.194      clat percentiles (usec):
00:26:10.194       |  1.00th=[ 5800],  5.00th=[ 6259], 10.00th=[ 6587], 20.00th=[ 7373],
00:26:10.194       | 30.00th=[ 7767], 40.00th=[ 7963], 50.00th=[ 8160], 60.00th=[ 8356],
00:26:10.194       | 70.00th=[ 8455], 80.00th=[ 8717], 90.00th=[ 8979], 95.00th=[ 9241],
00:26:10.194       | 99.00th=[10028], 99.50th=[10552], 99.90th=[11207], 99.95th=[11600],
00:26:10.194       | 99.99th=[11731]
00:26:10.194     bw (  KiB/s): min=28928, max=33152, per=99.93%, avg=30086.00, stdev=2051.53, samples=4
00:26:10.194     iops        : min= 7232, max= 8288, avg=7521.50, stdev=512.88, samples=4
00:26:10.194    lat (msec)   : 4=0.11%, 10=94.05%, 20=5.84%
00:26:10.194    cpu          : usr=68.83%, sys=23.19%, ctx=31, majf=0, minf=7
00:26:10.194    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8%
00:26:10.194       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:10.194       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:26:10.194       issued rwts: total=15127,15098,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:10.194       latency   : target=0, window=0, percentile=100.00%, depth=128
00:26:10.194  
00:26:10.194  Run status group 0 (all jobs):
00:26:10.194     READ: bw=29.5MiB/s (30.9MB/s), 29.5MiB/s-29.5MiB/s (30.9MB/s-30.9MB/s), io=59.1MiB (62.0MB), run=2006-2006msec
00:26:10.194    WRITE: bw=29.4MiB/s (30.8MB/s), 29.4MiB/s-29.4MiB/s (30.8MB/s-30.8MB/s), io=59.0MiB (61.8MB), run=2006-2006msec
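The run above is plain fio with the SPDK NVMe ioengine LD_PRELOADed; the --filename string carries the fabrics address instead of a block-device path. A minimal usage sketch mirroring the invocation traced above:

    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme \
      /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096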
00:26:10.194   19:10:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1'
00:26:10.194   19:10:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1'
00:26:10.194   19:10:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:26:10.194   19:10:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:26:10.194   19:10:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers
00:26:10.194   19:10:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:26:10.194   19:10:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift
00:26:10.194   19:10:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib=
00:26:10.194   19:10:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:26:10.194    19:10:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:26:10.194    19:10:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan
00:26:10.194    19:10:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:26:10.194   19:10:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=
00:26:10.194   19:10:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:26:10.194   19:10:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:26:10.194    19:10:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:26:10.194    19:10:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:26:10.194    19:10:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:26:10.194   19:10:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=
00:26:10.194   19:10:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:26:10.195   19:10:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme'
00:26:10.195   19:10:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1'
00:26:10.195  test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128
00:26:10.195  fio-3.35
00:26:10.195  Starting 1 thread
00:26:12.722  
00:26:12.722  test: (groupid=0, jobs=1): err= 0: pid=109742: Fri Dec 13 19:10:44 2024
00:26:12.722    read: IOPS=8838, BW=138MiB/s (145MB/s)(277MiB/2007msec)
00:26:12.722      slat (usec): min=2, max=142, avg= 3.36, stdev= 2.41
00:26:12.722      clat (usec): min=2274, max=16186, avg=8599.84, stdev=2130.57
00:26:12.722       lat (usec): min=2277, max=16189, avg=8603.19, stdev=2130.58
00:26:12.722      clat percentiles (usec):
00:26:12.722       |  1.00th=[ 4424],  5.00th=[ 5276], 10.00th=[ 5800], 20.00th=[ 6652],
00:26:12.722       | 30.00th=[ 7308], 40.00th=[ 7963], 50.00th=[ 8586], 60.00th=[ 9110],
00:26:12.722       | 70.00th=[ 9896], 80.00th=[10421], 90.00th=[11338], 95.00th=[11994],
00:26:12.722       | 99.00th=[13829], 99.50th=[14877], 99.90th=[16057], 99.95th=[16057],
00:26:12.722       | 99.99th=[16188]
00:26:12.722     bw (  KiB/s): min=62144, max=80544, per=50.42%, avg=71304.00, stdev=8648.11, samples=4
00:26:12.722     iops        : min= 3884, max= 5034, avg=4456.50, stdev=540.51, samples=4
00:26:12.722    write: IOPS=5195, BW=81.2MiB/s (85.1MB/s)(145MiB/1783msec); 0 zone resets
00:26:12.722      slat (usec): min=30, max=327, avg=33.93, stdev= 7.85
00:26:12.722      clat (usec): min=4054, max=21941, avg=10504.98, stdev=1993.02
00:26:12.722       lat (usec): min=4085, max=21973, avg=10538.91, stdev=1992.95
00:26:12.722      clat percentiles (usec):
00:26:12.722       |  1.00th=[ 6915],  5.00th=[ 7767], 10.00th=[ 8160], 20.00th=[ 8848],
00:26:12.722       | 30.00th=[ 9372], 40.00th=[ 9765], 50.00th=[10290], 60.00th=[10814],
00:26:12.722       | 70.00th=[11338], 80.00th=[11994], 90.00th=[13304], 95.00th=[14091],
00:26:12.722       | 99.00th=[15664], 99.50th=[16450], 99.90th=[21103], 99.95th=[21627],
00:26:12.722       | 99.99th=[21890]
00:26:12.722     bw (  KiB/s): min=64896, max=83552, per=89.15%, avg=74104.00, stdev=8676.72, samples=4
00:26:12.722     iops        : min= 4056, max= 5222, avg=4631.50, stdev=542.29, samples=4
00:26:12.722    lat (msec)   : 4=0.22%, 10=62.43%, 20=37.25%, 50=0.10%
00:26:12.722    cpu          : usr=73.78%, sys=17.85%, ctx=7, majf=0, minf=3
00:26:12.722    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6%
00:26:12.722       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:12.722       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:26:12.722       issued rwts: total=17739,9263,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:12.722       latency   : target=0, window=0, percentile=100.00%, depth=128
00:26:12.722  
00:26:12.722  Run status group 0 (all jobs):
00:26:12.722     READ: bw=138MiB/s (145MB/s), 138MiB/s-138MiB/s (145MB/s-145MB/s), io=277MiB (291MB), run=2007-2007msec
00:26:12.722    WRITE: bw=81.2MiB/s (85.1MB/s), 81.2MiB/s-81.2MiB/s (85.1MB/s-85.1MB/s), io=145MiB (152MB), run=1783-1783msec
00:26:12.722   19:10:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:26:12.722   19:10:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']'
00:26:12.722   19:10:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs))
00:26:12.722    19:10:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs
00:26:12.722    19:10:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # bdfs=()
00:26:12.722    19:10:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # local bdfs
00:26:12.722    19:10:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:26:12.722     19:10:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:26:12.722     19:10:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:26:12.980    19:10:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1500 -- # (( 2 == 0 ))
00:26:12.980    19:10:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0
00:26:12.980   19:10:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 -i 10.0.0.3
00:26:13.237  Nvme0n1
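(Stripped of the xtrace noise, the controller attach above reduces to two commands. A condensed sketch; the extra -i argument visible in the log is passed through from fio.sh unchanged and is omitted here:)
   # list local NVMe BDFs from the generated config, then attach the first one as bdev "Nvme0"
   bdfs=($(scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))   # -> 0000:00:10.0 0000:00:11.0
   scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a "${bdfs[0]}"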
00:26:13.237    19:10:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0
00:26:13.500   19:10:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=873e8e15-b460-41dc-89dc-ab71b7b4fce2
00:26:13.500   19:10:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 873e8e15-b460-41dc-89dc-ab71b7b4fce2
00:26:13.500   19:10:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=873e8e15-b460-41dc-89dc-ab71b7b4fce2
00:26:13.500   19:10:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info
00:26:13.500   19:10:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc
00:26:13.500   19:10:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs
00:26:13.500    19:10:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:26:13.757   19:10:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[
00:26:13.757    {
00:26:13.757      "base_bdev": "Nvme0n1",
00:26:13.757      "block_size": 4096,
00:26:13.757      "cluster_size": 1073741824,
00:26:13.757      "free_clusters": 4,
00:26:13.757      "name": "lvs_0",
00:26:13.757      "total_data_clusters": 4,
00:26:13.757      "uuid": "873e8e15-b460-41dc-89dc-ab71b7b4fce2"
00:26:13.757    }
00:26:13.757  ]'
00:26:13.757    19:10:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="873e8e15-b460-41dc-89dc-ab71b7b4fce2") .free_clusters'
00:26:13.757   19:10:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=4
00:26:13.757    19:10:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="873e8e15-b460-41dc-89dc-ab71b7b4fce2") .cluster_size'
00:26:14.014   19:10:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=1073741824
00:26:14.014   19:10:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=4096
00:26:14.014  4096
00:26:14.014   19:10:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 4096
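(The 4096 printed above is get_lvs_free_mb's arithmetic on the lvstore JSON: free_clusters x cluster_size, converted to MiB. A minimal sketch of the same computation; the function name is illustrative, the jq filters are the ones shown in the log:)
   lvs_free_mb() {                        # sketch of what get_lvs_free_mb computes
       local uuid=$1 info fc cs
       info=$(scripts/rpc.py bdev_lvol_get_lvstores)
       fc=$(jq ".[] | select(.uuid==\"$uuid\") .free_clusters" <<< "$info")
       cs=$(jq ".[] | select(.uuid==\"$uuid\") .cluster_size"  <<< "$info")
       echo $(( fc * cs / 1024 / 1024 ))  # 4 * 1073741824 / 1048576 = 4096
   }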
00:26:14.014   19:10:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096
00:26:14.014  74930a25-14b9-4e2c-ab29-02256701200c
00:26:14.272   19:10:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001
00:26:14.272   19:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0
00:26:14.529   19:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420
00:26:14.787   19:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 	traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096
00:26:14.787   19:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 	traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096
00:26:14.787   19:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:26:14.787   19:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:26:14.787   19:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers
00:26:14.787   19:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:26:14.787   19:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift
00:26:14.787   19:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib=
00:26:14.787   19:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:26:14.787    19:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan
00:26:14.787    19:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:26:14.787    19:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:26:15.045   19:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=
00:26:15.045   19:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:26:15.045   19:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:26:15.045    19:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:26:15.045    19:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:26:15.045    19:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:26:15.045   19:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=
00:26:15.045   19:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:26:15.045   19:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme'
00:26:15.045   19:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 	traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096
00:26:15.045  test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:26:15.045  fio-3.35
00:26:15.045  Starting 1 thread
00:26:17.577  
00:26:17.577  test: (groupid=0, jobs=1): err= 0: pid=109899: Fri Dec 13 19:10:49 2024
00:26:17.577    read: IOPS=5759, BW=22.5MiB/s (23.6MB/s)(45.2MiB/2009msec)
00:26:17.577      slat (nsec): min=1865, max=494925, avg=3074.49, stdev=7088.54
00:26:17.577      clat (usec): min=4652, max=21037, avg=11697.89, stdev=1041.72
00:26:17.577       lat (usec): min=4660, max=21039, avg=11700.96, stdev=1041.49
00:26:17.577      clat percentiles (usec):
00:26:17.577       |  1.00th=[ 9503],  5.00th=[10159], 10.00th=[10552], 20.00th=[10814],
00:26:17.577       | 30.00th=[11207], 40.00th=[11469], 50.00th=[11600], 60.00th=[11863],
00:26:17.577       | 70.00th=[12125], 80.00th=[12518], 90.00th=[12911], 95.00th=[13304],
00:26:17.577       | 99.00th=[14222], 99.50th=[14484], 99.90th=[18744], 99.95th=[19530],
00:26:17.577       | 99.99th=[20055]
00:26:17.577     bw (  KiB/s): min=21848, max=23776, per=99.81%, avg=22992.00, stdev=839.68, samples=4
00:26:17.577     iops        : min= 5462, max= 5944, avg=5748.00, stdev=209.92, samples=4
00:26:17.577    write: IOPS=5745, BW=22.4MiB/s (23.5MB/s)(45.1MiB/2009msec); 0 zone resets
00:26:17.577      slat (nsec): min=1923, max=437134, avg=3186.75, stdev=5083.76
00:26:17.577      clat (usec): min=3124, max=19603, avg=10476.55, stdev=935.95
00:26:17.577       lat (usec): min=3136, max=19606, avg=10479.74, stdev=935.92
00:26:17.577      clat percentiles (usec):
00:26:17.577       |  1.00th=[ 8586],  5.00th=[ 9110], 10.00th=[ 9372], 20.00th=[ 9765],
00:26:17.577       | 30.00th=[10028], 40.00th=[10290], 50.00th=[10421], 60.00th=[10683],
00:26:17.577       | 70.00th=[10945], 80.00th=[11207], 90.00th=[11600], 95.00th=[11863],
00:26:17.577       | 99.00th=[12518], 99.50th=[12780], 99.90th=[17433], 99.95th=[18482],
00:26:17.577       | 99.99th=[19006]
00:26:17.577     bw (  KiB/s): min=22784, max=23264, per=99.99%, avg=22978.00, stdev=212.30, samples=4
00:26:17.577     iops        : min= 5696, max= 5816, avg=5744.50, stdev=53.08, samples=4
00:26:17.577    lat (msec)   : 4=0.03%, 10=16.48%, 20=83.48%, 50=0.01%
00:26:17.577    cpu          : usr=69.07%, sys=23.71%, ctx=8, majf=0, minf=16
00:26:17.577    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7%
00:26:17.577       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:17.577       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:26:17.577       issued rwts: total=11570,11542,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:17.577       latency   : target=0, window=0, percentile=100.00%, depth=128
00:26:17.577  
00:26:17.577  Run status group 0 (all jobs):
00:26:17.577     READ: bw=22.5MiB/s (23.6MB/s), 22.5MiB/s-22.5MiB/s (23.6MB/s-23.6MB/s), io=45.2MiB (47.4MB), run=2009-2009msec
00:26:17.577    WRITE: bw=22.4MiB/s (23.5MB/s), 22.4MiB/s-22.4MiB/s (23.5MB/s-23.5MB/s), io=45.1MiB (47.3MB), run=2009-2009msec
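(The fio_plugin wrapper whose xtrace precedes this run checks whether the spdk_nvme ioengine was linked against ASan so the matching sanitizer runtime can be preloaded ahead of the plugin; condensed, it does roughly the following. This is a sketch of the logic visible in the log with simplified variable names, and $SPDK_DIR stands in for the repo path:)
   plugin=$SPDK_DIR/build/fio/spdk_nvme
   asan_lib=
   for sanitizer in libasan libclang_rt.asan; do
       lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
       [[ -n "$lib" ]] && { asan_lib=$lib; break; }
   done
   # no sanitizer library was found here, so only the ioengine itself lands in LD_PRELOAD
   LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$SPDK_DIR/app/fio/nvme/example_config.fio" \
       '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096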
00:26:17.577   19:10:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:26:17.836    19:10:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0
00:26:18.094   19:10:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=d5865b98-74e8-4d4b-baff-f5a67428494a
00:26:18.094   19:10:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb d5865b98-74e8-4d4b-baff-f5a67428494a
00:26:18.094   19:10:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=d5865b98-74e8-4d4b-baff-f5a67428494a
00:26:18.094   19:10:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info
00:26:18.094   19:10:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc
00:26:18.094   19:10:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs
00:26:18.094    19:10:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:26:18.352   19:10:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[
00:26:18.352    {
00:26:18.352      "base_bdev": "Nvme0n1",
00:26:18.352      "block_size": 4096,
00:26:18.352      "cluster_size": 1073741824,
00:26:18.352      "free_clusters": 0,
00:26:18.352      "name": "lvs_0",
00:26:18.352      "total_data_clusters": 4,
00:26:18.352      "uuid": "873e8e15-b460-41dc-89dc-ab71b7b4fce2"
00:26:18.352    },
00:26:18.352    {
00:26:18.352      "base_bdev": "74930a25-14b9-4e2c-ab29-02256701200c",
00:26:18.352      "block_size": 4096,
00:26:18.352      "cluster_size": 4194304,
00:26:18.352      "free_clusters": 1022,
00:26:18.352      "name": "lvs_n_0",
00:26:18.352      "total_data_clusters": 1022,
00:26:18.352      "uuid": "d5865b98-74e8-4d4b-baff-f5a67428494a"
00:26:18.352    }
00:26:18.352  ]'
00:26:18.352    19:10:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="d5865b98-74e8-4d4b-baff-f5a67428494a") .free_clusters'
00:26:18.610   19:10:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=1022
00:26:18.610    19:10:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="d5865b98-74e8-4d4b-baff-f5a67428494a") .cluster_size'
00:26:18.610   19:10:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=4194304
00:26:18.610   19:10:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=4088
00:26:18.610  4088
00:26:18.610   19:10:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 4088
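(Same arithmetic as before, now on the nested lvstore: 1022 free 4 MiB clusters on top of the 4096 MiB lvol give the 4088 MiB used by the next create call. Condensed, the stack at this point is:)
   # Nvme0n1 (local NVMe)
   #   -> lvs_0   : 1 GiB clusters, 4 total     -> lbd_0      (4096 MiB lvol)
   #     -> lvs_n_0: 4 MiB clusters, 1022 free  -> lbd_nest_0 (4088 MiB, created next)
   echo $(( 1022 * 4194304 / 1048576 ))   # 4088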
00:26:18.610   19:10:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088
00:26:18.869  8d256e08-426b-4908-b6be-6f0524537fae
00:26:18.869   19:10:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001
00:26:19.128   19:10:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0
00:26:19.386   19:10:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420
00:26:19.645   19:10:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 	traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096
00:26:19.645   19:10:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 	traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096
00:26:19.645   19:10:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:26:19.645   19:10:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:26:19.645   19:10:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers
00:26:19.645   19:10:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:26:19.645   19:10:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift
00:26:19.645   19:10:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib=
00:26:19.645   19:10:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:26:19.645    19:10:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan
00:26:19.645    19:10:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:26:19.645    19:10:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:26:19.645   19:10:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=
00:26:19.645   19:10:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:26:19.645   19:10:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:26:19.645    19:10:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:26:19.645    19:10:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:26:19.645    19:10:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:26:19.645   19:10:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=
00:26:19.645   19:10:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:26:19.645   19:10:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme'
00:26:19.645   19:10:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 	traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096
00:26:19.645  test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:26:19.645  fio-3.35
00:26:19.645  Starting 1 thread
00:26:22.177  
00:26:22.177  test: (groupid=0, jobs=1): err= 0: pid=110027: Fri Dec 13 19:10:53 2024
00:26:22.177    read: IOPS=5446, BW=21.3MiB/s (22.3MB/s)(42.7MiB/2009msec)
00:26:22.177      slat (nsec): min=1869, max=494767, avg=3023.94, stdev=6887.56
00:26:22.177      clat (usec): min=5169, max=20343, avg=12379.90, stdev=1110.36
00:26:22.177       lat (usec): min=5209, max=20346, avg=12382.93, stdev=1110.14
00:26:22.177      clat percentiles (usec):
00:26:22.177       |  1.00th=[10028],  5.00th=[10814], 10.00th=[11076], 20.00th=[11469],
00:26:22.177       | 30.00th=[11863], 40.00th=[12125], 50.00th=[12387], 60.00th=[12649],
00:26:22.177       | 70.00th=[12911], 80.00th=[13304], 90.00th=[13698], 95.00th=[14222],
00:26:22.177       | 99.00th=[15139], 99.50th=[15533], 99.90th=[19268], 99.95th=[19530],
00:26:22.177       | 99.99th=[19792]
00:26:22.177     bw (  KiB/s): min=20256, max=22640, per=99.77%, avg=21736.00, stdev=1028.70, samples=4
00:26:22.177     iops        : min= 5064, max= 5660, avg=5434.00, stdev=257.17, samples=4
00:26:22.177    write: IOPS=5423, BW=21.2MiB/s (22.2MB/s)(42.6MiB/2009msec); 0 zone resets
00:26:22.177      slat (nsec): min=1902, max=357477, avg=3078.09, stdev=4574.98
00:26:22.177      clat (usec): min=3091, max=19452, avg=11099.73, stdev=994.82
00:26:22.177       lat (usec): min=3105, max=19454, avg=11102.80, stdev=994.76
00:26:22.177      clat percentiles (usec):
00:26:22.177       |  1.00th=[ 8979],  5.00th=[ 9634], 10.00th=[ 9896], 20.00th=[10290],
00:26:22.177       | 30.00th=[10552], 40.00th=[10814], 50.00th=[11076], 60.00th=[11338],
00:26:22.177       | 70.00th=[11600], 80.00th=[11863], 90.00th=[12256], 95.00th=[12649],
00:26:22.177       | 99.00th=[13304], 99.50th=[13698], 99.90th=[16188], 99.95th=[18220],
00:26:22.177       | 99.99th=[19530]
00:26:22.177     bw (  KiB/s): min=21192, max=22144, per=99.96%, avg=21686.00, stdev=422.08, samples=4
00:26:22.177     iops        : min= 5298, max= 5536, avg=5421.50, stdev=105.52, samples=4
00:26:22.177    lat (msec)   : 4=0.02%, 10=6.37%, 20=93.60%, 50=0.01%
00:26:22.177    cpu          : usr=75.25%, sys=19.32%, ctx=24, majf=0, minf=16
00:26:22.177    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7%
00:26:22.177       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:22.177       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:26:22.177       issued rwts: total=10942,10896,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:22.177       latency   : target=0, window=0, percentile=100.00%, depth=128
00:26:22.177  
00:26:22.177  Run status group 0 (all jobs):
00:26:22.177     READ: bw=21.3MiB/s (22.3MB/s), 21.3MiB/s-21.3MiB/s (22.3MB/s-22.3MB/s), io=42.7MiB (44.8MB), run=2009-2009msec
00:26:22.177    WRITE: bw=21.2MiB/s (22.2MB/s), 21.2MiB/s-21.2MiB/s (22.2MB/s-22.2MB/s), io=42.6MiB (44.6MB), run=2009-2009msec
00:26:22.177   19:10:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3
00:26:22.436   19:10:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync
00:26:22.436   19:10:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0
00:26:22.694   19:10:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0
00:26:22.952   19:10:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0
00:26:23.210   19:10:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0
00:26:23.471   19:10:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0
00:26:23.733   19:10:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:26:23.733   19:10:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state
00:26:23.733   19:10:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini
00:26:23.733   19:10:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup
00:26:23.733   19:10:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync
00:26:23.733   19:10:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:26:23.733   19:10:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e
00:26:23.733   19:10:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20}
00:26:23.733   19:10:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:26:23.733  rmmod nvme_tcp
00:26:23.996  rmmod nvme_fabrics
00:26:23.996  rmmod nvme_keyring
00:26:23.996   19:10:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:26:23.996   19:10:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e
00:26:23.996   19:10:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0
00:26:23.996   19:10:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 109586 ']'
00:26:23.996   19:10:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 109586
00:26:23.996   19:10:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 109586 ']'
00:26:23.996   19:10:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 109586
00:26:23.996    19:10:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname
00:26:23.996   19:10:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:23.996    19:10:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 109586
00:26:23.996  killing process with pid 109586
00:26:23.996   19:10:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:26:23.996   19:10:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:26:23.996   19:10:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 109586'
00:26:23.996   19:10:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 109586
00:26:23.996   19:10:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 109586
00:26:24.259   19:10:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:26:24.259   19:10:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:26:24.259   19:10:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:26:24.259   19:10:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr
00:26:24.259   19:10:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save
00:26:24.259   19:10:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:26:24.259   19:10:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore
00:26:24.259   19:10:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:26:24.259   19:10:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:26:24.259   19:10:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:26:24.259   19:10:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:26:24.259   19:10:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:26:24.259   19:10:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:26:24.259   19:10:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:26:24.259   19:10:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:26:24.259   19:10:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:26:24.259   19:10:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:26:24.259   19:10:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:26:24.259   19:10:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:26:24.259   19:10:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:26:24.259   19:10:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:26:24.259   19:10:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:26:24.259   19:10:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns
00:26:24.259   19:10:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:24.259   19:10:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:24.259    19:10:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:24.518   19:10:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0
00:26:24.518  ************************************
00:26:24.518  END TEST nvmf_fio_host
00:26:24.518  ************************************
00:26:24.518  
00:26:24.518  real	0m19.666s
00:26:24.518  user	1m25.935s
00:26:24.518  sys	0m4.634s
00:26:24.518   19:10:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable
00:26:24.518   19:10:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:26:24.518   19:10:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp
00:26:24.518   19:10:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:26:24.518   19:10:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:26:24.518   19:10:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:26:24.518  ************************************
00:26:24.518  START TEST nvmf_failover
00:26:24.518  ************************************
00:26:24.518   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp
00:26:24.518  * Looking for test storage...
00:26:24.518  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
00:26:24.518    19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:26:24.518     19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version
00:26:24.518     19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:26:24.518    19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:26:24.518    19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:26:24.518    19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l
00:26:24.518    19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l
00:26:24.518    19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-:
00:26:24.518    19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1
00:26:24.519    19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-:
00:26:24.519    19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2
00:26:24.519    19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<'
00:26:24.519    19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2
00:26:24.519    19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1
00:26:24.519    19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:26:24.519    19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in
00:26:24.519    19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1
00:26:24.519    19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 ))
00:26:24.519    19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:26:24.519     19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1
00:26:24.519     19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1
00:26:24.519     19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:26:24.519     19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1
00:26:24.519    19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1
00:26:24.519     19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2
00:26:24.519     19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2
00:26:24.519     19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:26:24.519     19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2
00:26:24.519    19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2
00:26:24.519    19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:26:24.519    19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:26:24.519    19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0
00:26:24.519    19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:26:24.519    19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:26:24.519  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:24.519  		--rc genhtml_branch_coverage=1
00:26:24.519  		--rc genhtml_function_coverage=1
00:26:24.519  		--rc genhtml_legend=1
00:26:24.519  		--rc geninfo_all_blocks=1
00:26:24.519  		--rc geninfo_unexecuted_blocks=1
00:26:24.519  		
00:26:24.519  		'
00:26:24.519    19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:26:24.519  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:24.519  		--rc genhtml_branch_coverage=1
00:26:24.519  		--rc genhtml_function_coverage=1
00:26:24.519  		--rc genhtml_legend=1
00:26:24.519  		--rc geninfo_all_blocks=1
00:26:24.519  		--rc geninfo_unexecuted_blocks=1
00:26:24.519  		
00:26:24.519  		'
00:26:24.519    19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:26:24.519  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:24.519  		--rc genhtml_branch_coverage=1
00:26:24.519  		--rc genhtml_function_coverage=1
00:26:24.519  		--rc genhtml_legend=1
00:26:24.519  		--rc geninfo_all_blocks=1
00:26:24.519  		--rc geninfo_unexecuted_blocks=1
00:26:24.519  		
00:26:24.519  		'
00:26:24.519    19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:26:24.519  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:24.519  		--rc genhtml_branch_coverage=1
00:26:24.519  		--rc genhtml_function_coverage=1
00:26:24.519  		--rc genhtml_legend=1
00:26:24.519  		--rc geninfo_all_blocks=1
00:26:24.519  		--rc geninfo_unexecuted_blocks=1
00:26:24.519  		
00:26:24.519  		'
00:26:24.519   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:26:24.519     19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s
00:26:24.778    19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:26:24.778    19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:26:24.778    19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:26:24.778    19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:26:24.778    19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:26:24.778    19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:26:24.778    19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:26:24.778    19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:26:24.778    19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:26:24.778     19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:26:24.778    19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:26:24.778    19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:26:24.778    19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:26:24.778    19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:26:24.778    19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:26:24.778    19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:26:24.778    19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:26:24.778     19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob
00:26:24.778     19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:26:24.778     19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:26:24.778     19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:26:24.778      19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:24.778      19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:24.778      19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:24.778      19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH
00:26:24.778      19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:24.778    19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0
00:26:24.778    19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:26:24.779    19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:26:24.779    19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:26:24.779    19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:26:24.779    19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:26:24.779    19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:26:24.779  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:26:24.779    19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:26:24.779    19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:26:24.779    19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0
00:26:24.779   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64
00:26:24.779   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:26:24.779   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:26:24.779   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:26:24.779   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit
00:26:24.779   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:26:24.779   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:26:24.779   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs
00:26:24.779   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no
00:26:24.779   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns
00:26:24.779   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:24.779   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:24.779    19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:24.779   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:26:24.779   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:26:24.779   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:26:24.779   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:26:24.779   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:26:24.779   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@460 -- # nvmf_veth_init
00:26:24.779   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:26:24.779   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:26:24.779   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:26:24.779   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:26:24.779   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:26:24.779   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:26:24.779   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:26:24.779   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:26:24.779   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:26:24.779   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:26:24.779   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:26:24.779   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:26:24.779   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:26:24.779   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:26:24.779   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:26:24.779   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:26:24.779   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:26:24.779  Cannot find device "nvmf_init_br"
00:26:24.779   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true
00:26:24.779   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:26:24.779  Cannot find device "nvmf_init_br2"
00:26:24.779   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true
00:26:24.779   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:26:24.779  Cannot find device "nvmf_tgt_br"
00:26:24.779   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true
00:26:24.779   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:26:24.779  Cannot find device "nvmf_tgt_br2"
00:26:24.779   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true
00:26:24.779   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:26:24.779  Cannot find device "nvmf_init_br"
00:26:24.779   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true
00:26:24.779   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:26:24.779  Cannot find device "nvmf_init_br2"
00:26:24.779   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true
00:26:24.779   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:26:24.779  Cannot find device "nvmf_tgt_br"
00:26:24.779   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true
00:26:24.779   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:26:24.779  Cannot find device "nvmf_tgt_br2"
00:26:24.779   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true
00:26:24.779   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:26:24.779  Cannot find device "nvmf_br"
00:26:24.779   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true
00:26:24.779   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:26:24.779  Cannot find device "nvmf_init_if"
00:26:24.779   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true
00:26:24.779   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:26:24.779  Cannot find device "nvmf_init_if2"
00:26:24.779   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true
00:26:24.779   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:26:24.779  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:26:24.779   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true
00:26:24.779   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:26:24.779  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:26:24.779   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true
00:26:24.779   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:26:24.779   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:26:24.779   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:26:24.779   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:26:24.779   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:26:24.779   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:26:24.779   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:26:24.779   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:26:24.779   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:26:24.779   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:26:24.779   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:26:24.779   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:26:24.779   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:26:24.779   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:26:24.779   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:26:24.779   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:26:24.779   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:26:25.038   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:26:25.038   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:26:25.038   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:26:25.038   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:26:25.038   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:26:25.038   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:26:25.038   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:26:25.038   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:26:25.038   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:26:25.038   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:26:25.038   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:26:25.038   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:26:25.038   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:26:25.038   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:26:25.038   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:26:25.038   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:26:25.038  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:26:25.038  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms
00:26:25.038  
00:26:25.038  --- 10.0.0.3 ping statistics ---
00:26:25.038  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:25.038  rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms
00:26:25.038   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:26:25.038  PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:26:25.038  64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms
00:26:25.038  
00:26:25.038  --- 10.0.0.4 ping statistics ---
00:26:25.038  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:25.038  rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms
00:26:25.038   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:26:25.038  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:26:25.038  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms
00:26:25.038  
00:26:25.038  --- 10.0.0.1 ping statistics ---
00:26:25.038  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:25.038  rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms
00:26:25.038   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:26:25.038  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:26:25.038  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms
00:26:25.038  
00:26:25.038  --- 10.0.0.2 ping statistics ---
00:26:25.038  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:25.038  rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms
00:26:25.038   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:26:25.038   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@461 -- # return 0
00:26:25.038   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:26:25.038   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:26:25.038   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:26:25.038   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:26:25.038   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:26:25.038   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:26:25.038   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp
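(The nvmf_veth_init sequence above, after the expected "Cannot find device" cleanup misses, builds a small bridged topology: host-side initiator veths at 10.0.0.1/2 and target veths moved into the nvmf_tgt_ns_spdk namespace at 10.0.0.3/4, all enslaved to nvmf_br, with iptables ACCEPT rules for port 4420 and ping checks in both directions. Boiled down to one initiator/target pair, using the same commands seen in the log:)
   ip netns add nvmf_tgt_ns_spdk
   ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator side, 10.0.0.1/24
   ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br         # target side, moved into the netns
   ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
   ip addr add 10.0.0.1/24 dev nvmf_init_if
   ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
   ip link add nvmf_br type bridge
   ip link set nvmf_init_br master nvmf_br                          # bridge the two sides together
   ip link set nvmf_tgt_br master nvmf_br
   iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT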
00:26:25.038   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE
00:26:25.038   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:26:25.038   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable
00:26:25.039   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:26:25.039   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=110349
00:26:25.039   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:26:25.039   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 110349
00:26:25.039   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 110349 ']'
00:26:25.039  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:25.039   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:25.039   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:26:25.039   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:25.039   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:26:25.039   19:10:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:26:25.039  [2024-12-13 19:10:56.823433] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:26:25.039  [2024-12-13 19:10:56.823736] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:25.297  [2024-12-13 19:10:56.970824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:26:25.297  [2024-12-13 19:10:57.009524] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:26:25.297  [2024-12-13 19:10:57.009574] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:25.297  [2024-12-13 19:10:57.009585] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:26:25.297  [2024-12-13 19:10:57.009592] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:26:25.297  [2024-12-13 19:10:57.009598] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:26:25.297  [2024-12-13 19:10:57.010830] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:26:25.297  [2024-12-13 19:10:57.011031] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:26:25.297  [2024-12-13 19:10:57.011034] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
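Note on the startup above: the -m 0xE mask handed to nvmf_tgt is binary 1110, i.e. cores 1-3, which is why the target reports "Total cores available: 3" and the three reactors land on cores 1, 2 and 3. A minimal shell sketch of that decoding, not part of the test run itself:

  mask=0xE                                  # core mask passed via nvmfappstart -m 0xE
  for core in $(seq 0 3); do
    (( (mask >> core) & 1 )) && echo "reactor expected on core $core"
  done
  # prints cores 1, 2 and 3, matching the "Reactor started on core N" notices above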
00:26:26.232   19:10:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:26.232   19:10:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:26:26.232   19:10:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:26:26.232   19:10:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable
00:26:26.232   19:10:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:26:26.232   19:10:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:26:26.232   19:10:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:26:26.232  [2024-12-13 19:10:58.051514] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
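For reference, reproducing the transport-creation step above by hand against a running nvmf_tgt would look roughly like this; the script path is copied from this log, and nvmf_get_transports is only suggested here as a way to confirm the result:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # flags copied verbatim from host/failover.sh@22 (-u 8192 is the in-capsule data size)
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_get_transports        # should now list the tcp transport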
00:26:26.491   19:10:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:26:26.749  Malloc0
00:26:26.749   19:10:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:26:27.007   19:10:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:26:27.266   19:10:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:26:27.524  [2024-12-13 19:10:59.260196] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:26:27.524   19:10:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
00:26:27.783  [2024-12-13 19:10:59.548404] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 ***
00:26:27.783   19:10:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422
00:26:28.042  [2024-12-13 19:10:59.776637] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 ***
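Condensed, the target-side provisioning that just ran (failover.sh lines 23-28) is the RPC sequence below; the loop is only a restatement of the three add_listener calls shown above:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC bdev_malloc_create 64 512 -b Malloc0          # 64 MiB malloc bdev, 512 B blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do                     # three listeners the host fails over across
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s "$port"
  done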
00:26:28.042   19:10:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=110461
00:26:28.042   19:10:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
00:26:28.042   19:10:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:26:28.042   19:10:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 110461 /var/tmp/bdevperf.sock
00:26:28.042   19:10:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 110461 ']'
00:26:28.042   19:10:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:26:28.042   19:10:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:26:28.042  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:26:28.042   19:10:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:26:28.042   19:10:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:26:28.042   19:10:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:26:29.418   19:11:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:29.418   19:11:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:26:29.418   19:11:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:26:29.418  NVMe0n1
00:26:29.418   19:11:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:26:29.676  
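The two bdev_nvme_attach_controller calls above register ports 4420 and 4421 as paths of the same controller NVMe0; with -x failover the extra path acts as a standby that bdev_nvme switches to when the active path drops, which is what the listener removals below exercise. A manual restatement, with bdev_nvme_get_controllers added only as a way to inspect the registered paths:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  SOCK=/var/tmp/bdevperf.sock
  $RPC -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
       -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover     # initial active path
  $RPC -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 \
       -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover     # standby path
  $RPC -s $SOCK bdev_nvme_get_controllers -n NVMe0           # inspect the paths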
00:26:29.934   19:11:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=110503
00:26:29.934   19:11:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:26:29.934   19:11:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1
00:26:30.869   19:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:26:31.127  [2024-12-13 19:11:02.775515] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d340 is same with the state(6) to be set
00:26:31.127  [2024-12-13 19:11:02] (message above repeated 23 more times for tqpair=0x202d340 between 19:11:02.775585 and 19:11:02.775780)
00:26:31.127   19:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:26:34.408   19:11:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:26:34.408  
00:26:34.408   19:11:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
00:26:34.668  [2024-12-13 19:11:06.408010] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202dca0 is same with the state(6) to be set
00:26:34.668  [2024-12-13 19:11:06] (message above repeated 125 more times for tqpair=0x202dca0 between 19:11:06.408073 and 19:11:06.409172)
00:26:34.669   19:11:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:26:37.956   19:11:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:26:37.956  [2024-12-13 19:11:09.696436] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:26:37.956   19:11:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:26:39.333   19:11:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422
00:26:39.333  [2024-12-13 19:11:11.014190] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20388b0 is same with the state(6) to be set
00:26:39.333  [2024-12-13 19:11:11] (message above repeated 51 more times for tqpair=0x20388b0 between 19:11:11.014284 and 19:11:11.014734)
00:26:39.334   19:11:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 110503
00:26:45.906  {
00:26:45.906    "results": [
00:26:45.906      {
00:26:45.906        "job": "NVMe0n1",
00:26:45.906        "core_mask": "0x1",
00:26:45.906        "workload": "verify",
00:26:45.906        "status": "finished",
00:26:45.906        "verify_range": {
00:26:45.906          "start": 0,
00:26:45.906          "length": 16384
00:26:45.906        },
00:26:45.906        "queue_depth": 128,
00:26:45.906        "io_size": 4096,
00:26:45.906        "runtime": 15.006784,
00:26:45.906        "iops": 9835.884890460207,
00:26:45.906        "mibps": 38.421425353360185,
00:26:45.906        "io_failed": 3421,
00:26:45.906        "io_timeout": 0,
00:26:45.906        "avg_latency_us": 12689.879863768188,
00:26:45.906        "min_latency_us": 573.44,
00:26:45.906        "max_latency_us": 50760.61090909091
00:26:45.906      }
00:26:45.906    ],
00:26:45.906    "core_count": 1
00:26:45.906  }
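A quick sanity check on the bdevperf summary above (numbers copied from the JSON): throughput in MiB/s is iops * io_size / 2^20, and the completed I/O count is roughly iops * runtime.

  awk 'BEGIN {
    iops = 9835.884890460207; io_size = 4096; runtime = 15.006784   # from the JSON above
    printf "MiB/s     = %.2f\n", iops * io_size / (1024 * 1024)     # ~38.42, as reported
    printf "I/Os done = %.0f\n", iops * runtime                     # ~147600 over the 15 s run
  }'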
00:26:45.906   19:11:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 110461
00:26:45.906   19:11:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 110461 ']'
00:26:45.906   19:11:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 110461
00:26:45.906    19:11:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:26:45.906   19:11:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:45.906    19:11:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 110461
00:26:45.906  killing process with pid 110461
00:26:45.906   19:11:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:26:45.906   19:11:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:26:45.906   19:11:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 110461'
00:26:45.906   19:11:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 110461
00:26:45.906   19:11:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 110461
00:26:45.906   19:11:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:26:45.906  [2024-12-13 19:10:59.863281] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:26:45.906  [2024-12-13 19:10:59.863476] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110461 ]
00:26:45.906  [2024-12-13 19:11:00.019871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:45.906  [2024-12-13 19:11:00.070133] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:26:45.906  Running I/O for 15 seconds...
00:26:45.906      10544.00 IOPS,    41.19 MiB/s
[2024-12-13T19:11:17.730Z] [2024-12-13 19:11:02.776720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:95976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.906  [2024-12-13 19:11:02.776766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.906  [2024-12-13 19:11:02.776795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:96112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.906  [2024-12-13 19:11:02.776815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.906  [2024-12-13 19:11:02.776835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:96120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.906  [2024-12-13 19:11:02.776853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.906  [2024-12-13 19:11:02.776872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:96128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.906  [2024-12-13 19:11:02.776888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.906  [2024-12-13 19:11:02.776906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:96136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.906  [2024-12-13 19:11:02.776923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.906  [2024-12-13 19:11:02.776941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:96144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.906  [2024-12-13 19:11:02.776957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.906  [2024-12-13 19:11:02.776975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:96152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.906  [2024-12-13 19:11:02.776991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.906  [2024-12-13 19:11:02.777009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:96160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.906  [2024-12-13 19:11:02.777025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.906  [2024-12-13 19:11:02.777043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:96168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.906  [2024-12-13 19:11:02.777060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.906  [2024-12-13 19:11:02.777078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:96176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.906  [2024-12-13 19:11:02.777094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.906  [2024-12-13 19:11:02.777112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:96184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.906  [2024-12-13 19:11:02.777128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.906  [2024-12-13 19:11:02.777178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:96192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.906  [2024-12-13 19:11:02.777197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.907  [2024-12-13 19:11:02.777215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:96200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.907  [2024-12-13 19:11:02.777232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.907  [2024-12-13 19:11:02.777249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:96208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.907  [2024-12-13 19:11:02.777301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.907  [2024-12-13 19:11:02.777339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:96216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.907  [2024-12-13 19:11:02.777364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.907  [2024-12-13 19:11:02.777385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:96224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.907  [2024-12-13 19:11:02.777406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.907  [2024-12-13 19:11:02.777426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:96232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.907  [2024-12-13 19:11:02.777444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.907  [2024-12-13 19:11:02.777463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:96240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.907  [2024-12-13 19:11:02.777480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.907  [2024-12-13 19:11:02.777498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:96248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.907  [2024-12-13 19:11:02.777515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.907  [2024-12-13 19:11:02.777533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:96256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.907  [2024-12-13 19:11:02.777550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.907  [2024-12-13 19:11:02.777586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:96264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.907  [2024-12-13 19:11:02.777602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.907  [2024-12-13 19:11:02.777620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:96272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.907  [2024-12-13 19:11:02.777636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.907  [2024-12-13 19:11:02.777654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:96280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.907  [2024-12-13 19:11:02.777670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.907  [2024-12-13 19:11:02.777688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:96288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.907  [2024-12-13 19:11:02.777727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.907  [2024-12-13 19:11:02.777748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:96296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.907  [2024-12-13 19:11:02.777766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.907  [2024-12-13 19:11:02.777784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:96304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.907  [2024-12-13 19:11:02.777801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.907  [2024-12-13 19:11:02.777819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:96312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.907  [2024-12-13 19:11:02.777836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.907  [2024-12-13 19:11:02.777853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:96320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.907  [2024-12-13 19:11:02.777870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.907  [2024-12-13 19:11:02.777887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:96328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.907  [2024-12-13 19:11:02.777903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.907  [2024-12-13 19:11:02.777921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:96336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.907  [2024-12-13 19:11:02.777938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.907  [2024-12-13 19:11:02.777956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:96344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.907  [2024-12-13 19:11:02.777972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.907  [2024-12-13 19:11:02.777990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:95984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.907  [2024-12-13 19:11:02.778007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.907  [2024-12-13 19:11:02.778025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:96352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.907  [2024-12-13 19:11:02.778041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.907  [2024-12-13 19:11:02.778059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:96360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.907  [2024-12-13 19:11:02.778075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.907  [2024-12-13 19:11:02.778093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:96368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.907  [2024-12-13 19:11:02.778109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.907  [2024-12-13 19:11:02.778127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:96376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.907  [2024-12-13 19:11:02.778143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.907  [2024-12-13 19:11:02.778171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:96384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.907  [2024-12-13 19:11:02.778188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.907  [2024-12-13 19:11:02.778206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:96392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.907  [2024-12-13 19:11:02.778236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.907  [2024-12-13 19:11:02.778257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:96400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.907  [2024-12-13 19:11:02.778274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.907  [2024-12-13 19:11:02.778292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:96408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.907  [2024-12-13 19:11:02.778308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.907  [2024-12-13 19:11:02.778326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:96416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.907  [2024-12-13 19:11:02.778342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.907  [2024-12-13 19:11:02.778360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:96424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.907  [2024-12-13 19:11:02.778376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.907  [2024-12-13 19:11:02.778394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:96432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.907  [2024-12-13 19:11:02.778410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.907  [2024-12-13 19:11:02.778428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:96440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.907  [2024-12-13 19:11:02.778444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.907  [2024-12-13 19:11:02.778462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:96448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.907  [2024-12-13 19:11:02.778478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.907  [2024-12-13 19:11:02.778497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:96456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.907  [2024-12-13 19:11:02.778513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.907  [2024-12-13 19:11:02.778532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:96464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.907  [2024-12-13 19:11:02.778548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.907  [2024-12-13 19:11:02.778566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:96472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.907  [2024-12-13 19:11:02.778582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.907  [2024-12-13 19:11:02.778600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:96480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.907  [2024-12-13 19:11:02.778616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.907  [2024-12-13 19:11:02.778644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:96488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.907  [2024-12-13 19:11:02.778661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.907  [2024-12-13 19:11:02.778679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:96496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.907  [2024-12-13 19:11:02.778695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.907  [2024-12-13 19:11:02.778713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:96504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.907  [2024-12-13 19:11:02.778747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.907  [2024-12-13 19:11:02.778766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:96512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.908  [2024-12-13 19:11:02.778782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.908  [2024-12-13 19:11:02.778801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:96520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.908  [2024-12-13 19:11:02.778835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.908  [2024-12-13 19:11:02.778854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:96528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.908  [2024-12-13 19:11:02.778871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.908  [2024-12-13 19:11:02.778906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:96536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.908  [2024-12-13 19:11:02.778923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.908  [2024-12-13 19:11:02.778950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:96544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.908  [2024-12-13 19:11:02.778967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.908  [2024-12-13 19:11:02.778987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:96552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.908  [2024-12-13 19:11:02.779004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.908  [2024-12-13 19:11:02.779024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:96560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.908  [2024-12-13 19:11:02.779042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.908  [2024-12-13 19:11:02.779063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:96568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.908  [2024-12-13 19:11:02.779080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.908  [2024-12-13 19:11:02.779109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:96576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.908  [2024-12-13 19:11:02.779143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.908  [2024-12-13 19:11:02.779163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:96584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.908  [2024-12-13 19:11:02.779188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.908  [2024-12-13 19:11:02.779209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:96592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.908  [2024-12-13 19:11:02.779226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.908  [2024-12-13 19:11:02.779245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:96600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.908  [2024-12-13 19:11:02.779262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.908  [2024-12-13 19:11:02.779294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:96608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.908  [2024-12-13 19:11:02.779315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.908  [2024-12-13 19:11:02.779334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:96616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.908  [2024-12-13 19:11:02.779352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.908  [2024-12-13 19:11:02.779371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:96624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.908  [2024-12-13 19:11:02.779388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.908  [2024-12-13 19:11:02.779407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:96632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.908  [2024-12-13 19:11:02.779424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.908  [2024-12-13 19:11:02.779443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:96640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.908  [2024-12-13 19:11:02.779460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.908  [2024-12-13 19:11:02.779479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:96648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.908  [2024-12-13 19:11:02.779496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.908  [2024-12-13 19:11:02.779515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:96656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.908  [2024-12-13 19:11:02.779532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.908  [2024-12-13 19:11:02.779571] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.908  [2024-12-13 19:11:02.779592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96664 len:8 PRP1 0x0 PRP2 0x0
00:26:45.908  [2024-12-13 19:11:02.779609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.908  [2024-12-13 19:11:02.779630] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.908  [2024-12-13 19:11:02.779645] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.908  [2024-12-13 19:11:02.779659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96672 len:8 PRP1 0x0 PRP2 0x0
00:26:45.908  [2024-12-13 19:11:02.779675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.908  [2024-12-13 19:11:02.779704] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.908  [2024-12-13 19:11:02.779725] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.908  [2024-12-13 19:11:02.779739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96680 len:8 PRP1 0x0 PRP2 0x0
00:26:45.908  [2024-12-13 19:11:02.779761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.908  [2024-12-13 19:11:02.779779] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.908  [2024-12-13 19:11:02.779792] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.908  [2024-12-13 19:11:02.779805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96688 len:8 PRP1 0x0 PRP2 0x0
00:26:45.908  [2024-12-13 19:11:02.779821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.908  [2024-12-13 19:11:02.779838] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.908  [2024-12-13 19:11:02.779851] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.908  [2024-12-13 19:11:02.779864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96696 len:8 PRP1 0x0 PRP2 0x0
00:26:45.908  [2024-12-13 19:11:02.779881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.908  [2024-12-13 19:11:02.779897] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.908  [2024-12-13 19:11:02.779910] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.908  [2024-12-13 19:11:02.779923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96704 len:8 PRP1 0x0 PRP2 0x0
00:26:45.908  [2024-12-13 19:11:02.779939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.908  [2024-12-13 19:11:02.779957] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.908  [2024-12-13 19:11:02.779969] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.908  [2024-12-13 19:11:02.779982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96712 len:8 PRP1 0x0 PRP2 0x0
00:26:45.908  [2024-12-13 19:11:02.780000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.908  [2024-12-13 19:11:02.780017] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.908  [2024-12-13 19:11:02.780029] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.908  [2024-12-13 19:11:02.780042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96720 len:8 PRP1 0x0 PRP2 0x0
00:26:45.908  [2024-12-13 19:11:02.780058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.908  [2024-12-13 19:11:02.780074] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.908  [2024-12-13 19:11:02.780087] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.908  [2024-12-13 19:11:02.780101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96728 len:8 PRP1 0x0 PRP2 0x0
00:26:45.908  [2024-12-13 19:11:02.780118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.908  [2024-12-13 19:11:02.780135] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.908  [2024-12-13 19:11:02.780147] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.908  [2024-12-13 19:11:02.780161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96736 len:8 PRP1 0x0 PRP2 0x0
00:26:45.908  [2024-12-13 19:11:02.780186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.908  [2024-12-13 19:11:02.780204] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.908  [2024-12-13 19:11:02.780234] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.908  [2024-12-13 19:11:02.780251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95992 len:8 PRP1 0x0 PRP2 0x0
00:26:45.908  [2024-12-13 19:11:02.780273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.908  [2024-12-13 19:11:02.780292] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.908  [2024-12-13 19:11:02.780305] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.908  [2024-12-13 19:11:02.780318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96000 len:8 PRP1 0x0 PRP2 0x0
00:26:45.908  [2024-12-13 19:11:02.780334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.908  [2024-12-13 19:11:02.780351] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.908  [2024-12-13 19:11:02.780364] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.908  [2024-12-13 19:11:02.780377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96008 len:8 PRP1 0x0 PRP2 0x0
00:26:45.908  [2024-12-13 19:11:02.780393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.908  [2024-12-13 19:11:02.780410] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.908  [2024-12-13 19:11:02.780422] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.909  [2024-12-13 19:11:02.780435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96016 len:8 PRP1 0x0 PRP2 0x0
00:26:45.909  [2024-12-13 19:11:02.780452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.909  [2024-12-13 19:11:02.780469] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.909  [2024-12-13 19:11:02.780481] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.909  [2024-12-13 19:11:02.780494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96024 len:8 PRP1 0x0 PRP2 0x0
00:26:45.909  [2024-12-13 19:11:02.780510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.909  [2024-12-13 19:11:02.780527] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.909  [2024-12-13 19:11:02.780541] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.909  [2024-12-13 19:11:02.780554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96032 len:8 PRP1 0x0 PRP2 0x0
00:26:45.909  [2024-12-13 19:11:02.780570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.909  [2024-12-13 19:11:02.780588] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.909  [2024-12-13 19:11:02.780601] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.909  [2024-12-13 19:11:02.780614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96040 len:8 PRP1 0x0 PRP2 0x0
00:26:45.909  [2024-12-13 19:11:02.780630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.909  [2024-12-13 19:11:02.780646] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.909  [2024-12-13 19:11:02.780659] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.909  [2024-12-13 19:11:02.780680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96744 len:8 PRP1 0x0 PRP2 0x0
00:26:45.909  [2024-12-13 19:11:02.780698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.909  [2024-12-13 19:11:02.780716] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.909  [2024-12-13 19:11:02.780733] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.909  [2024-12-13 19:11:02.780747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96752 len:8 PRP1 0x0 PRP2 0x0
00:26:45.909  [2024-12-13 19:11:02.780768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.909  [2024-12-13 19:11:02.780786] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.909  [2024-12-13 19:11:02.780799] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.909  [2024-12-13 19:11:02.780812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96760 len:8 PRP1 0x0 PRP2 0x0
00:26:45.909  [2024-12-13 19:11:02.780828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.909  [2024-12-13 19:11:02.780845] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.909  [2024-12-13 19:11:02.780858] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.909  [2024-12-13 19:11:02.780871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96768 len:8 PRP1 0x0 PRP2 0x0
00:26:45.909  [2024-12-13 19:11:02.780887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.909  [2024-12-13 19:11:02.780904] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.909  [2024-12-13 19:11:02.780917] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.909  [2024-12-13 19:11:02.780931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96776 len:8 PRP1 0x0 PRP2 0x0
00:26:45.909  [2024-12-13 19:11:02.780946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.909  [2024-12-13 19:11:02.780963] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.909  [2024-12-13 19:11:02.780977] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.909  [2024-12-13 19:11:02.780990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96784 len:8 PRP1 0x0 PRP2 0x0
00:26:45.909  [2024-12-13 19:11:02.781006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.909  [2024-12-13 19:11:02.781023] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.909  [2024-12-13 19:11:02.781036] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.909  [2024-12-13 19:11:02.781049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96792 len:8 PRP1 0x0 PRP2 0x0
00:26:45.909  [2024-12-13 19:11:02.781065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.909  [2024-12-13 19:11:02.781082] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.909  [2024-12-13 19:11:02.781095] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.909  [2024-12-13 19:11:02.781108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96800 len:8 PRP1 0x0 PRP2 0x0
00:26:45.909  [2024-12-13 19:11:02.781124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.909  [2024-12-13 19:11:02.781141] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.909  [2024-12-13 19:11:02.781162] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.909  [2024-12-13 19:11:02.781176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96808 len:8 PRP1 0x0 PRP2 0x0
00:26:45.909  [2024-12-13 19:11:02.781192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.909  [2024-12-13 19:11:02.781210] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.909  [2024-12-13 19:11:02.781240] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.909  [2024-12-13 19:11:02.781256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96816 len:8 PRP1 0x0 PRP2 0x0
00:26:45.909  [2024-12-13 19:11:02.781277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.909  [2024-12-13 19:11:02.781295] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.909  [2024-12-13 19:11:02.781308] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.909  [2024-12-13 19:11:02.781322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96824 len:8 PRP1 0x0 PRP2 0x0
00:26:45.909  [2024-12-13 19:11:02.781338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.909  [2024-12-13 19:11:02.781355] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.909  [2024-12-13 19:11:02.781368] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.909  [2024-12-13 19:11:02.781381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96832 len:8 PRP1 0x0 PRP2 0x0
00:26:45.909  [2024-12-13 19:11:02.781397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.909  [2024-12-13 19:11:02.781414] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.909  [2024-12-13 19:11:02.781427] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.909  [2024-12-13 19:11:02.781440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96840 len:8 PRP1 0x0 PRP2 0x0
00:26:45.909  [2024-12-13 19:11:02.781456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.909  [2024-12-13 19:11:02.781473] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.909  [2024-12-13 19:11:02.781486] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.909  [2024-12-13 19:11:02.781500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96848 len:8 PRP1 0x0 PRP2 0x0
00:26:45.909  [2024-12-13 19:11:02.781516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.909  [2024-12-13 19:11:02.781533] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.909  [2024-12-13 19:11:02.781545] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.909  [2024-12-13 19:11:02.781559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96856 len:8 PRP1 0x0 PRP2 0x0
00:26:45.909  [2024-12-13 19:11:02.781575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.909  [2024-12-13 19:11:02.781591] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.909  [2024-12-13 19:11:02.781604] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.909  [2024-12-13 19:11:02.781617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96864 len:8 PRP1 0x0 PRP2 0x0
00:26:45.909  [2024-12-13 19:11:02.781633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.909  [2024-12-13 19:11:02.781659] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.909  [2024-12-13 19:11:02.781673] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.909  [2024-12-13 19:11:02.781686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96872 len:8 PRP1 0x0 PRP2 0x0
00:26:45.909  [2024-12-13 19:11:02.781712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.909  [2024-12-13 19:11:02.781731] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.909  [2024-12-13 19:11:02.781749] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.909  [2024-12-13 19:11:02.781763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96880 len:8 PRP1 0x0 PRP2 0x0
00:26:45.909  [2024-12-13 19:11:02.781785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.909  [2024-12-13 19:11:02.781803] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.909  [2024-12-13 19:11:02.781824] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.909  [2024-12-13 19:11:02.781837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96888 len:8 PRP1 0x0 PRP2 0x0
00:26:45.909  [2024-12-13 19:11:02.781853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.909  [2024-12-13 19:11:02.781870] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.909  [2024-12-13 19:11:02.781883] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.909  [2024-12-13 19:11:02.781897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96896 len:8 PRP1 0x0 PRP2 0x0
00:26:45.909  [2024-12-13 19:11:02.781912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.909  [2024-12-13 19:11:02.781930] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.909  [2024-12-13 19:11:02.781943] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.910  [2024-12-13 19:11:02.781956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96904 len:8 PRP1 0x0 PRP2 0x0
00:26:45.910  [2024-12-13 19:11:02.781973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.910  [2024-12-13 19:11:02.781989] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.910  [2024-12-13 19:11:02.782002] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.910  [2024-12-13 19:11:02.782015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96912 len:8 PRP1 0x0 PRP2 0x0
00:26:45.910  [2024-12-13 19:11:02.782031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.910  [2024-12-13 19:11:02.782048] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.910  [2024-12-13 19:11:02.782060] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.910  [2024-12-13 19:11:02.782074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96920 len:8 PRP1 0x0 PRP2 0x0
00:26:45.910  [2024-12-13 19:11:02.782090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.910  [2024-12-13 19:11:02.782107] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.910  [2024-12-13 19:11:02.782120] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.910  [2024-12-13 19:11:02.782133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96928 len:8 PRP1 0x0 PRP2 0x0
00:26:45.910  [2024-12-13 19:11:02.782158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.910  [2024-12-13 19:11:02.782176] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.910  [2024-12-13 19:11:02.782190] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.910  [2024-12-13 19:11:02.782203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96936 len:8 PRP1 0x0 PRP2 0x0
00:26:45.910  [2024-12-13 19:11:02.782229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.910  [2024-12-13 19:11:02.782249] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.910  [2024-12-13 19:11:02.782268] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.910  [2024-12-13 19:11:02.782282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96944 len:8 PRP1 0x0 PRP2 0x0
00:26:45.910  [2024-12-13 19:11:02.782304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.910  [2024-12-13 19:11:02.782322] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.910  [2024-12-13 19:11:02.782334] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.910  [2024-12-13 19:11:02.782347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96952 len:8 PRP1 0x0 PRP2 0x0
00:26:45.910  [2024-12-13 19:11:02.782363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.910  [2024-12-13 19:11:02.782380] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.910  [2024-12-13 19:11:02.782393] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.910  [2024-12-13 19:11:02.782406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96960 len:8 PRP1 0x0 PRP2 0x0
00:26:45.910  [2024-12-13 19:11:02.782423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.910  [2024-12-13 19:11:02.782439] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.910  [2024-12-13 19:11:02.782452] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.910  [2024-12-13 19:11:02.782465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96968 len:8 PRP1 0x0 PRP2 0x0
00:26:45.910  [2024-12-13 19:11:02.782481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.910  [2024-12-13 19:11:02.794358] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.910  [2024-12-13 19:11:02.794404] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.910  [2024-12-13 19:11:02.794428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96976 len:8 PRP1 0x0 PRP2 0x0
00:26:45.910  [2024-12-13 19:11:02.794454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.910  [2024-12-13 19:11:02.794479] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.910  [2024-12-13 19:11:02.794497] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.910  [2024-12-13 19:11:02.794516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96984 len:8 PRP1 0x0 PRP2 0x0
00:26:45.910  [2024-12-13 19:11:02.794547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.910  [2024-12-13 19:11:02.794575] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.910  [2024-12-13 19:11:02.794593] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.910  [2024-12-13 19:11:02.794631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96992 len:8 PRP1 0x0 PRP2 0x0
00:26:45.910  [2024-12-13 19:11:02.794657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.910  [2024-12-13 19:11:02.794681] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.910  [2024-12-13 19:11:02.794708] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.910  [2024-12-13 19:11:02.794727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96048 len:8 PRP1 0x0 PRP2 0x0
00:26:45.910  [2024-12-13 19:11:02.794750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.910  [2024-12-13 19:11:02.794773] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.910  [2024-12-13 19:11:02.794792] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.910  [2024-12-13 19:11:02.794811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96056 len:8 PRP1 0x0 PRP2 0x0
00:26:45.910  [2024-12-13 19:11:02.794835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.910  [2024-12-13 19:11:02.794859] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.910  [2024-12-13 19:11:02.794876] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.910  [2024-12-13 19:11:02.794895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96064 len:8 PRP1 0x0 PRP2 0x0
00:26:45.910  [2024-12-13 19:11:02.794917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.910  [2024-12-13 19:11:02.794941] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.910  [2024-12-13 19:11:02.794959] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.910  [2024-12-13 19:11:02.794977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96072 len:8 PRP1 0x0 PRP2 0x0
00:26:45.910  [2024-12-13 19:11:02.794999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.910  [2024-12-13 19:11:02.795023] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.910  [2024-12-13 19:11:02.795040] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.910  [2024-12-13 19:11:02.795058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96080 len:8 PRP1 0x0 PRP2 0x0
00:26:45.910  [2024-12-13 19:11:02.795081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.910  [2024-12-13 19:11:02.795105] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.910  [2024-12-13 19:11:02.795122] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.910  [2024-12-13 19:11:02.795140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96088 len:8 PRP1 0x0 PRP2 0x0
00:26:45.910  [2024-12-13 19:11:02.795163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.910  [2024-12-13 19:11:02.795187] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.910  [2024-12-13 19:11:02.795204] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.910  [2024-12-13 19:11:02.795224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96096 len:8 PRP1 0x0 PRP2 0x0
00:26:45.910  [2024-12-13 19:11:02.795278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.910  [2024-12-13 19:11:02.795319] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.910  [2024-12-13 19:11:02.795338] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.910  [2024-12-13 19:11:02.795357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96104 len:8 PRP1 0x0 PRP2 0x0
00:26:45.910  [2024-12-13 19:11:02.795379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.910  [2024-12-13 19:11:02.795462] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421
00:26:45.911  [2024-12-13 19:11:02.795559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:45.911  [2024-12-13 19:11:02.795605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.911  [2024-12-13 19:11:02.795652] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:45.911  [2024-12-13 19:11:02.795675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.911  [2024-12-13 19:11:02.795700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:45.911  [2024-12-13 19:11:02.795722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.911  [2024-12-13 19:11:02.795747] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:45.911  [2024-12-13 19:11:02.795770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.911  [2024-12-13 19:11:02.795793] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:26:45.911  [2024-12-13 19:11:02.795889] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65d670 (9): Bad file descriptor
00:26:45.911  [2024-12-13 19:11:02.801471] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:26:45.911  [2024-12-13 19:11:02.829618] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:26:45.911      10111.50 IOPS,    39.50 MiB/s
[2024-12-13T19:11:17.735Z]     10192.00 IOPS,    39.81 MiB/s
[2024-12-13T19:11:17.735Z]     10259.75 IOPS,    40.08 MiB/s
[2024-12-13T19:11:17.735Z] [2024-12-13 19:11:06.410765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.911  [2024-12-13 19:11:06.410815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.911  [2024-12-13 19:11:06.410844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:5512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.911  [2024-12-13 19:11:06.410862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.911  [2024-12-13 19:11:06.410881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:5520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.911  [2024-12-13 19:11:06.410898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.911  [2024-12-13 19:11:06.410916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:5528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.911  [2024-12-13 19:11:06.410932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.911  [2024-12-13 19:11:06.410950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:5536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.911  [2024-12-13 19:11:06.410991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.911  [2024-12-13 19:11:06.411011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:5544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.911  [2024-12-13 19:11:06.411027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.911  [2024-12-13 19:11:06.411045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:5552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.911  [2024-12-13 19:11:06.411060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.911  [2024-12-13 19:11:06.411078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:5560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.911  [2024-12-13 19:11:06.411093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.911  [2024-12-13 19:11:06.411111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:5568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.911  [2024-12-13 19:11:06.411127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.911  [2024-12-13 19:11:06.411144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:5576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.911  [2024-12-13 19:11:06.411160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.911  [2024-12-13 19:11:06.411176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.911  [2024-12-13 19:11:06.411192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.911  [2024-12-13 19:11:06.411210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:5592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.911  [2024-12-13 19:11:06.411225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.911  [2024-12-13 19:11:06.411260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.911  [2024-12-13 19:11:06.411294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.911  [2024-12-13 19:11:06.411314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:5608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.911  [2024-12-13 19:11:06.411330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.911  [2024-12-13 19:11:06.411350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:5616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.911  [2024-12-13 19:11:06.411366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.911  [2024-12-13 19:11:06.411384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:5624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.911  [2024-12-13 19:11:06.411399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.911  [2024-12-13 19:11:06.411418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:5632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.911  [2024-12-13 19:11:06.411435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.911  [2024-12-13 19:11:06.411454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:5640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.911  [2024-12-13 19:11:06.411481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.911  [2024-12-13 19:11:06.411500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:5648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.911  [2024-12-13 19:11:06.411517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.911  [2024-12-13 19:11:06.411535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:5656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.911  [2024-12-13 19:11:06.411551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.911  [2024-12-13 19:11:06.411569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:5664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.911  [2024-12-13 19:11:06.411585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.911  [2024-12-13 19:11:06.411602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:5672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.911  [2024-12-13 19:11:06.411635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.911  [2024-12-13 19:11:06.411652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:5680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.911  [2024-12-13 19:11:06.411668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.911  [2024-12-13 19:11:06.411686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:5688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.911  [2024-12-13 19:11:06.411701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.911  [2024-12-13 19:11:06.411719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:5696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.911  [2024-12-13 19:11:06.411734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.911  [2024-12-13 19:11:06.411752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.911  [2024-12-13 19:11:06.411767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.911  [2024-12-13 19:11:06.411802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:5712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.911  [2024-12-13 19:11:06.411818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.911  [2024-12-13 19:11:06.411836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:5720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.911  [2024-12-13 19:11:06.411852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.911  [2024-12-13 19:11:06.411870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:5728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.911  [2024-12-13 19:11:06.411886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.911  [2024-12-13 19:11:06.411922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:5736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.911  [2024-12-13 19:11:06.411939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.911  [2024-12-13 19:11:06.411967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:5744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.911  [2024-12-13 19:11:06.411984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.911  [2024-12-13 19:11:06.412003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:5752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.911  [2024-12-13 19:11:06.412020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.911  [2024-12-13 19:11:06.412039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:5760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.911  [2024-12-13 19:11:06.412063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.911  [2024-12-13 19:11:06.412083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:5768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.911  [2024-12-13 19:11:06.412100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.911  [2024-12-13 19:11:06.412118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:5776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.912  [2024-12-13 19:11:06.412135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.912  [2024-12-13 19:11:06.412154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.912  [2024-12-13 19:11:06.412171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.912  [2024-12-13 19:11:06.412189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:5792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.912  [2024-12-13 19:11:06.412220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.912  [2024-12-13 19:11:06.412238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:5800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.912  [2024-12-13 19:11:06.412272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.912  [2024-12-13 19:11:06.412291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:5808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.912  [2024-12-13 19:11:06.412325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.912  [2024-12-13 19:11:06.412346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:5832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.912  [2024-12-13 19:11:06.412363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.912  [2024-12-13 19:11:06.412381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:5840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.912  [2024-12-13 19:11:06.412398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.912  [2024-12-13 19:11:06.412417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.912  [2024-12-13 19:11:06.412433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.912  [2024-12-13 19:11:06.412461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:5856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.912  [2024-12-13 19:11:06.412488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.912  [2024-12-13 19:11:06.412508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:5864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.912  [2024-12-13 19:11:06.412525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.912  [2024-12-13 19:11:06.412543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:5872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.912  [2024-12-13 19:11:06.412560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.912  [2024-12-13 19:11:06.412579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:5880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.912  [2024-12-13 19:11:06.412616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.912  [2024-12-13 19:11:06.412633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:5888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.912  [2024-12-13 19:11:06.412650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.912  [2024-12-13 19:11:06.412667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:5896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.912  [2024-12-13 19:11:06.412684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.912  [2024-12-13 19:11:06.412701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:5904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.912  [2024-12-13 19:11:06.412718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.912  [2024-12-13 19:11:06.412736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:5912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.912  [2024-12-13 19:11:06.412752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.912  [2024-12-13 19:11:06.412770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:5920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.912  [2024-12-13 19:11:06.412786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.912  [2024-12-13 19:11:06.412803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:5928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.912  [2024-12-13 19:11:06.412820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.912  [2024-12-13 19:11:06.412838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:5936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.912  [2024-12-13 19:11:06.412854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.912  [2024-12-13 19:11:06.412872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:5944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.912  [2024-12-13 19:11:06.412888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.912  [2024-12-13 19:11:06.412906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:5952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.912  [2024-12-13 19:11:06.412922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.912  [2024-12-13 19:11:06.412947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:5960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.912  [2024-12-13 19:11:06.412965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.912  [2024-12-13 19:11:06.412982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:5968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.912  [2024-12-13 19:11:06.412999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.912  [2024-12-13 19:11:06.413016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:5976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.912  [2024-12-13 19:11:06.413032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.912  [2024-12-13 19:11:06.413050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:5984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.912  [2024-12-13 19:11:06.413066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.912  [2024-12-13 19:11:06.413084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:5992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.912  [2024-12-13 19:11:06.413100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.912  [2024-12-13 19:11:06.413118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:6000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.912  [2024-12-13 19:11:06.413134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.912  [2024-12-13 19:11:06.413153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:6008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.912  [2024-12-13 19:11:06.413169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.912  [2024-12-13 19:11:06.413190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:6016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.912  [2024-12-13 19:11:06.413206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.912  [2024-12-13 19:11:06.413224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:6024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.912  [2024-12-13 19:11:06.413253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.912  [2024-12-13 19:11:06.413273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:6032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.912  [2024-12-13 19:11:06.413290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.912  [2024-12-13 19:11:06.413308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:6040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.912  [2024-12-13 19:11:06.413324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.912  [2024-12-13 19:11:06.413341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:6048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.912  [2024-12-13 19:11:06.413358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.912  [2024-12-13 19:11:06.413376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:6056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.912  [2024-12-13 19:11:06.413397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.912  [2024-12-13 19:11:06.413425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:6064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.912  [2024-12-13 19:11:06.413442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.912  [2024-12-13 19:11:06.413460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:6072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.912  [2024-12-13 19:11:06.413477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.912  [2024-12-13 19:11:06.413494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.912  [2024-12-13 19:11:06.413510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.912  [2024-12-13 19:11:06.413528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:6088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.912  [2024-12-13 19:11:06.413544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.912  [2024-12-13 19:11:06.413561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:6096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.912  [2024-12-13 19:11:06.413578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.912  [2024-12-13 19:11:06.413595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:6104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.912  [2024-12-13 19:11:06.413612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.912  [2024-12-13 19:11:06.413629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.912  [2024-12-13 19:11:06.413645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.913  [2024-12-13 19:11:06.413664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:6120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.913  [2024-12-13 19:11:06.413680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.913  [2024-12-13 19:11:06.413697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:6128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.913  [2024-12-13 19:11:06.413769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.913  [2024-12-13 19:11:06.413789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:6136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.913  [2024-12-13 19:11:06.413807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.913  [2024-12-13 19:11:06.413825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:6144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.913  [2024-12-13 19:11:06.413843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.913  [2024-12-13 19:11:06.413862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:6152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.913  [2024-12-13 19:11:06.413879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.913  [2024-12-13 19:11:06.413898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.913  [2024-12-13 19:11:06.413924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.913  [2024-12-13 19:11:06.413945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.913  [2024-12-13 19:11:06.413962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.913  [2024-12-13 19:11:06.413981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:6176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.913  [2024-12-13 19:11:06.413998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.913  [2024-12-13 19:11:06.414017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:6184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.913  [2024-12-13 19:11:06.414053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.913  [2024-12-13 19:11:06.414089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:6192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.913  [2024-12-13 19:11:06.414105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.913  [2024-12-13 19:11:06.414123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:6200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.913  [2024-12-13 19:11:06.414139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.913  [2024-12-13 19:11:06.414157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:6208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.913  [2024-12-13 19:11:06.414173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.913  [2024-12-13 19:11:06.414191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:6216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.913  [2024-12-13 19:11:06.414207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.913  [2024-12-13 19:11:06.414225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.913  [2024-12-13 19:11:06.414257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.913  [2024-12-13 19:11:06.414308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:6232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.913  [2024-12-13 19:11:06.414326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.913  [2024-12-13 19:11:06.414345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:6240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.913  [2024-12-13 19:11:06.414362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.913  [2024-12-13 19:11:06.414382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:6248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.913  [2024-12-13 19:11:06.414399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.913  [2024-12-13 19:11:06.414417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.913  [2024-12-13 19:11:06.414434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.913  [2024-12-13 19:11:06.414453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:6264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.913  [2024-12-13 19:11:06.414483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.913  [2024-12-13 19:11:06.414503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.913  [2024-12-13 19:11:06.414520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.913  [2024-12-13 19:11:06.414539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:6280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.913  [2024-12-13 19:11:06.414556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.913  [2024-12-13 19:11:06.414575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:6288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.913  [2024-12-13 19:11:06.414608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.913  [2024-12-13 19:11:06.414640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:6296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.913  [2024-12-13 19:11:06.414657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.913  [2024-12-13 19:11:06.414674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.913  [2024-12-13 19:11:06.414691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.913  [2024-12-13 19:11:06.414708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:6312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.913  [2024-12-13 19:11:06.414728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.913  [2024-12-13 19:11:06.414747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:6320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.913  [2024-12-13 19:11:06.414762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.913  [2024-12-13 19:11:06.414783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:6328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.913  [2024-12-13 19:11:06.414800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.913  [2024-12-13 19:11:06.414817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:6336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.913  [2024-12-13 19:11:06.414833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.913  [2024-12-13 19:11:06.414851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:6344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.913  [2024-12-13 19:11:06.414866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.913  [2024-12-13 19:11:06.414907] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.913  [2024-12-13 19:11:06.414927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6352 len:8 PRP1 0x0 PRP2 0x0
00:26:45.913  [2024-12-13 19:11:06.414943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.913  [2024-12-13 19:11:06.414964] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.913  [2024-12-13 19:11:06.414977] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.913  [2024-12-13 19:11:06.414999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6360 len:8 PRP1 0x0 PRP2 0x0
00:26:45.913  [2024-12-13 19:11:06.415034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.913  [2024-12-13 19:11:06.415052] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.913  [2024-12-13 19:11:06.415065] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.913  [2024-12-13 19:11:06.415077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:8 PRP1 0x0 PRP2 0x0
00:26:45.913  [2024-12-13 19:11:06.415092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.913  [2024-12-13 19:11:06.415108] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.913  [2024-12-13 19:11:06.415120] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.913  [2024-12-13 19:11:06.415132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6376 len:8 PRP1 0x0 PRP2 0x0
00:26:45.913  [2024-12-13 19:11:06.415147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.913  [2024-12-13 19:11:06.415164] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.913  [2024-12-13 19:11:06.415175] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.913  [2024-12-13 19:11:06.415188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6384 len:8 PRP1 0x0 PRP2 0x0
00:26:45.913  [2024-12-13 19:11:06.415203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.913  [2024-12-13 19:11:06.415219] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.913  [2024-12-13 19:11:06.415248] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.913  [2024-12-13 19:11:06.415261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6392 len:8 PRP1 0x0 PRP2 0x0
00:26:45.913  [2024-12-13 19:11:06.415277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.913  [2024-12-13 19:11:06.415315] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.913  [2024-12-13 19:11:06.415328] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.913  [2024-12-13 19:11:06.415341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6400 len:8 PRP1 0x0 PRP2 0x0
00:26:45.913  [2024-12-13 19:11:06.415357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.913  [2024-12-13 19:11:06.415373] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.914  [2024-12-13 19:11:06.415386] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.914  [2024-12-13 19:11:06.415398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6408 len:8 PRP1 0x0 PRP2 0x0
00:26:45.914  [2024-12-13 19:11:06.415414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.914  [2024-12-13 19:11:06.415435] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.914  [2024-12-13 19:11:06.415447] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.914  [2024-12-13 19:11:06.415460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6416 len:8 PRP1 0x0 PRP2 0x0
00:26:45.914  [2024-12-13 19:11:06.415475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.914  [2024-12-13 19:11:06.415500] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.914  [2024-12-13 19:11:06.415514] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.914  [2024-12-13 19:11:06.415527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6424 len:8 PRP1 0x0 PRP2 0x0
00:26:45.914  [2024-12-13 19:11:06.415549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.914  [2024-12-13 19:11:06.415566] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.914  [2024-12-13 19:11:06.415579] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.914  [2024-12-13 19:11:06.415592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6432 len:8 PRP1 0x0 PRP2 0x0
00:26:45.914  [2024-12-13 19:11:06.415608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.914  [2024-12-13 19:11:06.415639] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.914  [2024-12-13 19:11:06.415651] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.914  [2024-12-13 19:11:06.415663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6440 len:8 PRP1 0x0 PRP2 0x0
00:26:45.914  [2024-12-13 19:11:06.415679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.914  [2024-12-13 19:11:06.415695] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.914  [2024-12-13 19:11:06.415707] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.914  [2024-12-13 19:11:06.415719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6448 len:8 PRP1 0x0 PRP2 0x0
00:26:45.914  [2024-12-13 19:11:06.415734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.914  [2024-12-13 19:11:06.415750] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.914  [2024-12-13 19:11:06.415762] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.914  [2024-12-13 19:11:06.415774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6456 len:8 PRP1 0x0 PRP2 0x0
00:26:45.914  [2024-12-13 19:11:06.415790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.914  [2024-12-13 19:11:06.415810] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.914  [2024-12-13 19:11:06.415822] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.914  [2024-12-13 19:11:06.415834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:8 PRP1 0x0 PRP2 0x0
00:26:45.914  [2024-12-13 19:11:06.415850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.914  [2024-12-13 19:11:06.415866] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.914  [2024-12-13 19:11:06.415878] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.914  [2024-12-13 19:11:06.415890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6472 len:8 PRP1 0x0 PRP2 0x0
00:26:45.914  [2024-12-13 19:11:06.415904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.914  [2024-12-13 19:11:06.415921] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.914  [2024-12-13 19:11:06.415933] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.914  [2024-12-13 19:11:06.415945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6480 len:8 PRP1 0x0 PRP2 0x0
00:26:45.914  [2024-12-13 19:11:06.415968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.914  [2024-12-13 19:11:06.415984] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.914  [2024-12-13 19:11:06.415996] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.914  [2024-12-13 19:11:06.416009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6488 len:8 PRP1 0x0 PRP2 0x0
00:26:45.914  [2024-12-13 19:11:06.416031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.914  [2024-12-13 19:11:06.428214] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.914  [2024-12-13 19:11:06.428294] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.914  [2024-12-13 19:11:06.428313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:8 PRP1 0x0 PRP2 0x0
00:26:45.914  [2024-12-13 19:11:06.428332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.914  [2024-12-13 19:11:06.428350] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.914  [2024-12-13 19:11:06.428364] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.914  [2024-12-13 19:11:06.428378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6504 len:8 PRP1 0x0 PRP2 0x0
00:26:45.914  [2024-12-13 19:11:06.428395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.914  [2024-12-13 19:11:06.428412] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.914  [2024-12-13 19:11:06.428426] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.914  [2024-12-13 19:11:06.428439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6512 len:8 PRP1 0x0 PRP2 0x0
00:26:45.914  [2024-12-13 19:11:06.428456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.914  [2024-12-13 19:11:06.428474] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.914  [2024-12-13 19:11:06.428487] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.914  [2024-12-13 19:11:06.428501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6520 len:8 PRP1 0x0 PRP2 0x0
00:26:45.914  [2024-12-13 19:11:06.428518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.914  [2024-12-13 19:11:06.428536] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.914  [2024-12-13 19:11:06.428549] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.914  [2024-12-13 19:11:06.428562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5816 len:8 PRP1 0x0 PRP2 0x0
00:26:45.914  [2024-12-13 19:11:06.428611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.914  [2024-12-13 19:11:06.428643] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.914  [2024-12-13 19:11:06.428655] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.914  [2024-12-13 19:11:06.428667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5824 len:8 PRP1 0x0 PRP2 0x0
00:26:45.914  [2024-12-13 19:11:06.428682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.914  [2024-12-13 19:11:06.428759] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.3:4421 to 10.0.0.3:4422
00:26:45.914  [2024-12-13 19:11:06.428829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:45.914  [2024-12-13 19:11:06.428869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.914  [2024-12-13 19:11:06.428890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:45.914  [2024-12-13 19:11:06.428906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.914  [2024-12-13 19:11:06.428923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:45.914  [2024-12-13 19:11:06.428938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.914  [2024-12-13 19:11:06.428955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:45.914  [2024-12-13 19:11:06.428971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.914  [2024-12-13 19:11:06.428987] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:26:45.914  [2024-12-13 19:11:06.429027] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65d670 (9): Bad file descriptor
00:26:45.914  [2024-12-13 19:11:06.434415] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:26:45.914  [2024-12-13 19:11:06.461016] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
00:26:45.914      10103.40 IOPS,    39.47 MiB/s
[2024-12-13T19:11:17.738Z]     10164.67 IOPS,    39.71 MiB/s
[2024-12-13T19:11:17.738Z]     10171.57 IOPS,    39.73 MiB/s
[2024-12-13T19:11:17.738Z]     10181.00 IOPS,    39.77 MiB/s
[2024-12-13T19:11:17.738Z]     10202.89 IOPS,    39.86 MiB/s
[2024-12-13T19:11:17.738Z] [2024-12-13 19:11:11.011379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:45.914  [2024-12-13 19:11:11.011466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.914  [2024-12-13 19:11:11.011491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:45.914  [2024-12-13 19:11:11.011509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.914  [2024-12-13 19:11:11.011527] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:45.914  [2024-12-13 19:11:11.011544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.914  [2024-12-13 19:11:11.011561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:45.914  [2024-12-13 19:11:11.011578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.914  [2024-12-13 19:11:11.011610] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65d670 is same with the state(6) to be set
00:26:45.915  [2024-12-13 19:11:11.015572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:124104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.915  [2024-12-13 19:11:11.015606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.915  [2024-12-13 19:11:11.015635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:124112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.915  [2024-12-13 19:11:11.015654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.915  [2024-12-13 19:11:11.015672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:124120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.915  [2024-12-13 19:11:11.015715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.915  [2024-12-13 19:11:11.015736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:124128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.915  [2024-12-13 19:11:11.015753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.915  [2024-12-13 19:11:11.015771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:124136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.915  [2024-12-13 19:11:11.015787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.915  [2024-12-13 19:11:11.015804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:124144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.915  [2024-12-13 19:11:11.015820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.915  [2024-12-13 19:11:11.015837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:124152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.915  [2024-12-13 19:11:11.015854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.915  [2024-12-13 19:11:11.015873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:124160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.915  [2024-12-13 19:11:11.015900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.915  [2024-12-13 19:11:11.015917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:124168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.915  [2024-12-13 19:11:11.015933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.915  [2024-12-13 19:11:11.015950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:124176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.915  [2024-12-13 19:11:11.015966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.915  [2024-12-13 19:11:11.015984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:124184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.915  [2024-12-13 19:11:11.016000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.915  [2024-12-13 19:11:11.016017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:124192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.915  [2024-12-13 19:11:11.016033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.915  [2024-12-13 19:11:11.016056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:124200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.915  [2024-12-13 19:11:11.016079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.915  [2024-12-13 19:11:11.016096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:124208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.915  [2024-12-13 19:11:11.016112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.915  [2024-12-13 19:11:11.016130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:124216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.915  [2024-12-13 19:11:11.016146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.915  [2024-12-13 19:11:11.016174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:124224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.915  [2024-12-13 19:11:11.016192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.915  [2024-12-13 19:11:11.016209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:124232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.915  [2024-12-13 19:11:11.016227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.915  [2024-12-13 19:11:11.016280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:124240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.915  [2024-12-13 19:11:11.016299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.915  [2024-12-13 19:11:11.016317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:124696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.915  [2024-12-13 19:11:11.016334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.915  [2024-12-13 19:11:11.016353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:124704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.915  [2024-12-13 19:11:11.016370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.915  [2024-12-13 19:11:11.016388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:124712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.915  [2024-12-13 19:11:11.016405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.915  [2024-12-13 19:11:11.016423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:124720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.915  [2024-12-13 19:11:11.016439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.915  [2024-12-13 19:11:11.016458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:124728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.915  [2024-12-13 19:11:11.016474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.915  [2024-12-13 19:11:11.016493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:124736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.915  [2024-12-13 19:11:11.016509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.915  [2024-12-13 19:11:11.016527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:124744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.915  [2024-12-13 19:11:11.016543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.915  [2024-12-13 19:11:11.016561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:124752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.915  [2024-12-13 19:11:11.016578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.915  [2024-12-13 19:11:11.016596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:124760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.915  [2024-12-13 19:11:11.016612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.915  [2024-12-13 19:11:11.016646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:124768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.915  [2024-12-13 19:11:11.016673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.915  [2024-12-13 19:11:11.016692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:124776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.915  [2024-12-13 19:11:11.016709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.915  [2024-12-13 19:11:11.016726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:124784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.915  [2024-12-13 19:11:11.016742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.915  [2024-12-13 19:11:11.016760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:124792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.915  [2024-12-13 19:11:11.016777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.915  [2024-12-13 19:11:11.016795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:124800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.915  [2024-12-13 19:11:11.016811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.915  [2024-12-13 19:11:11.016828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:124808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.915  [2024-12-13 19:11:11.016844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.915  [2024-12-13 19:11:11.016862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:124816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.915  [2024-12-13 19:11:11.016878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.915  [2024-12-13 19:11:11.016896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:124824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.915  [2024-12-13 19:11:11.016911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.916  [2024-12-13 19:11:11.016929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:124832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.916  [2024-12-13 19:11:11.016946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.916  [2024-12-13 19:11:11.016963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:124840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.916  [2024-12-13 19:11:11.016979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.916  [2024-12-13 19:11:11.016997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:124848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.916  [2024-12-13 19:11:11.017014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.916  [2024-12-13 19:11:11.017039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:124248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.916  [2024-12-13 19:11:11.017055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.916  [2024-12-13 19:11:11.017073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:124256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.916  [2024-12-13 19:11:11.017089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.916  [2024-12-13 19:11:11.017117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:124264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.916  [2024-12-13 19:11:11.017134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.916  [2024-12-13 19:11:11.017151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:124272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.916  [2024-12-13 19:11:11.017167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.916  [2024-12-13 19:11:11.017185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:124280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.916  [2024-12-13 19:11:11.017201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.916  [2024-12-13 19:11:11.017219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:124288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.916  [2024-12-13 19:11:11.017265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.916  [2024-12-13 19:11:11.017294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:124296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.916  [2024-12-13 19:11:11.017311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.916  [2024-12-13 19:11:11.017330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:124304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.916  [2024-12-13 19:11:11.017347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.916  [2024-12-13 19:11:11.017365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:124312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.916  [2024-12-13 19:11:11.017382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.916  [2024-12-13 19:11:11.017401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:124320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.916  [2024-12-13 19:11:11.017418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.916  [2024-12-13 19:11:11.017436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:124328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.916  [2024-12-13 19:11:11.017453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.916  [2024-12-13 19:11:11.017474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:124336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.916  [2024-12-13 19:11:11.017490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.916  [2024-12-13 19:11:11.017509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:124344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.916  [2024-12-13 19:11:11.017525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.916  [2024-12-13 19:11:11.017544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:124352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.916  [2024-12-13 19:11:11.017561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.916  [2024-12-13 19:11:11.017579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:124360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.916  [2024-12-13 19:11:11.017605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.916  [2024-12-13 19:11:11.017625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:124368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.916  [2024-12-13 19:11:11.017642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.916  [2024-12-13 19:11:11.017675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:124376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.916  [2024-12-13 19:11:11.017691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.916  [2024-12-13 19:11:11.017736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:124384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.916  [2024-12-13 19:11:11.017761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.916  [2024-12-13 19:11:11.017780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:124392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.916  [2024-12-13 19:11:11.017796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.916  [2024-12-13 19:11:11.017815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:124400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.916  [2024-12-13 19:11:11.017831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.916  [2024-12-13 19:11:11.017853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:124408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.916  [2024-12-13 19:11:11.017870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.916  [2024-12-13 19:11:11.017894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:124416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.916  [2024-12-13 19:11:11.017911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.916  [2024-12-13 19:11:11.017929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:124424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.916  [2024-12-13 19:11:11.017945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.916  [2024-12-13 19:11:11.017963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:124432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.916  [2024-12-13 19:11:11.017980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.916  [2024-12-13 19:11:11.017998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:124440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.916  [2024-12-13 19:11:11.018015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.916  [2024-12-13 19:11:11.018049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:124448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.916  [2024-12-13 19:11:11.018066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.916  [2024-12-13 19:11:11.018083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:124456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.916  [2024-12-13 19:11:11.018099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.916  [2024-12-13 19:11:11.018128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:124464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.916  [2024-12-13 19:11:11.018145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.916  [2024-12-13 19:11:11.018163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:124472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.916  [2024-12-13 19:11:11.018180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.916  [2024-12-13 19:11:11.018204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:124480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.916  [2024-12-13 19:11:11.018221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.916  [2024-12-13 19:11:11.018238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:124488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.916  [2024-12-13 19:11:11.018269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.916  [2024-12-13 19:11:11.018289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:124496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.916  [2024-12-13 19:11:11.018312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.916  [2024-12-13 19:11:11.018330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:124504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.916  [2024-12-13 19:11:11.018345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.916  [2024-12-13 19:11:11.018363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:124512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.916  [2024-12-13 19:11:11.018379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.916  [2024-12-13 19:11:11.018396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:124520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.916  [2024-12-13 19:11:11.018412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.916  [2024-12-13 19:11:11.018430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:124528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.916  [2024-12-13 19:11:11.018446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.916  [2024-12-13 19:11:11.018464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:124536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.917  [2024-12-13 19:11:11.018480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.917  [2024-12-13 19:11:11.018498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:124544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.917  [2024-12-13 19:11:11.018514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.917  [2024-12-13 19:11:11.018531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:124552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.917  [2024-12-13 19:11:11.018547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.917  [2024-12-13 19:11:11.018565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:124560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.917  [2024-12-13 19:11:11.018581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.917  [2024-12-13 19:11:11.018609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:124568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.917  [2024-12-13 19:11:11.018626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.917  [2024-12-13 19:11:11.018644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:124576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.917  [2024-12-13 19:11:11.018660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.917  [2024-12-13 19:11:11.018678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:124584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.917  [2024-12-13 19:11:11.018694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.917  [2024-12-13 19:11:11.018712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:124592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.917  [2024-12-13 19:11:11.018727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.917  [2024-12-13 19:11:11.018745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:124600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.917  [2024-12-13 19:11:11.018761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.917  [2024-12-13 19:11:11.018779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:124608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.917  [2024-12-13 19:11:11.018795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.917  [2024-12-13 19:11:11.018813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:124616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.917  [2024-12-13 19:11:11.018829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.917  [2024-12-13 19:11:11.018846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:124624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.917  [2024-12-13 19:11:11.018862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.917  [2024-12-13 19:11:11.018880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:124632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.917  [2024-12-13 19:11:11.018896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.917  [2024-12-13 19:11:11.018914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:124640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.917  [2024-12-13 19:11:11.018929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.917  [2024-12-13 19:11:11.018947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:124648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.917  [2024-12-13 19:11:11.018962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.917  [2024-12-13 19:11:11.018980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:124656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.917  [2024-12-13 19:11:11.018996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.917  [2024-12-13 19:11:11.019014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:124664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.917  [2024-12-13 19:11:11.019038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.917  [2024-12-13 19:11:11.019057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:124672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.917  [2024-12-13 19:11:11.019073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.917  [2024-12-13 19:11:11.019090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:124680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.917  [2024-12-13 19:11:11.019116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.917  [2024-12-13 19:11:11.019135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:124688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.917  [2024-12-13 19:11:11.019151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.917  [2024-12-13 19:11:11.019169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:124856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.917  [2024-12-13 19:11:11.019194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.917  [2024-12-13 19:11:11.019212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:124864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.917  [2024-12-13 19:11:11.019243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.917  [2024-12-13 19:11:11.019263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:124872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.917  [2024-12-13 19:11:11.019279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.917  [2024-12-13 19:11:11.019297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:124880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.917  [2024-12-13 19:11:11.019313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.917  [2024-12-13 19:11:11.019330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:124888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.917  [2024-12-13 19:11:11.019346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.917  [2024-12-13 19:11:11.019363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:124896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.917  [2024-12-13 19:11:11.019380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.917  [2024-12-13 19:11:11.019399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:124904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.917  [2024-12-13 19:11:11.019415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.917  [2024-12-13 19:11:11.019433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:124912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.917  [2024-12-13 19:11:11.019448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.917  [2024-12-13 19:11:11.019466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:124920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.917  [2024-12-13 19:11:11.019482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.917  [2024-12-13 19:11:11.019517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:124928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.917  [2024-12-13 19:11:11.019534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.917  [2024-12-13 19:11:11.019552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:124936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.917  [2024-12-13 19:11:11.019568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.917  [2024-12-13 19:11:11.019586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:124944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.917  [2024-12-13 19:11:11.019602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.917  [2024-12-13 19:11:11.019619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:124952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.917  [2024-12-13 19:11:11.019635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.917  [2024-12-13 19:11:11.019653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:124960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.917  [2024-12-13 19:11:11.019669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.917  [2024-12-13 19:11:11.019687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:124968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.917  [2024-12-13 19:11:11.019708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.917  [2024-12-13 19:11:11.019726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:124976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.917  [2024-12-13 19:11:11.019742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.917  [2024-12-13 19:11:11.019766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:124984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.917  [2024-12-13 19:11:11.019782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.917  [2024-12-13 19:11:11.019800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:124992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.917  [2024-12-13 19:11:11.019816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.917  [2024-12-13 19:11:11.019834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:125000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.917  [2024-12-13 19:11:11.019850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.917  [2024-12-13 19:11:11.019867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:125008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.917  [2024-12-13 19:11:11.019883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.917  [2024-12-13 19:11:11.019901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:125016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.918  [2024-12-13 19:11:11.019917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.918  [2024-12-13 19:11:11.019943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:125024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.918  [2024-12-13 19:11:11.019969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.918  [2024-12-13 19:11:11.019988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:125032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.918  [2024-12-13 19:11:11.020004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.918  [2024-12-13 19:11:11.020021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:125040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.918  [2024-12-13 19:11:11.020037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.918  [2024-12-13 19:11:11.020055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:125048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.918  [2024-12-13 19:11:11.020071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.918  [2024-12-13 19:11:11.020089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:125056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.918  [2024-12-13 19:11:11.020105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.918  [2024-12-13 19:11:11.020123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:125064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.918  [2024-12-13 19:11:11.020139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.918  [2024-12-13 19:11:11.020156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:125072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.918  [2024-12-13 19:11:11.020172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.918  [2024-12-13 19:11:11.020192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:125080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.918  [2024-12-13 19:11:11.020208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.918  [2024-12-13 19:11:11.020254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:125088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.918  [2024-12-13 19:11:11.020272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.918  [2024-12-13 19:11:11.020290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:125096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.918  [2024-12-13 19:11:11.020312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.918  [2024-12-13 19:11:11.020339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:125104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.918  [2024-12-13 19:11:11.020355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.918  [2024-12-13 19:11:11.020392] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.918  [2024-12-13 19:11:11.020412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125112 len:8 PRP1 0x0 PRP2 0x0
00:26:45.918  [2024-12-13 19:11:11.020429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.918  [2024-12-13 19:11:11.020449] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:45.918  [2024-12-13 19:11:11.020464] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:45.918  [2024-12-13 19:11:11.020487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125120 len:8 PRP1 0x0 PRP2 0x0
00:26:45.918  [2024-12-13 19:11:11.020504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.918  [2024-12-13 19:11:11.020568] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.3:4422 to 10.0.0.3:4420
00:26:45.918  [2024-12-13 19:11:11.020590] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:26:45.918  [2024-12-13 19:11:11.024410] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:26:45.918  [2024-12-13 19:11:11.024457] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65d670 (9): Bad file descriptor
00:26:45.918  [2024-12-13 19:11:11.051922] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.
00:26:45.918       9937.90 IOPS,    38.82 MiB/s
[2024-12-13T19:11:17.742Z]      9762.18 IOPS,    38.13 MiB/s
[2024-12-13T19:11:17.742Z]      9672.42 IOPS,    37.78 MiB/s
[2024-12-13T19:11:17.742Z]      9730.23 IOPS,    38.01 MiB/s
[2024-12-13T19:11:17.742Z]      9784.29 IOPS,    38.22 MiB/s
[2024-12-13T19:11:17.742Z]      9834.93 IOPS,    38.42 MiB/s
00:26:45.918                                                                                                  Latency(us)
00:26:45.918  
[2024-12-13T19:11:17.742Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:26:45.918  Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:26:45.918  	 Verification LBA range: start 0x0 length 0x4000
00:26:45.918  	 NVMe0n1             :      15.01    9835.88      38.42     227.96     0.00   12689.88     573.44   50760.61
00:26:45.918  
[2024-12-13T19:11:17.742Z]  ===================================================================================================================
00:26:45.918  
[2024-12-13T19:11:17.742Z]  Total                       :               9835.88      38.42     227.96     0.00   12689.88     573.44   50760.61
00:26:45.918  Received shutdown signal, test time was about 15.000000 seconds
00:26:45.918  
00:26:45.918                                                                                                  Latency(us)
00:26:45.918  
[2024-12-13T19:11:17.742Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:26:45.918  
[2024-12-13T19:11:17.742Z]  ===================================================================================================================
00:26:45.918  
[2024-12-13T19:11:17.742Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:26:45.918    19:11:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:26:45.918   19:11:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:26:45.918   19:11:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
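The checks above are the pass/fail gate of the failover test: it counts 'Resetting controller successful' notices and requires exactly three. A minimal sketch of that check, assuming the input is the try.txt capture that the test cats later at host/failover.sh@94 (the actual grep input is not visible in the xtrace):

  # Count successful controller resets in the captured bdevperf log and fail unless there were three.
  count=$(grep -c 'Resetting controller successful' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt)
  (( count == 3 )) || exit 1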
00:26:45.918   19:11:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:26:45.918   19:11:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=110706
00:26:45.918   19:11:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 110706 /var/tmp/bdevperf.sock
00:26:45.918   19:11:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 110706 ']'
00:26:45.918   19:11:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:26:45.918   19:11:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:26:45.918   19:11:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:26:45.918  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:26:45.918   19:11:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:26:45.918   19:11:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:26:45.918   19:11:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:45.918   19:11:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:26:45.918   19:11:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
00:26:45.918  [2024-12-13 19:11:17.530344] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 ***
00:26:45.918   19:11:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422
00:26:46.177  [2024-12-13 19:11:17.826534] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 ***
00:26:46.177   19:11:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:26:46.436  NVMe0n1
00:26:46.436   19:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:26:46.694  
00:26:46.694   19:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:26:46.953  
00:26:46.953   19:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:26:46.953   19:11:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:26:47.211   19:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:26:47.470   19:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:26:50.757   19:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:26:50.757   19:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
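For reference, the multipath setup exercised above condenses to the sketch below. The socket path, target address, ports and NQN are copied from the trace; only the loop and its layout are illustrative. The -x failover flag attaches the extra paths as failover targets of the same NVMe0 controller rather than creating separate controllers, which is why bdev_nvme_detach_controller on one port (host/failover.sh@84) leaves the bdev usable.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock
  # Attach the same subsystem over all three listeners as failover paths of one controller.
  for port in 4420 4421 4422; do
      $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s $port \
          -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  done
  # Confirm the controller shows up before exercising failover.
  $rpc -s $sock bdev_nvme_get_controllers | grep -q NVMe0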
00:26:50.757   19:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=110830
00:26:50.757   19:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:26:50.757   19:11:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 110830
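The run_test_pid and wait lines above launch bdevperf's perform_tests helper and block until it finishes; a condensed sketch of that pattern (paths taken from the trace; the backgrounding is implied by the pid capture rather than shown in the xtrace):

  # Launch the I/O run in the background, keep its pid, and block until it completes.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
  run_test_pid=$!
  wait $run_test_pid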
00:26:52.133  {
00:26:52.133    "results": [
00:26:52.133      {
00:26:52.133        "job": "NVMe0n1",
00:26:52.133        "core_mask": "0x1",
00:26:52.133        "workload": "verify",
00:26:52.133        "status": "finished",
00:26:52.133        "verify_range": {
00:26:52.133          "start": 0,
00:26:52.133          "length": 16384
00:26:52.133        },
00:26:52.133        "queue_depth": 128,
00:26:52.133        "io_size": 4096,
00:26:52.133        "runtime": 1.006437,
00:26:52.133        "iops": 10488.485618076442,
00:26:52.133        "mibps": 40.9706469456111,
00:26:52.133        "io_failed": 0,
00:26:52.133        "io_timeout": 0,
00:26:52.133        "avg_latency_us": 12142.017298563505,
00:26:52.133        "min_latency_us": 1891.6072727272726,
00:26:52.133        "max_latency_us": 15609.483636363637
00:26:52.133      }
00:26:52.133    ],
00:26:52.133    "core_count": 1
00:26:52.133  }
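The JSON blob above is the per-job result that perform_tests returns. If it were saved to a file, the headline numbers could be pulled out with jq; results.json below is a hypothetical file name, while the field names come from the output above.

  # Extract throughput and latency figures from a saved bdevperf result.
  jq '.results[0] | {job, iops, mibps, avg_latency_us}' results.json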
00:26:52.133   19:11:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:26:52.133  [2024-12-13 19:11:16.964248] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:26:52.133  [2024-12-13 19:11:16.964381] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110706 ]
00:26:52.133  [2024-12-13 19:11:17.104917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:52.133  [2024-12-13 19:11:17.139367] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:26:52.133  [2024-12-13 19:11:19.244484] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421
00:26:52.133  [2024-12-13 19:11:19.244682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:52.133  [2024-12-13 19:11:19.244711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:52.133  [2024-12-13 19:11:19.244731] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:52.134  [2024-12-13 19:11:19.244747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:52.134  [2024-12-13 19:11:19.244763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:52.134  [2024-12-13 19:11:19.244778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:52.134  [2024-12-13 19:11:19.244794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:52.134  [2024-12-13 19:11:19.244809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:52.134  [2024-12-13 19:11:19.244824] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state.
00:26:52.134  [2024-12-13 19:11:19.244892] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller
00:26:52.134  [2024-12-13 19:11:19.244929] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e4670 (9): Bad file descriptor
00:26:52.134  [2024-12-13 19:11:19.256483] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful.
00:26:52.134  Running I/O for 1 seconds...
00:26:52.134      10428.00 IOPS,    40.73 MiB/s
00:26:52.134                                                                                                  Latency(us)
00:26:52.134  
[2024-12-13T19:11:23.958Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:26:52.134  Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:26:52.134  	 Verification LBA range: start 0x0 length 0x4000
00:26:52.134  	 NVMe0n1             :       1.01   10488.49      40.97       0.00     0.00   12142.02    1891.61   15609.48
00:26:52.134  
[2024-12-13T19:11:23.958Z]  ===================================================================================================================
00:26:52.134  
[2024-12-13T19:11:23.958Z]  Total                       :              10488.49      40.97       0.00     0.00   12142.02    1891.61   15609.48
00:26:52.134   19:11:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:26:52.134   19:11:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:26:52.134   19:11:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:26:52.392   19:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:26:52.392   19:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:26:52.987   19:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:26:52.987   19:11:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:26:56.271   19:11:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:26:56.271   19:11:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:26:56.530   19:11:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 110706
00:26:56.530   19:11:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 110706 ']'
00:26:56.530   19:11:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 110706
00:26:56.530    19:11:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:26:56.530   19:11:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:56.530    19:11:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 110706
00:26:56.530   19:11:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:26:56.530   19:11:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:26:56.530  killing process with pid 110706
00:26:56.530   19:11:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 110706'
00:26:56.530   19:11:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 110706
00:26:56.530   19:11:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 110706
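The killprocess/wait sequence traced above (liveness probe, command-name inspection, kill, then reap) reduces to roughly the sketch below; the real helper in autotest_common.sh has more branching, and the pid here is simply the bdevperf pid from this run.

  pid=110706
  if kill -0 "$pid" 2>/dev/null; then                   # still alive?
      name=$(ps --no-headers -o comm= "$pid")           # the helper inspects the command name before killing
      echo "killing process with pid $pid ($name)"
      kill "$pid"
  fi
  wait "$pid" 2>/dev/null || true                       # reap it so later steps see it gone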
00:26:56.530   19:11:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync
00:26:56.789   19:11:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:26:57.048   19:11:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:26:57.048   19:11:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:26:57.048   19:11:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini
00:26:57.048   19:11:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup
00:26:57.048   19:11:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync
00:26:57.048   19:11:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:26:57.048   19:11:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e
00:26:57.048   19:11:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20}
00:26:57.048   19:11:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:26:57.048  rmmod nvme_tcp
00:26:57.048  rmmod nvme_fabrics
00:26:57.048  rmmod nvme_keyring
00:26:57.048   19:11:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:26:57.048   19:11:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e
00:26:57.048   19:11:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0
00:26:57.048   19:11:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 110349 ']'
00:26:57.048   19:11:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 110349
00:26:57.048   19:11:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 110349 ']'
00:26:57.048   19:11:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 110349
00:26:57.048    19:11:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:26:57.048   19:11:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:57.048    19:11:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 110349
00:26:57.048   19:11:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:26:57.048   19:11:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:26:57.048  killing process with pid 110349
00:26:57.048   19:11:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 110349'
00:26:57.048   19:11:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 110349
00:26:57.048   19:11:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 110349
00:26:57.307   19:11:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:26:57.307   19:11:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:26:57.307   19:11:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:26:57.307   19:11:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr
00:26:57.307   19:11:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save
00:26:57.307   19:11:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:26:57.307   19:11:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore
00:26:57.307   19:11:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:26:57.307   19:11:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:26:57.307   19:11:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:26:57.307   19:11:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:26:57.307   19:11:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:26:57.307   19:11:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:26:57.307   19:11:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:26:57.307   19:11:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:26:57.307   19:11:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:26:57.307   19:11:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:26:57.307   19:11:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:26:57.307   19:11:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:26:57.307   19:11:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:26:57.566   19:11:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:26:57.566   19:11:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:26:57.566   19:11:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns
00:26:57.566   19:11:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:57.566   19:11:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:57.566    19:11:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:57.566   19:11:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0
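nvmftestfini's TCP cleanup, traced above, strips the SPDK iptables rules and then tears down the veth/bridge test topology and the target network namespace. Condensed into a sketch with the interface and namespace names exactly as they appear in the trace:

  # Reload the firewall rules minus anything tagged SPDK_NVMF.
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  # Detach the veth endpoints from the bridge and bring them down.
  for ifc in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$ifc" nomaster
      ip link set "$ifc" down
  done
  ip link delete nvmf_br type bridge                          # the bridge itself
  ip link delete nvmf_init_if                                 # host-side initiator interfaces
  ip link delete nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if   # target-side interfaces live in the netns
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
  # remove_spdk_ns (called right after) deletes the nvmf_tgt_ns_spdk namespace itself.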
00:26:57.566  ************************************
00:26:57.566  END TEST nvmf_failover
00:26:57.566  ************************************
00:26:57.566  
00:26:57.566  real	0m33.059s
00:26:57.566  user	2m7.861s
00:26:57.566  sys	0m4.724s
00:26:57.566   19:11:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable
00:26:57.566   19:11:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:26:57.566   19:11:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:26:57.566   19:11:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:26:57.566   19:11:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:26:57.566   19:11:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:26:57.566  ************************************
00:26:57.566  START TEST nvmf_host_discovery
00:26:57.566  ************************************
00:26:57.566   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:26:57.566  * Looking for test storage...
00:26:57.566  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
00:26:57.566    19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:26:57.566     19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version
00:26:57.566     19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:26:57.826    19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:26:57.826    19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:26:57.826    19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l
00:26:57.826    19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l
00:26:57.826    19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-:
00:26:57.826    19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1
00:26:57.826    19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-:
00:26:57.826    19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2
00:26:57.826    19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<'
00:26:57.826    19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2
00:26:57.826    19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1
00:26:57.826    19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:26:57.826    19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in
00:26:57.826    19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1
00:26:57.826    19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 ))
00:26:57.826    19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:26:57.826     19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1
00:26:57.826     19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1
00:26:57.826     19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:26:57.826     19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1
00:26:57.826    19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1
00:26:57.826     19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2
00:26:57.826     19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2
00:26:57.826     19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:26:57.826     19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2
00:26:57.826    19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2
00:26:57.826    19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:26:57.826    19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:26:57.826    19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0
00:26:57.826    19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:26:57.826    19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:26:57.826  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:57.826  		--rc genhtml_branch_coverage=1
00:26:57.826  		--rc genhtml_function_coverage=1
00:26:57.826  		--rc genhtml_legend=1
00:26:57.826  		--rc geninfo_all_blocks=1
00:26:57.826  		--rc geninfo_unexecuted_blocks=1
00:26:57.826  		
00:26:57.826  		'
00:26:57.826    19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:26:57.826  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:57.826  		--rc genhtml_branch_coverage=1
00:26:57.826  		--rc genhtml_function_coverage=1
00:26:57.826  		--rc genhtml_legend=1
00:26:57.826  		--rc geninfo_all_blocks=1
00:26:57.826  		--rc geninfo_unexecuted_blocks=1
00:26:57.826  		
00:26:57.826  		'
00:26:57.826    19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:26:57.826  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:57.826  		--rc genhtml_branch_coverage=1
00:26:57.826  		--rc genhtml_function_coverage=1
00:26:57.826  		--rc genhtml_legend=1
00:26:57.826  		--rc geninfo_all_blocks=1
00:26:57.826  		--rc geninfo_unexecuted_blocks=1
00:26:57.826  		
00:26:57.826  		'
00:26:57.826    19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:26:57.826  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:57.826  		--rc genhtml_branch_coverage=1
00:26:57.826  		--rc genhtml_function_coverage=1
00:26:57.826  		--rc genhtml_legend=1
00:26:57.826  		--rc geninfo_all_blocks=1
00:26:57.826  		--rc geninfo_unexecuted_blocks=1
00:26:57.826  		
00:26:57.826  		'
00:26:57.826   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:26:57.826     19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s
00:26:57.826    19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:26:57.826    19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:26:57.826    19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:26:57.826    19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:26:57.826    19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:26:57.826    19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:26:57.826    19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:26:57.826    19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:26:57.826    19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:26:57.826     19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:26:57.826    19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:26:57.826    19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:26:57.826    19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:26:57.826    19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:26:57.826    19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:26:57.826    19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:26:57.826    19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:26:57.826     19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob
00:26:57.826     19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:26:57.826     19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:26:57.826     19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:26:57.826      19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:57.826      19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:57.827      19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:57.827      19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH
00:26:57.827      19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:57.827    19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0
00:26:57.827    19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:26:57.827    19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:26:57.827    19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:26:57.827    19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:26:57.827    19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:26:57.827    19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:26:57.827  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:26:57.827    19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:26:57.827    19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:26:57.827    19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0
00:26:57.827   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']'
00:26:57.827   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009
00:26:57.827   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery
00:26:57.827   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode
00:26:57.827   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test
00:26:57.827   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock
00:26:57.827   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit
00:26:57.827   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:26:57.827   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:26:57.827   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs
00:26:57.827   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no
00:26:57.827   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns
00:26:57.827   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:57.827   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:57.827    19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:57.827   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:26:57.827   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:26:57.827   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:26:57.827   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:26:57.827   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:26:57.827   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init
00:26:57.827   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:26:57.827   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:26:57.827   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:26:57.827   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:26:57.827   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:26:57.827   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:26:57.827   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:26:57.827   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:26:57.827   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:26:57.827   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:26:57.827   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:26:57.827   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:26:57.827   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:26:57.827   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:26:57.827   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:26:57.827   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:26:57.827   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:26:57.827  Cannot find device "nvmf_init_br"
00:26:57.827   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true
00:26:57.827   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:26:57.827  Cannot find device "nvmf_init_br2"
00:26:57.827   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true
00:26:57.827   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:26:57.827  Cannot find device "nvmf_tgt_br"
00:26:57.827   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true
00:26:57.827   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:26:57.827  Cannot find device "nvmf_tgt_br2"
00:26:57.827   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true
00:26:57.827   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:26:57.827  Cannot find device "nvmf_init_br"
00:26:57.827   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true
00:26:57.827   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:26:57.827  Cannot find device "nvmf_init_br2"
00:26:57.827   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true
00:26:57.827   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:26:57.827  Cannot find device "nvmf_tgt_br"
00:26:57.827   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true
00:26:57.827   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:26:57.827  Cannot find device "nvmf_tgt_br2"
00:26:57.827   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true
00:26:57.827   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:26:57.827  Cannot find device "nvmf_br"
00:26:57.827   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true
00:26:57.827   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:26:57.827  Cannot find device "nvmf_init_if"
00:26:57.827   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true
00:26:57.827   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:26:57.827  Cannot find device "nvmf_init_if2"
00:26:57.827   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true
00:26:57.827   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:26:57.827  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:26:57.827   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true
00:26:57.827   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:26:57.827  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:26:57.827   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true
00:26:57.827   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:26:57.827   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:26:57.827   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:26:57.827   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:26:57.827   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:26:57.827   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:26:58.087   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:26:58.087   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:26:58.087   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:26:58.087   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:26:58.087   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:26:58.087   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:26:58.087   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:26:58.087   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:26:58.087   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:26:58.087   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:26:58.087   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:26:58.087   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:26:58.087   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:26:58.087   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:26:58.087   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:26:58.087   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:26:58.087   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:26:58.087   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:26:58.087   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:26:58.087   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:26:58.087   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:26:58.087   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:26:58.087   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:26:58.087   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:26:58.087   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:26:58.087   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:26:58.087   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:26:58.087  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:26:58.087  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms
00:26:58.087  
00:26:58.087  --- 10.0.0.3 ping statistics ---
00:26:58.087  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:58.087  rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms
00:26:58.087   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:26:58.087  PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:26:58.087  64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms
00:26:58.087  
00:26:58.087  --- 10.0.0.4 ping statistics ---
00:26:58.087  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:58.087  rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms
00:26:58.087   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:26:58.087  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:26:58.087  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms
00:26:58.087  
00:26:58.087  --- 10.0.0.1 ping statistics ---
00:26:58.087  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:58.087  rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms
00:26:58.087   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:26:58.087  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:26:58.087  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms
00:26:58.087  
00:26:58.087  --- 10.0.0.2 ping statistics ---
00:26:58.087  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:58.087  rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms
00:26:58.087   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:26:58.087   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@461 -- # return 0
00:26:58.087   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:26:58.087   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:26:58.087   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:26:58.087   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:26:58.087   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:26:58.087   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:26:58.087   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:26:58.087   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2
00:26:58.087   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:26:58.087   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable
00:26:58.087   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:58.087   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=111189
00:26:58.087   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 111189
00:26:58.087   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:26:58.088   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 111189 ']'
00:26:58.088   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:58.088   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100
00:26:58.088  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:58.088   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:58.088   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:26:58.088   19:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:58.346  [2024-12-13 19:11:29.917182] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:26:58.346  [2024-12-13 19:11:29.917295] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:58.346  [2024-12-13 19:11:30.060088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:58.346  [2024-12-13 19:11:30.095324] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:26:58.346  [2024-12-13 19:11:30.095399] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:58.346  [2024-12-13 19:11:30.095426] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:26:58.346  [2024-12-13 19:11:30.095434] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:26:58.346  [2024-12-13 19:11:30.095441] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:26:58.346  [2024-12-13 19:11:30.095836] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:26:58.604   19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:58.604   19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0
00:26:58.604   19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:26:58.604   19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable
00:26:58.604   19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:58.604   19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:26:58.604   19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:26:58.604   19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:58.604   19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:58.605  [2024-12-13 19:11:30.260140] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:26:58.605   19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:58.605   19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009
00:26:58.605   19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:58.605   19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:58.605  [2024-12-13 19:11:30.268329] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 ***
00:26:58.605   19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:58.605   19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512
00:26:58.605   19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:58.605   19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:58.605  null0
00:26:58.605   19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:58.605   19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512
00:26:58.605   19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:58.605   19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:58.605  null1
00:26:58.605   19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:58.605   19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine
00:26:58.605   19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:58.605   19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:58.605   19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:58.605   19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=111226
00:26:58.605   19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock
00:26:58.605   19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 111226 /tmp/host.sock
00:26:58.605   19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 111226 ']'
00:26:58.605   19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock
00:26:58.605   19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100
00:26:58.605  Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...
00:26:58.605   19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...'
00:26:58.605   19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:26:58.605   19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:58.605  [2024-12-13 19:11:30.365648] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:26:58.605  [2024-12-13 19:11:30.365789] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111226 ]
00:26:58.863  [2024-12-13 19:11:30.517837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:58.863  [2024-12-13 19:11:30.558728] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:26:58.863   19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:58.863   19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0
00:26:58.863   19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:26:58.863   19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme
00:26:58.863   19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:58.863   19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:59.122   19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:59.122   19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
00:26:59.122   19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:59.122   19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:59.122   19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:59.122   19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0
00:26:59.122    19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names
00:26:59.122    19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:26:59.122    19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:26:59.122    19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:59.122    19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:26:59.122    19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:59.122    19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:26:59.122    19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:59.122   19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]]
00:26:59.122    19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list
00:26:59.122    19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:59.122    19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:59.122    19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:59.122    19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:26:59.122    19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:26:59.122    19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:26:59.122    19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:59.122   19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]]
00:26:59.122   19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
00:26:59.122   19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:59.122   19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:59.122   19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:59.122    19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names
00:26:59.122    19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:26:59.122    19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:59.122    19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:59.122    19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:26:59.122    19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:26:59.122    19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:26:59.122    19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:59.123   19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]]
00:26:59.123    19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list
00:26:59.123    19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:59.123    19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:26:59.123    19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:59.123    19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:59.123    19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:26:59.123    19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:26:59.123    19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:59.123   19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]]
00:26:59.123   19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
00:26:59.123   19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:59.123   19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:59.382   19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:59.382    19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names
00:26:59.382    19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:26:59.382    19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:26:59.382    19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:59.382    19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:26:59.382    19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:59.382    19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:26:59.382    19:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:59.382   19:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]]
00:26:59.382    19:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list
00:26:59.382    19:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:59.382    19:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:59.382    19:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:59.382    19:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:26:59.382    19:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:26:59.382    19:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:26:59.382    19:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:59.382   19:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]]
00:26:59.382   19:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
00:26:59.382   19:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:59.382   19:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:59.382  [2024-12-13 19:11:31.064457] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:26:59.382   19:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:59.382    19:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names
00:26:59.382    19:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:26:59.382    19:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:26:59.382    19:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:59.382    19:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:26:59.382    19:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:59.382    19:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:26:59.382    19:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:59.382   19:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]]
00:26:59.382    19:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list
00:26:59.382    19:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:59.382    19:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:59.382    19:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:59.382    19:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:26:59.382    19:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:26:59.382    19:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:26:59.382    19:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:59.382   19:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]]
00:26:59.382   19:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0
00:26:59.382   19:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:26:59.382   19:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:26:59.382   19:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:26:59.382   19:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:26:59.382   19:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:26:59.382   19:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:26:59.382    19:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:26:59.382     19:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:26:59.382     19:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:26:59.382     19:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:59.382     19:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:59.382     19:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:59.641    19:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:26:59.641    19:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0
00:26:59.641    19:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:26:59.641   19:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:26:59.641   19:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
00:26:59.641   19:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:59.641   19:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:59.641   19:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:59.641   19:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:26:59.641   19:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:26:59.641   19:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:26:59.641   19:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:26:59.641   19:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:26:59.641     19:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:26:59.641     19:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:26:59.641     19:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:26:59.641     19:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:59.641     19:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:26:59.641     19:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:59.641     19:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:26:59.641     19:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:59.641    19:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]]
00:26:59.641   19:11:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1
00:26:59.900  [2024-12-13 19:11:31.703024] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached
00:26:59.900  [2024-12-13 19:11:31.703072] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected
00:26:59.900  [2024-12-13 19:11:31.703091] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command
00:27:00.158  [2024-12-13 19:11:31.789124] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0
00:27:00.158  [2024-12-13 19:11:31.843458] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420
00:27:00.158  [2024-12-13 19:11:31.844284] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x18fabe0:1 started.
00:27:00.158  [2024-12-13 19:11:31.846111] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done
00:27:00.158  [2024-12-13 19:11:31.846138] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again
00:27:00.158  [2024-12-13 19:11:31.851513] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x18fabe0 was disconnected and freed. delete nvme_qpair.
00:27:00.726   19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:27:00.726   19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:27:00.726     19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:27:00.726     19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:27:00.726     19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:27:00.726     19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:27:00.726     19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:27:00.726     19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:00.726     19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:00.726     19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:00.726    19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:00.726   19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:27:00.726   19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:27:00.726   19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:27:00.726   19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:27:00.726   19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:27:00.726   19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]'
00:27:00.726     19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:27:00.726     19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:27:00.726     19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:00.726     19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:00.726     19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:27:00.726     19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:27:00.726     19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:27:00.726     19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:00.726    19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]]
00:27:00.726   19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:27:00.726   19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:27:00.726   19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:27:00.726   19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:27:00.726   19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:27:00.726   19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]'
00:27:00.726     19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0
00:27:00.726     19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:27:00.726     19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:27:00.726     19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:27:00.726     19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:00.726     19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:00.726     19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:27:00.726     19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:00.726    19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]]
00:27:00.726   19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
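get_subsystem_paths (host/discovery.sh@63) works the same way but reports the TCP service IDs of every path the named controller currently has, which is how the test sees 4420 alone here and, once the second listener is added, 4420 4421. A sketch based on the trace:

    # Reconstruction of get_subsystem_paths from the trace: print the
    # trsvcid (TCP port) of each path of the given controller, sorted
    # numerically and flattened onto one line.
    get_subsystem_paths() {
        local ctrlr=$1     # e.g. nvme0
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$ctrlr" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' \
            | sort -n \
            | xargs
    }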
00:27:00.726   19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1
00:27:00.726   19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1
00:27:00.726   19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:27:00.726   19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:27:00.726   19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:27:00.726   19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:27:00.726   19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:27:00.726    19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:27:00.726     19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:27:00.726     19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:27:00.726     19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:00.726     19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:00.727     19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:00.727    19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1
00:27:00.727    19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1
00:27:00.727    19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:27:00.727   19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
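The is_notification_count_eq steps (discovery.sh@79-80) combine two helpers visible in the trace: get_notification_count asks the host app for all notifications newer than the last recorded notify_id and stores how many arrived, and the wrapper then waits until that count equals the expected value. Reconstructed from the notification_count= and notify_id= assignments above (a sketch; the variable names are taken from the trace):

    # Hedged reconstruction of the notification helpers. notify_id remembers
    # how many notifications have already been accounted for, so each call
    # only counts the events generated since the previous check.
    get_notification_count() {
        notification_count=$(rpc_cmd -s /tmp/host.sock \
            notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }

    is_notification_count_eq() {
        local expected_count=$1
        waitforcondition 'get_notification_count && ((notification_count == expected_count))'
    }

The check just above passes because attaching the first namespace produced exactly one bdev notification: notification_count=1 and notify_id moves from 0 to 1.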
00:27:00.727   19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1
00:27:00.727   19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:00.727   19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:00.727  [2024-12-13 19:11:32.524786] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x18fb0c0:1 started.
00:27:00.727   19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:00.727   19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:27:00.727   19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:27:00.727   19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:27:00.727   19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:27:00.727   19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:27:00.727     19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:27:00.727     19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:27:00.727     19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:27:00.727     19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:27:00.727     19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:00.727     19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:00.727  [2024-12-13 19:11:32.531761] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x18fb0c0 was disconnected and freed. delete nvme_qpair.
00:27:00.727     19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:27:00.986     19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:00.986    19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:27:00.986   19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:27:00.986   19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1
00:27:00.986   19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1
00:27:00.986   19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:27:00.986   19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:27:00.986   19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:27:00.986   19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:27:00.986   19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:27:00.986    19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:27:00.986     19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1
00:27:00.986     19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:27:00.986     19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:00.986     19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:00.986     19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:00.986    19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1
00:27:00.986    19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:27:00.986    19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:27:00.986   19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:27:00.986   19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421
00:27:00.986   19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:00.986   19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:00.986  [2024-12-13 19:11:32.641356] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 ***
00:27:00.986  [2024-12-13 19:11:32.642437] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer
00:27:00.986  [2024-12-13 19:11:32.642470] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command
00:27:00.986   19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
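The two target-side mutations in this stretch of the test, adding null1 as a second namespace of nqn.2016-06.io.spdk:cnode0 (@111 above) and adding a listener on 10.0.0.3:4421 (@118), are issued through rpc_cmd. In SPDK's autotest environment rpc_cmd is a thin wrapper around scripts/rpc.py, so equivalent standalone invocations would plausibly look like the following (the wrapper assumption and the script path are not shown in the trace itself):

    # Hypothetical standalone equivalents of the rpc_cmd calls above,
    # assuming rpc_cmd wraps SPDK's scripts/rpc.py and talks to the
    # target's default RPC socket when no -s option is given.
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.3 -s 4421

Each mutation raises an AER on the discovery controller, which is why the "got aer" and "sent discovery log page command" lines appear right after the listener is added and a new path for nvme0 on 4421 shows up shortly afterwards.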
00:27:00.986   19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:27:00.986   19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:27:00.986   19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:27:00.986   19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:27:00.986   19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:27:00.986     19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:27:00.986     19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:27:00.986     19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:27:00.986     19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:00.986     19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:00.986     19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:27:00.986     19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:27:00.986     19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:00.986    19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:00.986   19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:27:00.986   19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:27:00.986   19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:27:00.986   19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:27:00.986   19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:27:00.986   19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:27:00.986     19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:27:00.986     19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:27:00.986     19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:27:00.986     19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:27:00.986     19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:27:00.986     19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:00.986     19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:00.986  [2024-12-13 19:11:32.729042] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0
00:27:00.987     19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:00.987    19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:27:00.987   19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:27:00.987   19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
00:27:00.987   19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
00:27:00.987   19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:27:00.987   19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:27:00.987   19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]'
00:27:00.987     19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0
00:27:00.987     19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:27:00.987     19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:27:00.987     19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:27:00.987     19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:00.987     19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:27:00.987     19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:00.987     19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:00.987  [2024-12-13 19:11:32.787411] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421
00:27:00.987  [2024-12-13 19:11:32.787476] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done
00:27:00.987  [2024-12-13 19:11:32.787487] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again
00:27:00.987  [2024-12-13 19:11:32.787492] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again
00:27:01.245    19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]]
00:27:01.246   19:11:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1
00:27:02.182   19:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:27:02.182   19:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]'
00:27:02.182     19:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0
00:27:02.182     19:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:27:02.182     19:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:02.182     19:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:27:02.182     19:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:02.182     19:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:27:02.182     19:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:27:02.182     19:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:02.182    19:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]]
00:27:02.182   19:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:27:02.182   19:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0
00:27:02.182   19:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:27:02.182   19:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:27:02.182   19:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:27:02.182   19:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:27:02.182   19:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:27:02.182   19:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:27:02.182    19:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:27:02.182     19:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:27:02.182     19:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:02.182     19:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:02.182     19:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:27:02.182     19:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:02.182    19:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:27:02.182    19:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:27:02.182    19:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:27:02.182   19:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:27:02.182   19:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
00:27:02.182   19:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:02.182   19:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:02.182  [2024-12-13 19:11:33.930536] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer
00:27:02.182  [2024-12-13 19:11:33.930620] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command
00:27:02.182  [2024-12-13 19:11:33.933768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:27:02.182  [2024-12-13 19:11:33.933824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.182  [2024-12-13 19:11:33.933854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:27:02.182  [2024-12-13 19:11:33.933865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.182  [2024-12-13 19:11:33.933875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:27:02.182  [2024-12-13 19:11:33.933885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.182  [2024-12-13 19:11:33.933895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:27:02.182  [2024-12-13 19:11:33.933916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.182  [2024-12-13 19:11:33.933926] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cb590 is same with the state(6) to be set
00:27:02.182   19:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:02.182   19:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:27:02.182   19:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:27:02.182   19:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:27:02.182   19:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:27:02.182   19:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:27:02.182     19:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:27:02.182     19:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:27:02.182     19:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:27:02.182     19:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:02.182     19:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:27:02.182     19:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:02.182     19:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:27:02.182  [2024-12-13 19:11:33.943705] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cb590 (9): Bad file descriptor
00:27:02.182  [2024-12-13 19:11:33.953746] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:27:02.182  [2024-12-13 19:11:33.953770] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:27:02.182  [2024-12-13 19:11:33.953778] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:27:02.182  [2024-12-13 19:11:33.953784] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:27:02.182  [2024-12-13 19:11:33.953815] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:27:02.182  [2024-12-13 19:11:33.953887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.182  [2024-12-13 19:11:33.953924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cb590 with addr=10.0.0.3, port=4420
00:27:02.182  [2024-12-13 19:11:33.953936] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cb590 is same with the state(6) to be set
00:27:02.182  [2024-12-13 19:11:33.953953] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cb590 (9): Bad file descriptor
00:27:02.182  [2024-12-13 19:11:33.953969] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:27:02.182  [2024-12-13 19:11:33.953978] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:27:02.182  [2024-12-13 19:11:33.953989] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:27:02.182  [2024-12-13 19:11:33.953998] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:27:02.182  [2024-12-13 19:11:33.954004] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:27:02.182  [2024-12-13 19:11:33.954010] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:27:02.182     19:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:02.182  [2024-12-13 19:11:33.963821] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:27:02.182  [2024-12-13 19:11:33.963860] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:27:02.182  [2024-12-13 19:11:33.963866] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:27:02.182  [2024-12-13 19:11:33.963871] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:27:02.182  [2024-12-13 19:11:33.963913] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:27:02.182  [2024-12-13 19:11:33.963963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.182  [2024-12-13 19:11:33.963982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cb590 with addr=10.0.0.3, port=4420
00:27:02.183  [2024-12-13 19:11:33.963992] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cb590 is same with the state(6) to be set
00:27:02.183  [2024-12-13 19:11:33.964007] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cb590 (9): Bad file descriptor
00:27:02.183  [2024-12-13 19:11:33.964020] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:27:02.183  [2024-12-13 19:11:33.964029] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:27:02.183  [2024-12-13 19:11:33.964053] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:27:02.183  [2024-12-13 19:11:33.964077] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:27:02.183  [2024-12-13 19:11:33.964082] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:27:02.183  [2024-12-13 19:11:33.964087] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:27:02.183  [2024-12-13 19:11:33.973923] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:27:02.183  [2024-12-13 19:11:33.973965] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:27:02.183  [2024-12-13 19:11:33.973971] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:27:02.183  [2024-12-13 19:11:33.973976] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:27:02.183  [2024-12-13 19:11:33.974020] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:27:02.183  [2024-12-13 19:11:33.974084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.183  [2024-12-13 19:11:33.974103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cb590 with addr=10.0.0.3, port=4420
00:27:02.183  [2024-12-13 19:11:33.974113] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cb590 is same with the state(6) to be set
00:27:02.183  [2024-12-13 19:11:33.974128] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cb590 (9): Bad file descriptor
00:27:02.183  [2024-12-13 19:11:33.974141] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:27:02.183  [2024-12-13 19:11:33.974149] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:27:02.183  [2024-12-13 19:11:33.974174] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:27:02.183  [2024-12-13 19:11:33.974197] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:27:02.183  [2024-12-13 19:11:33.974203] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:27:02.183  [2024-12-13 19:11:33.974207] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:27:02.183  [2024-12-13 19:11:33.984029] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:27:02.183  [2024-12-13 19:11:33.984071] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:27:02.183  [2024-12-13 19:11:33.984077] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:27:02.183  [2024-12-13 19:11:33.984097] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:27:02.183  [2024-12-13 19:11:33.984140] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:27:02.183  [2024-12-13 19:11:33.984189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.183  [2024-12-13 19:11:33.984208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cb590 with addr=10.0.0.3, port=4420
00:27:02.183  [2024-12-13 19:11:33.984218] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cb590 is same with the state(6) to be set
00:27:02.183  [2024-12-13 19:11:33.984233] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cb590 (9): Bad file descriptor
00:27:02.183  [2024-12-13 19:11:33.984259] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:27:02.183  [2024-12-13 19:11:33.984268] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:27:02.183  [2024-12-13 19:11:33.984276] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:27:02.183  [2024-12-13 19:11:33.984284] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:27:02.183  [2024-12-13 19:11:33.984290] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:27:02.183  [2024-12-13 19:11:33.984294] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:27:02.183    19:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:02.183   19:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:27:02.183   19:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:27:02.183   19:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:27:02.183   19:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:27:02.183   19:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:27:02.183   19:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:27:02.183  [2024-12-13 19:11:33.994148] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:27:02.183  [2024-12-13 19:11:33.994186] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:27:02.183  [2024-12-13 19:11:33.994192] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:27:02.183  [2024-12-13 19:11:33.994196] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:27:02.183  [2024-12-13 19:11:33.994261] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:27:02.183  [2024-12-13 19:11:33.994315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.183  [2024-12-13 19:11:33.994336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cb590 with addr=10.0.0.3, port=4420
00:27:02.183  [2024-12-13 19:11:33.994347] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cb590 is same with the state(6) to be set
00:27:02.183  [2024-12-13 19:11:33.994363] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cb590 (9): Bad file descriptor
00:27:02.183  [2024-12-13 19:11:33.994377] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:27:02.183  [2024-12-13 19:11:33.994386] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:27:02.183  [2024-12-13 19:11:33.994395] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:27:02.183  [2024-12-13 19:11:33.994404] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:27:02.183  [2024-12-13 19:11:33.994409] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:27:02.183  [2024-12-13 19:11:33.994414] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:27:02.183     19:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:27:02.183     19:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:27:02.183     19:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:27:02.183     19:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:27:02.183     19:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:27:02.183     19:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:02.183     19:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:02.442  [2024-12-13 19:11:34.004270] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:27:02.442  [2024-12-13 19:11:34.004322] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:27:02.442  [2024-12-13 19:11:34.004329] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:27:02.442  [2024-12-13 19:11:34.004335] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:27:02.442  [2024-12-13 19:11:34.004362] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:27:02.442  [2024-12-13 19:11:34.004416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.442  [2024-12-13 19:11:34.004437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cb590 with addr=10.0.0.3, port=4420
00:27:02.442  [2024-12-13 19:11:34.004448] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cb590 is same with the state(6) to be set
00:27:02.442  [2024-12-13 19:11:34.004464] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cb590 (9): Bad file descriptor
00:27:02.442  [2024-12-13 19:11:34.004478] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:27:02.442  [2024-12-13 19:11:34.004487] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:27:02.442  [2024-12-13 19:11:34.004496] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:27:02.442  [2024-12-13 19:11:34.004504] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:27:02.442  [2024-12-13 19:11:34.004510] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:27:02.442  [2024-12-13 19:11:34.004515] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:27:02.442  [2024-12-13 19:11:34.014372] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:27:02.442  [2024-12-13 19:11:34.014395] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:27:02.442  [2024-12-13 19:11:34.014402] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:27:02.442  [2024-12-13 19:11:34.014407] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:27:02.442  [2024-12-13 19:11:34.014450] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:27:02.442  [2024-12-13 19:11:34.014501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.442  [2024-12-13 19:11:34.014521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cb590 with addr=10.0.0.3, port=4420
00:27:02.442  [2024-12-13 19:11:34.014531] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cb590 is same with the state(6) to be set
00:27:02.442  [2024-12-13 19:11:34.014562] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cb590 (9): Bad file descriptor
00:27:02.442  [2024-12-13 19:11:34.014577] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:27:02.442  [2024-12-13 19:11:34.014600] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:27:02.442  [2024-12-13 19:11:34.014609] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:27:02.442  [2024-12-13 19:11:34.014618] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:27:02.442  [2024-12-13 19:11:34.014624] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:27:02.442  [2024-12-13 19:11:34.014629] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:27:02.442  [2024-12-13 19:11:34.016767] bdev_nvme.c:7303:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found
00:27:02.442  [2024-12-13 19:11:34.016811] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again
00:27:02.442     19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:02.442    19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:27:02.442   19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:27:02.442   19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'
00:27:02.442   19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'
00:27:02.442   19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:27:02.442   19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:27:02.442   19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]'
00:27:02.442     19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0
00:27:02.442     19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:27:02.442     19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:02.442     19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:02.442     19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:27:02.442     19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:27:02.442     19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:27:02.442     19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:02.442    19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]]
00:27:02.442   19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:27:02.442   19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0
00:27:02.442   19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:27:02.442   19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:27:02.442   19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:27:02.442   19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:27:02.442   19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:27:02.442   19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:27:02.442    19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:27:02.442     19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:27:02.442     19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:27:02.442     19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:02.442     19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:02.442     19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:02.442    19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:27:02.442    19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:27:02.442    19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:27:02.442   19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:27:02.442   19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme
00:27:02.442   19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:02.442   19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:02.442   19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:02.442   19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]'
00:27:02.442   19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]'
00:27:02.442   19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:27:02.442   19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:27:02.442   19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]'
00:27:02.442     19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:27:02.442     19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:27:02.442     19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:27:02.442     19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:02.442     19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:27:02.442     19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:27:02.442     19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:02.442     19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:02.442    19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]]
00:27:02.442   19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:27:02.442   19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]'
00:27:02.442   19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]'
00:27:02.442   19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:27:02.442   19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:27:02.442   19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]'
00:27:02.442     19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:27:02.442     19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:27:02.442     19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:27:02.443     19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:02.443     19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:27:02.443     19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:27:02.443     19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:02.443     19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:02.701    19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]]
00:27:02.701   19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:27:02.701   19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2
00:27:02.701   19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2
00:27:02.701   19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:27:02.701   19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:27:02.701   19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:27:02.701   19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:27:02.701   19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:27:02.701    19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:27:02.701     19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:27:02.701     19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:27:02.701     19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:02.701     19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:02.701     19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:02.701    19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2
00:27:02.701    19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4
00:27:02.701    19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:27:02.701   19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:27:02.701   19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:27:02.701   19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:02.701   19:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:03.644  [2024-12-13 19:11:35.343838] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached
00:27:03.644  [2024-12-13 19:11:35.343891] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected
00:27:03.644  [2024-12-13 19:11:35.343909] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command
00:27:03.644  [2024-12-13 19:11:35.429906] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0
00:27:03.905  [2024-12-13 19:11:35.488204] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421
00:27:03.905  [2024-12-13 19:11:35.488814] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x18e2f30:1 started.
00:27:03.905  [2024-12-13 19:11:35.490999] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done
00:27:03.905  [2024-12-13 19:11:35.491073] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again
00:27:03.905   19:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:03.905  [2024-12-13 19:11:35.492458] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x18e2f30 was disconnected and freed. delete nvme_qpair.
00:27:03.905   19:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:27:03.905   19:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0
00:27:03.905   19:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:27:03.905   19:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:27:03.905   19:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:27:03.905    19:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:27:03.905   19:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:27:03.905   19:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:27:03.905   19:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:03.905   19:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:03.905  2024/12/13 19:11:35 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.3 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists
00:27:03.905  request:
00:27:03.905  {
00:27:03.905  "method": "bdev_nvme_start_discovery",
00:27:03.905  "params": {
00:27:03.905  "name": "nvme",
00:27:03.905  "trtype": "tcp",
00:27:03.905  "traddr": "10.0.0.3",
00:27:03.905  "adrfam": "ipv4",
00:27:03.905  "trsvcid": "8009",
00:27:03.905  "hostnqn": "nqn.2021-12.io.spdk:test",
00:27:03.905  "wait_for_attach": true
00:27:03.905  }
00:27:03.905  }
00:27:03.905  Got JSON-RPC error response
00:27:03.905  GoRPCClient: error on JSON-RPC call
00:27:03.905   19:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:27:03.905   19:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1
00:27:03.905   19:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:27:03.905   19:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:27:03.905   19:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 ))
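(Editor's note, not part of the captured output.) The failing call above is a plain JSON-RPC 2.0 exchange over the application's Unix socket: because a discovery service named "nvme" is already running, the target answers with Code=-17 (File exists). A minimal, illustrative Python sketch of that exchange follows; the socket path /tmp/host.sock and the parameters are taken from the request printed above, while the request id and the one-shot send/receive framing are assumptions of the sketch rather than anything shown in the run.

    #!/usr/bin/env python3
    # Illustrative sketch only: re-issue the discovery request shown above as a
    # raw JSON-RPC 2.0 call over the SPDK application's Unix socket.
    import json
    import socket

    request = {
        "jsonrpc": "2.0",
        "id": 1,  # arbitrary id chosen for this sketch
        "method": "bdev_nvme_start_discovery",
        "params": {
            "name": "nvme",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "8009",
            "hostnqn": "nqn.2021-12.io.spdk:test",
            "wait_for_attach": True,
        },
    }

    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect("/tmp/host.sock")
        sock.sendall(json.dumps(request).encode())
        reply = json.loads(sock.recv(65536).decode())

    # With a discovery service named "nvme" already running, the reply carries
    # an error object (code -17, "File exists") instead of a result.
    print(reply.get("error", reply.get("result")))

The same pattern explains the two later failures as well: reusing the name for port 8009 fails with -17, and the 8010 attempt with attach_timeout_ms=3000 fails with -110 after its connect attempts are refused.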
00:27:03.905    19:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs
00:27:03.905    19:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:27:03.905    19:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:03.905    19:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:03.905    19:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name'
00:27:03.905    19:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort
00:27:03.905    19:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs
00:27:03.905    19:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:03.905   19:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]]
00:27:03.905    19:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list
00:27:03.905    19:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:27:03.905    19:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:03.905    19:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:03.905    19:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:27:03.905    19:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:27:03.905    19:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:27:03.905    19:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:03.905   19:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:27:03.905   19:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:27:03.905   19:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0
00:27:03.905   19:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:27:03.905   19:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:27:03.905   19:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:27:03.905    19:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:27:03.905   19:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:27:03.905   19:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:27:03.905   19:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:03.905   19:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:03.905  2024/12/13 19:11:35 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.3 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists
00:27:03.905  request:
00:27:03.905  {
00:27:03.905  "method": "bdev_nvme_start_discovery",
00:27:03.905  "params": {
00:27:03.905  "name": "nvme_second",
00:27:03.905  "trtype": "tcp",
00:27:03.905  "traddr": "10.0.0.3",
00:27:03.905  "adrfam": "ipv4",
00:27:03.905  "trsvcid": "8009",
00:27:03.905  "hostnqn": "nqn.2021-12.io.spdk:test",
00:27:03.905  "wait_for_attach": true
00:27:03.905  }
00:27:03.905  }
00:27:03.905  Got JSON-RPC error response
00:27:03.905  GoRPCClient: error on JSON-RPC call
00:27:03.905   19:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:27:03.905   19:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1
00:27:03.905   19:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:27:03.905   19:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:27:03.905   19:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:27:03.905    19:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs
00:27:03.905    19:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:27:03.905    19:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:03.905    19:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:03.905    19:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort
00:27:03.905    19:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs
00:27:03.905    19:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name'
00:27:03.905    19:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:03.905   19:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]]
00:27:03.905    19:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list
00:27:03.905    19:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:27:03.905    19:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:27:03.905    19:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:27:03.905    19:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:03.905    19:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:27:03.905    19:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:04.164    19:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:04.164   19:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:27:04.164   19:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
00:27:04.164   19:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0
00:27:04.164   19:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
00:27:04.164   19:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:27:04.164   19:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:27:04.164    19:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:27:04.164   19:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:27:04.164   19:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
00:27:04.164   19:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:04.164   19:11:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:05.129  [2024-12-13 19:11:36.743386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.129  [2024-12-13 19:11:36.743452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e2470 with addr=10.0.0.3, port=8010
00:27:05.129  [2024-12-13 19:11:36.743473] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:27:05.129  [2024-12-13 19:11:36.743483] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:27:05.129  [2024-12-13 19:11:36.743491] bdev_nvme.c:7584:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect
00:27:06.065  [2024-12-13 19:11:37.743364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.065  [2024-12-13 19:11:37.743406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e2470 with addr=10.0.0.3, port=8010
00:27:06.065  [2024-12-13 19:11:37.743431] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:27:06.065  [2024-12-13 19:11:37.743439] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:27:06.065  [2024-12-13 19:11:37.743447] bdev_nvme.c:7584:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect
00:27:06.999  [2024-12-13 19:11:38.743293] bdev_nvme.c:7559:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr
00:27:06.999  2024/12/13 19:11:38 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.3 trsvcid:8010 trtype:tcp wait_for_attach:%!s(bool=false)], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out
00:27:06.999  request:
00:27:06.999  {
00:27:06.999  "method": "bdev_nvme_start_discovery",
00:27:06.999  "params": {
00:27:06.999  "name": "nvme_second",
00:27:06.999  "trtype": "tcp",
00:27:07.000  "traddr": "10.0.0.3",
00:27:07.000  "adrfam": "ipv4",
00:27:07.000  "trsvcid": "8010",
00:27:07.000  "hostnqn": "nqn.2021-12.io.spdk:test",
00:27:07.000  "wait_for_attach": false,
00:27:07.000  "attach_timeout_ms": 3000
00:27:07.000  }
00:27:07.000  }
00:27:07.000  Got JSON-RPC error response
00:27:07.000  GoRPCClient: error on JSON-RPC call
00:27:07.000   19:11:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:27:07.000   19:11:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1
00:27:07.000   19:11:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:27:07.000   19:11:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:27:07.000   19:11:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:27:07.000    19:11:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs
00:27:07.000    19:11:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:27:07.000    19:11:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name'
00:27:07.000    19:11:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort
00:27:07.000    19:11:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:07.000    19:11:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:07.000    19:11:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs
00:27:07.000    19:11:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:07.000   19:11:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]]
00:27:07.000   19:11:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT
00:27:07.000   19:11:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 111226
00:27:07.000   19:11:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini
00:27:07.000   19:11:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup
00:27:07.000   19:11:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync
00:27:07.258   19:11:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:27:07.258   19:11:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e
00:27:07.258   19:11:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20}
00:27:07.258   19:11:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:27:07.258  rmmod nvme_tcp
00:27:07.258  rmmod nvme_fabrics
00:27:07.258  rmmod nvme_keyring
00:27:07.258   19:11:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:27:07.258   19:11:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e
00:27:07.258   19:11:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0
00:27:07.258   19:11:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 111189 ']'
00:27:07.258   19:11:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 111189
00:27:07.258   19:11:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 111189 ']'
00:27:07.258   19:11:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 111189
00:27:07.258    19:11:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname
00:27:07.258   19:11:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:07.258    19:11:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 111189
00:27:07.258  killing process with pid 111189
00:27:07.258   19:11:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:27:07.258   19:11:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:27:07.258   19:11:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 111189'
00:27:07.258   19:11:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 111189
00:27:07.258   19:11:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 111189
00:27:07.516   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:27:07.516   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:27:07.516   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:27:07.516   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr
00:27:07.516   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save
00:27:07.516   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore
00:27:07.516   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:27:07.516   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:27:07.516   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:27:07.516   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:27:07.516   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:27:07.516   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:27:07.516   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:27:07.516   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:27:07.516   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:27:07.516   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:27:07.516   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:27:07.516   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:27:07.516   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:27:07.516   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:27:07.516   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:27:07.775   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:27:07.775   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns
00:27:07.775   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:07.775   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:27:07.775    19:11:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:07.775   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0
00:27:07.775  
00:27:07.775  real	0m10.150s
00:27:07.775  user	0m19.733s
00:27:07.775  sys	0m1.635s
00:27:07.775   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable
00:27:07.775   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:07.775  ************************************
00:27:07.775  END TEST nvmf_host_discovery
00:27:07.775  ************************************
00:27:07.775   19:11:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp
00:27:07.775   19:11:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:27:07.775   19:11:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:27:07.775   19:11:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:27:07.775  ************************************
00:27:07.775  START TEST nvmf_host_multipath_status
00:27:07.775  ************************************
00:27:07.775   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp
00:27:07.775  * Looking for test storage...
00:27:07.775  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
00:27:07.775    19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:27:07.775     19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version
00:27:07.775     19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:27:08.035    19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:27:08.035    19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:27:08.035    19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l
00:27:08.035    19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l
00:27:08.035    19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-:
00:27:08.035    19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1
00:27:08.035    19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-:
00:27:08.035    19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2
00:27:08.035    19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<'
00:27:08.035    19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2
00:27:08.035    19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1
00:27:08.035    19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:27:08.035    19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in
00:27:08.035    19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1
00:27:08.035    19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 ))
00:27:08.035    19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:27:08.035     19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1
00:27:08.035     19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1
00:27:08.035     19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:27:08.035     19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1
00:27:08.035    19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1
00:27:08.035     19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2
00:27:08.035     19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2
00:27:08.035     19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:27:08.035     19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2
00:27:08.035    19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2
00:27:08.035    19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:27:08.035    19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:27:08.035    19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0
00:27:08.035    19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:27:08.035    19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:27:08.035  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:08.035  		--rc genhtml_branch_coverage=1
00:27:08.035  		--rc genhtml_function_coverage=1
00:27:08.035  		--rc genhtml_legend=1
00:27:08.035  		--rc geninfo_all_blocks=1
00:27:08.035  		--rc geninfo_unexecuted_blocks=1
00:27:08.035  		
00:27:08.035  		'
00:27:08.035    19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:27:08.035  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:08.035  		--rc genhtml_branch_coverage=1
00:27:08.035  		--rc genhtml_function_coverage=1
00:27:08.035  		--rc genhtml_legend=1
00:27:08.035  		--rc geninfo_all_blocks=1
00:27:08.035  		--rc geninfo_unexecuted_blocks=1
00:27:08.035  		
00:27:08.035  		'
00:27:08.035    19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:27:08.035  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:08.035  		--rc genhtml_branch_coverage=1
00:27:08.035  		--rc genhtml_function_coverage=1
00:27:08.035  		--rc genhtml_legend=1
00:27:08.035  		--rc geninfo_all_blocks=1
00:27:08.035  		--rc geninfo_unexecuted_blocks=1
00:27:08.035  		
00:27:08.035  		'
00:27:08.035    19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:27:08.035  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:08.035  		--rc genhtml_branch_coverage=1
00:27:08.035  		--rc genhtml_function_coverage=1
00:27:08.035  		--rc genhtml_legend=1
00:27:08.035  		--rc geninfo_all_blocks=1
00:27:08.035  		--rc geninfo_unexecuted_blocks=1
00:27:08.035  		
00:27:08.035  		'
00:27:08.035   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:27:08.035     19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s
00:27:08.035    19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:27:08.035    19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:27:08.035    19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:27:08.035    19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:27:08.035    19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:27:08.035    19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:27:08.035    19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:27:08.035    19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:27:08.035    19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:27:08.035     19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:27:08.035    19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:27:08.035    19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:27:08.035    19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:27:08.035    19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:27:08.035    19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:27:08.035    19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:27:08.035    19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:27:08.035     19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob
00:27:08.035     19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:27:08.035     19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:27:08.035     19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:27:08.035      19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:08.035      19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:08.035      19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:08.035      19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH
00:27:08.035      19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:08.035    19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0
00:27:08.035    19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:27:08.035    19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:27:08.035    19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:27:08.035    19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:27:08.035    19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:27:08.035    19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:27:08.035  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:27:08.035    19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:27:08.035    19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:27:08.035    19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0
00:27:08.035   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64
00:27:08.035   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512
00:27:08.035   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:27:08.035   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh
00:27:08.036   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:27:08.036   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1
00:27:08.036   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit
00:27:08.036   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:27:08.036   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:27:08.036   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs
00:27:08.036   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no
00:27:08.036   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns
00:27:08.036   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:08.036   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:27:08.036    19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:08.036   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:27:08.036   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:27:08.036   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:27:08.036   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:27:08.036   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:27:08.036   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@460 -- # nvmf_veth_init
00:27:08.036   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:27:08.036   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:27:08.036   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:27:08.036   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:27:08.036   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:27:08.036   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:27:08.036   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:27:08.036   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:27:08.036   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:27:08.036   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:27:08.036   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:27:08.036   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:27:08.036   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:27:08.036   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:27:08.036   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:27:08.036   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:27:08.036   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:27:08.036  Cannot find device "nvmf_init_br"
00:27:08.036   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true
00:27:08.036   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:27:08.036  Cannot find device "nvmf_init_br2"
00:27:08.036   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true
00:27:08.036   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:27:08.036  Cannot find device "nvmf_tgt_br"
00:27:08.036   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true
00:27:08.036   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:27:08.036  Cannot find device "nvmf_tgt_br2"
00:27:08.036   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true
00:27:08.036   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:27:08.036  Cannot find device "nvmf_init_br"
00:27:08.036   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true
00:27:08.036   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:27:08.036  Cannot find device "nvmf_init_br2"
00:27:08.036   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true
00:27:08.036   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:27:08.036  Cannot find device "nvmf_tgt_br"
00:27:08.036   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true
00:27:08.036   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:27:08.036  Cannot find device "nvmf_tgt_br2"
00:27:08.036   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true
00:27:08.036   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:27:08.036  Cannot find device "nvmf_br"
00:27:08.036   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true
00:27:08.036   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:27:08.036  Cannot find device "nvmf_init_if"
00:27:08.036   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true
00:27:08.036   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:27:08.036  Cannot find device "nvmf_init_if2"
00:27:08.036   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true
00:27:08.036   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:27:08.036  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:27:08.036   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true
00:27:08.036   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:27:08.036  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:27:08.036   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true
00:27:08.036   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:27:08.036   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:27:08.036   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:27:08.036   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:27:08.296   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:27:08.296   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:27:08.296   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:27:08.296   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:27:08.296   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:27:08.296   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:27:08.296   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:27:08.296   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:27:08.296   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:27:08.296   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:27:08.296   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:27:08.296   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:27:08.296   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:27:08.296   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:27:08.296   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:27:08.296   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:27:08.296   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:27:08.296   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:27:08.296   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:27:08.296   19:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:27:08.296   19:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:27:08.296   19:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:27:08.296   19:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:27:08.296   19:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:27:08.296   19:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:27:08.296   19:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:27:08.296   19:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:27:08.296   19:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:27:08.296   19:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:27:08.296  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:27:08.296  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.106 ms
00:27:08.296  
00:27:08.296  --- 10.0.0.3 ping statistics ---
00:27:08.296  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:08.296  rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms
00:27:08.296   19:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:27:08.296  PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:27:08.296  64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms
00:27:08.296  
00:27:08.296  --- 10.0.0.4 ping statistics ---
00:27:08.296  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:08.296  rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms
00:27:08.296   19:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:27:08.296  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:27:08.296  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms
00:27:08.296  
00:27:08.296  --- 10.0.0.1 ping statistics ---
00:27:08.296  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:08.296  rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms
00:27:08.296   19:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:27:08.296  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:27:08.296  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.038 ms
00:27:08.296  
00:27:08.296  --- 10.0.0.2 ping statistics ---
00:27:08.296  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:08.296  rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms
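(Editor's note, not part of the captured output.) The setup above builds the veth/bridge topology between the host and the nvmf_tgt_ns_spdk namespace and then verifies it with the four pings. Below is a condensed, illustrative reproduction of just the host-to-namespace addressing (10.0.0.1 on the host, 10.0.0.3 inside the namespace); it uses a single veth pair instead of the bridged pairs that nvmf_veth_init creates, and the demo interface and namespace names are hypothetical, so it is a simplification rather than the real setup.

    #!/usr/bin/env python3
    # Illustrative sketch: a minimal host<->namespace link mirroring the
    # addressing used in the log. Requires root; names are hypothetical.
    import subprocess

    NS = "nvmf_tgt_ns_spdk_demo"  # hypothetical, to avoid clashing with the test

    def sh(*args):
        subprocess.run(args, check=True)

    sh("ip", "netns", "add", NS)
    sh("ip", "link", "add", "demo_init_if", "type", "veth", "peer", "name", "demo_tgt_if")
    sh("ip", "link", "set", "demo_tgt_if", "netns", NS)
    sh("ip", "addr", "add", "10.0.0.1/24", "dev", "demo_init_if")
    sh("ip", "netns", "exec", NS, "ip", "addr", "add", "10.0.0.3/24", "dev", "demo_tgt_if")
    sh("ip", "link", "set", "demo_init_if", "up")
    sh("ip", "netns", "exec", NS, "ip", "link", "set", "demo_tgt_if", "up")
    sh("ip", "netns", "exec", NS, "ip", "link", "set", "lo", "up")

    # The same reachability check the test performs, in both directions.
    sh("ping", "-c", "1", "10.0.0.3")
    sh("ip", "netns", "exec", NS, "ping", "-c", "1", "10.0.0.1")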
00:27:08.296   19:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:27:08.296   19:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@461 -- # return 0
00:27:08.296   19:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:27:08.296   19:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:27:08.296   19:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:27:08.296   19:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:27:08.296   19:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:27:08.296   19:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:27:08.296   19:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:27:08.296   19:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3
00:27:08.296   19:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:27:08.296   19:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable
00:27:08.296   19:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:27:08.296   19:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:27:08.296   19:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=111737
00:27:08.296   19:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 111737
00:27:08.296   19:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 111737 ']'
00:27:08.296   19:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:08.296   19:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:08.296  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:08.296   19:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:08.296   19:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:08.296   19:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:27:08.555  [2024-12-13 19:11:40.147307] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:27:08.555  [2024-12-13 19:11:40.147381] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:08.555  [2024-12-13 19:11:40.292432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:27:08.555  [2024-12-13 19:11:40.327698] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:27:08.555  [2024-12-13 19:11:40.327781] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:08.555  [2024-12-13 19:11:40.327792] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:27:08.555  [2024-12-13 19:11:40.327799] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:27:08.555  [2024-12-13 19:11:40.327806] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:27:08.555  [2024-12-13 19:11:40.329032] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:27:08.555  [2024-12-13 19:11:40.329043] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:27:08.814   19:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:08.814   19:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0
00:27:08.814   19:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:27:08.814   19:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable
00:27:08.814   19:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:27:08.814   19:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:08.814   19:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=111737
00:27:08.814   19:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:27:09.072  [2024-12-13 19:11:40.783751] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:09.072   19:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:27:09.331  Malloc0
00:27:09.331   19:11:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
00:27:09.590   19:11:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:27:09.849   19:11:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:27:10.108  [2024-12-13 19:11:41.789867] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:27:10.108   19:11:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
00:27:10.367  [2024-12-13 19:11:42.017882] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 ***
00:27:10.367   19:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90
00:27:10.367   19:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=111823
00:27:10.367   19:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:27:10.367   19:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 111823 /var/tmp/bdevperf.sock
00:27:10.367   19:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 111823 ']'
00:27:10.367   19:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:27:10.367   19:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:10.367  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:27:10.367   19:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:27:10.367   19:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:10.367   19:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:27:11.303   19:11:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:11.303   19:11:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0
00:27:11.303   19:11:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:27:11.561   19:11:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
00:27:12.129  Nvme0n1
00:27:12.129   19:11:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
00:27:12.388  Nvme0n1
00:27:12.388   19:11:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests
00:27:12.388   19:11:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2
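On the host side the test then launches bdevperf with -z, so it waits to be configured and driven over its own RPC socket, and attaches the same subsystem once per listener in multipath mode so both connections land under a single Nvme0n1 bdev before the workload starts. Condensed from the commands above (binary and script paths abbreviated from the full repo paths):

    # bdevperf on a private socket, configured over RPC
    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    # same NQN over ports 4420 and 4421 -> two I/O paths for one bdev
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
    # kick off the verify workload while the ANA checks below run
    examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests &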
00:27:14.291   19:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized
00:27:14.291   19:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized
00:27:14.551   19:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized
00:27:15.118   19:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1
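The ANA flips go through a small helper that sets the state independently for each listener on the target side. A sketch reconstructed from the trace rather than copied from multipath_status.sh (the helper name and both RPC calls appear verbatim above; only the wrapper body is inferred, rpc.py path abbreviated as before):

    # set_ANA_state <state for port 4420> <state for port 4421>
    set_ANA_state() {
        scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n "$1"
        scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n "$2"
    }
    set_ANA_state non_optimized inaccessible   # e.g. one of the combinations exercised later in the run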
00:27:16.055   19:11:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true
00:27:16.055   19:11:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:27:16.055    19:11:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:16.055    19:11:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:27:16.314   19:11:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:16.314   19:11:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:27:16.314    19:11:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:16.314    19:11:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:27:16.574   19:11:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:27:16.574   19:11:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:27:16.574    19:11:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:16.574    19:11:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:27:16.833   19:11:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:16.833   19:11:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:27:16.833    19:11:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:27:16.833    19:11:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:17.092   19:11:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:17.092   19:11:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:27:17.092    19:11:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:27:17.092    19:11:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:17.350   19:11:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:17.350   19:11:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:27:17.350    19:11:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:17.350    19:11:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:27:17.609   19:11:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
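Every check_status round in this trace reduces to the same probe: query bdevperf's bdev_nvme_get_io_paths RPC and pick out one field ("current", "connected" or "accessible") for the path whose trsvcid matches the listener port. A sketch of that helper, reconstructed from the trace rather than copied from multipath_status.sh (the RPC call and jq filter are the ones logged, rpc.py path abbreviated as before):

    # port_status <port> <field> <expected>
    port_status() {
        local got
        got=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
              jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$1\").$2")
        [[ "$got" == "$3" ]]
    }
    port_status 4420 current true       # e.g. 4420 is the active path
    port_status 4421 accessible false   # e.g. after 4421 is set inaccessible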
00:27:17.609   19:11:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized
00:27:17.609   19:11:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
00:27:17.609   19:11:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized
00:27:18.176   19:11:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1
00:27:19.111   19:11:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true
00:27:19.111   19:11:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:27:19.111    19:11:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:19.111    19:11:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:27:19.370   19:11:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:27:19.370   19:11:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:27:19.370    19:11:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:19.370    19:11:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:27:19.628   19:11:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:19.628   19:11:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:27:19.628    19:11:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:19.628    19:11:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:27:19.886   19:11:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:19.886   19:11:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:27:19.886    19:11:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:19.886    19:11:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:27:20.145   19:11:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:20.145   19:11:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:27:20.145    19:11:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:20.145    19:11:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:27:20.407   19:11:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:20.407   19:11:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:27:20.407    19:11:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:27:20.407    19:11:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:20.668   19:11:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:20.668   19:11:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized
00:27:20.668   19:11:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
00:27:20.928   19:11:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized
00:27:21.187   19:11:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1
00:27:22.124   19:11:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true
00:27:22.124   19:11:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:27:22.124    19:11:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:22.124    19:11:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:27:22.383   19:11:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:22.383   19:11:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:27:22.383    19:11:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:22.383    19:11:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:27:22.642   19:11:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:27:22.642   19:11:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:27:22.642    19:11:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:22.642    19:11:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:27:23.214   19:11:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:23.214   19:11:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:27:23.214    19:11:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:23.214    19:11:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:27:23.214   19:11:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:23.214   19:11:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:27:23.214    19:11:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:23.214    19:11:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:27:23.786   19:11:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:23.786   19:11:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:27:23.786    19:11:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:23.786    19:11:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:27:23.786   19:11:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:23.786   19:11:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible
00:27:23.786   19:11:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
00:27:24.045   19:11:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible
00:27:24.612   19:11:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1
00:27:25.548   19:11:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false
00:27:25.548   19:11:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:27:25.549    19:11:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:25.549    19:11:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:27:25.807   19:11:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:25.807   19:11:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:27:25.807    19:11:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:25.807    19:11:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:27:26.066   19:11:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:27:26.066   19:11:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:27:26.066    19:11:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:26.066    19:11:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:27:26.325   19:11:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:26.325   19:11:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:27:26.325    19:11:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:27:26.325    19:11:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:26.583   19:11:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:26.583   19:11:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:27:26.583    19:11:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:26.583    19:11:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:27:26.842   19:11:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:26.842   19:11:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:27:26.842    19:11:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:26.842    19:11:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:27:27.100   19:11:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:27:27.100   19:11:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible
00:27:27.100   19:11:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible
00:27:27.359   19:11:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible
00:27:27.619   19:11:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1
00:27:28.555   19:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false
00:27:28.555   19:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:27:28.555    19:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:28.555    19:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:27:28.813   19:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:27:28.813   19:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:27:28.813    19:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:28.813    19:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:27:29.071   19:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:27:29.071   19:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:27:29.071    19:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:27:29.071    19:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:29.330   19:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:29.330   19:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:27:29.330    19:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:29.330    19:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:27:29.588   19:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:29.588   19:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:27:29.588    19:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:29.588    19:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:27:29.847   19:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:27:29.847   19:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:27:29.847    19:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:27:29.848    19:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:30.106   19:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:27:30.106   19:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized
00:27:30.106   19:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible
00:27:30.365   19:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized
00:27:30.933   19:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1
00:27:31.868   19:12:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true
00:27:31.868   19:12:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:27:31.868    19:12:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:31.868    19:12:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:27:32.127   19:12:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:27:32.127   19:12:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:27:32.127    19:12:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:32.127    19:12:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:27:32.386   19:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:32.386   19:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:27:32.386    19:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:32.386    19:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:27:32.644   19:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:32.644   19:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:27:32.644    19:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:32.644    19:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:27:32.903   19:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:32.903   19:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:27:32.903    19:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:32.903    19:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:27:33.162   19:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:27:33.162   19:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:27:33.162    19:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:33.162    19:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:27:33.420   19:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:33.420   19:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
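In the rounds above only one path is reported as "current" at a time; after the bdev_nvme_set_multipath_policy call switches Nvme0n1 to active_active, the very next round (check_status true true ...) expects both optimized paths to be current simultaneously. Condensed, reusing the port_status sketch from earlier:

    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
    # with both listeners optimized, both paths are now current
    port_status 4420 current true && port_status 4421 current true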
00:27:33.986   19:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized
00:27:33.987   19:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized
00:27:33.987   19:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized
00:27:34.245   19:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1
00:27:35.180   19:12:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true
00:27:35.180   19:12:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:27:35.180    19:12:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:35.181    19:12:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:27:35.747   19:12:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:35.747   19:12:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:27:35.747    19:12:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:35.747    19:12:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:27:36.006   19:12:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:36.006   19:12:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:27:36.006    19:12:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:36.006    19:12:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:27:36.264   19:12:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:36.264   19:12:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:27:36.264    19:12:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:36.264    19:12:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:27:36.523   19:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:36.523   19:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:27:36.523    19:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:36.523    19:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:27:36.781   19:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:36.781   19:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:27:36.781    19:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:36.781    19:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:27:37.044   19:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:37.044   19:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized
00:27:37.044   19:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
00:27:37.353   19:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized
00:27:37.618   19:12:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1
00:27:38.554   19:12:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true
00:27:38.554   19:12:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:27:38.554    19:12:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:38.554    19:12:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:27:38.813   19:12:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:27:38.813   19:12:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:27:38.813    19:12:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:38.813    19:12:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:27:39.072   19:12:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:39.072   19:12:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:27:39.072    19:12:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:27:39.072    19:12:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:39.330   19:12:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:39.330   19:12:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:27:39.330    19:12:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:39.330    19:12:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:27:39.589   19:12:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:39.589   19:12:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:27:39.589    19:12:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:39.589    19:12:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:27:40.155   19:12:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:40.155   19:12:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:27:40.155    19:12:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:40.155    19:12:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:27:40.155   19:12:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:40.155   19:12:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized
00:27:40.155   19:12:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
00:27:40.414   19:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized
00:27:40.673   19:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1
00:27:42.050   19:12:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true
00:27:42.050   19:12:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:27:42.050    19:12:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:27:42.050    19:12:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:42.050   19:12:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:42.050   19:12:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:27:42.050    19:12:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:42.050    19:12:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:27:42.308   19:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:42.308   19:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:27:42.308    19:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:42.308    19:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:27:42.566   19:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:42.567   19:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:27:42.567    19:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:42.567    19:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:27:42.825   19:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:42.825   19:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:27:42.825    19:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:27:42.825    19:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:43.083   19:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:43.083   19:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:27:43.083    19:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:43.083    19:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:27:43.651   19:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:43.651   19:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible
00:27:43.651   19:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
00:27:43.909   19:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible
00:27:43.909   19:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1
00:27:45.286   19:12:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false
00:27:45.286   19:12:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:27:45.286    19:12:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:45.286    19:12:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:27:45.286   19:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:45.286   19:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:27:45.286    19:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:45.286    19:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:27:45.544   19:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:27:45.544   19:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:27:45.544    19:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:45.544    19:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:27:45.803   19:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:45.803   19:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:27:45.803    19:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:45.803    19:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:27:46.061   19:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:46.061   19:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:27:46.061    19:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:46.061    19:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:27:46.628   19:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:46.628   19:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:27:46.628    19:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:46.628    19:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:27:46.628   19:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:27:46.628   19:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 111823
00:27:46.628   19:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 111823 ']'
00:27:46.628   19:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 111823
00:27:46.628    19:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:27:46.628   19:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:46.628    19:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 111823
00:27:46.898  killing process with pid 111823
00:27:46.898   19:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:27:46.898   19:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:27:46.898   19:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 111823'
00:27:46.898   19:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 111823
00:27:46.898   19:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 111823
00:27:46.898  {
00:27:46.898    "results": [
00:27:46.898      {
00:27:46.898        "job": "Nvme0n1",
00:27:46.898        "core_mask": "0x4",
00:27:46.898        "workload": "verify",
00:27:46.898        "status": "terminated",
00:27:46.898        "verify_range": {
00:27:46.898          "start": 0,
00:27:46.898          "length": 16384
00:27:46.898        },
00:27:46.898        "queue_depth": 128,
00:27:46.898        "io_size": 4096,
00:27:46.898        "runtime": 34.323638,
00:27:46.898        "iops": 9141.25128577571,
00:27:46.898        "mibps": 35.70801283506137,
00:27:46.898        "io_failed": 0,
00:27:46.898        "io_timeout": 0,
00:27:46.898        "avg_latency_us": 13973.27754972734,
00:27:46.898        "min_latency_us": 322.0945454545455,
00:27:46.898        "max_latency_us": 4026531.84
00:27:46.898      }
00:27:46.898    ],
00:27:46.898    "core_count": 1
00:27:46.898  }
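The summary just printed is internally consistent: the MiB/s figure is the reported IOPS times the 4096-byte I/O size configured on the bdevperf command line, e.g.

    # 9141.25 IOPS * 4096 B / 2^20 = 35.71 MiB/s, matching "mibps" above
    awk 'BEGIN { printf "%.2f MiB/s\n", 9141.25128577571 * 4096 / 1048576 }'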
00:27:46.898   19:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 111823
00:27:46.898   19:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:27:46.898  [2024-12-13 19:11:42.105334] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:27:46.898  [2024-12-13 19:11:42.105443] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111823 ]
00:27:46.898  [2024-12-13 19:11:42.256316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:46.898  [2024-12-13 19:11:42.295030] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:27:46.898  Running I/O for 90 seconds...
00:27:46.898      10353.00 IOPS,    40.44 MiB/s
[2024-12-13T19:12:18.722Z]     10547.00 IOPS,    41.20 MiB/s
[2024-12-13T19:12:18.722Z]     10467.67 IOPS,    40.89 MiB/s
[2024-12-13T19:12:18.722Z]     10450.50 IOPS,    40.82 MiB/s
[2024-12-13T19:12:18.722Z]     10401.40 IOPS,    40.63 MiB/s
[2024-12-13T19:12:18.722Z]     10355.50 IOPS,    40.45 MiB/s
[2024-12-13T19:12:18.722Z]     10377.14 IOPS,    40.54 MiB/s
[2024-12-13T19:12:18.722Z]     10358.38 IOPS,    40.46 MiB/s
[2024-12-13T19:12:18.722Z]     10361.56 IOPS,    40.47 MiB/s
[2024-12-13T19:12:18.722Z]     10369.40 IOPS,    40.51 MiB/s
[2024-12-13T19:12:18.722Z]     10343.27 IOPS,    40.40 MiB/s
[2024-12-13T19:12:18.722Z]     10282.08 IOPS,    40.16 MiB/s
[2024-12-13T19:12:18.722Z]     10278.08 IOPS,    40.15 MiB/s
[2024-12-13T19:12:18.722Z]     10235.50 IOPS,    39.98 MiB/s
[2024-12-13T19:12:18.722Z] [2024-12-13 19:11:59.070153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:40448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.898  [2024-12-13 19:11:59.070274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:27:46.898  [2024-12-13 19:11:59.070336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:40456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.898  [2024-12-13 19:11:59.070360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:27:46.898  [2024-12-13 19:11:59.070384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.898  [2024-12-13 19:11:59.070401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:27:46.898  [2024-12-13 19:11:59.070424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:40472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.898  [2024-12-13 19:11:59.070440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:27:46.898  [2024-12-13 19:11:59.070462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.898  [2024-12-13 19:11:59.070477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:27:46.898  [2024-12-13 19:11:59.070499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:40488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.898  [2024-12-13 19:11:59.070514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:27:46.898  [2024-12-13 19:11:59.070536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.898  [2024-12-13 19:11:59.070551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:27:46.898  [2024-12-13 19:11:59.070588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:40504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.898  [2024-12-13 19:11:59.070633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:27:46.898  [2024-12-13 19:11:59.072183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:40512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.898  [2024-12-13 19:11:59.072207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:27:46.898  [2024-12-13 19:11:59.072305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:40520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.898  [2024-12-13 19:11:59.072326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:27:46.898  [2024-12-13 19:11:59.072352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:40528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.898  [2024-12-13 19:11:59.072371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:27:46.898  [2024-12-13 19:11:59.072396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:40536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.898  [2024-12-13 19:11:59.072412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:46.898  [2024-12-13 19:11:59.072436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:40544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.898  [2024-12-13 19:11:59.072452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:27:46.898  [2024-12-13 19:11:59.072476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:40552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.898  [2024-12-13 19:11:59.072492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:27:46.898  [2024-12-13 19:11:59.072517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:40560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.898  [2024-12-13 19:11:59.072532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:46.898  [2024-12-13 19:11:59.072557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:40568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.898  [2024-12-13 19:11:59.072573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:46.898  [2024-12-13 19:11:59.072746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:40576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.898  [2024-12-13 19:11:59.072771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:46.898  [2024-12-13 19:11:59.072800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:40584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.898  [2024-12-13 19:11:59.072817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:27:46.898  [2024-12-13 19:11:59.072841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:40592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.898  [2024-12-13 19:11:59.072855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:27:46.898  [2024-12-13 19:11:59.072880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:40600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.898  [2024-12-13 19:11:59.072895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:27:46.898  [2024-12-13 19:11:59.072920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:40608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.898  [2024-12-13 19:11:59.072934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:27:46.898  [2024-12-13 19:11:59.072958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:40616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.898  [2024-12-13 19:11:59.072991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:27:46.899  [2024-12-13 19:11:59.073017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:40624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.899  [2024-12-13 19:11:59.073033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:27:46.899  [2024-12-13 19:11:59.073057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:40632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.899  [2024-12-13 19:11:59.073072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:27:46.899  [2024-12-13 19:11:59.073096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:40640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.899  [2024-12-13 19:11:59.073111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:27:46.899  [2024-12-13 19:11:59.073135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:40648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.899  [2024-12-13 19:11:59.073150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:27:46.899  [2024-12-13 19:11:59.073174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:40656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.899  [2024-12-13 19:11:59.073189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:27:46.899  [2024-12-13 19:11:59.073213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:40664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.899  [2024-12-13 19:11:59.073244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:27:46.899  [2024-12-13 19:11:59.073286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.899  [2024-12-13 19:11:59.073319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:27:46.899  [2024-12-13 19:11:59.073346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.899  [2024-12-13 19:11:59.073363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:27:46.899  [2024-12-13 19:11:59.073405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.899  [2024-12-13 19:11:59.073423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:27:46.899  [2024-12-13 19:11:59.073449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:40696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.899  [2024-12-13 19:11:59.073465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:27:46.899  [2024-12-13 19:11:59.073491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:40704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.899  [2024-12-13 19:11:59.073507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:27:46.899  [2024-12-13 19:11:59.073533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:40712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.899  [2024-12-13 19:11:59.073558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:27:46.899  [2024-12-13 19:11:59.073586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:40720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.899  [2024-12-13 19:11:59.073614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:27:46.899  [2024-12-13 19:11:59.073640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:40728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.899  [2024-12-13 19:11:59.073656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:27:46.899  [2024-12-13 19:11:59.073681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:40736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.899  [2024-12-13 19:11:59.073697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:27:46.899  [2024-12-13 19:11:59.073756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:40744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.899  [2024-12-13 19:11:59.073774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:27:46.899  [2024-12-13 19:11:59.073801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:40752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.899  [2024-12-13 19:11:59.073817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:27:46.899  [2024-12-13 19:11:59.073842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.899  [2024-12-13 19:11:59.073859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:27:46.899  [2024-12-13 19:11:59.073884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:40768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.899  [2024-12-13 19:11:59.073900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:27:46.899  [2024-12-13 19:11:59.073926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.899  [2024-12-13 19:11:59.073942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:27:46.899  [2024-12-13 19:11:59.073968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.899  [2024-12-13 19:11:59.073984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:27:46.899  [2024-12-13 19:11:59.074010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.899  [2024-12-13 19:11:59.074026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:27:46.899  [2024-12-13 19:11:59.074066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:40800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.899  [2024-12-13 19:11:59.074081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:27:46.899  [2024-12-13 19:11:59.074105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:40808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.899  [2024-12-13 19:11:59.074121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:27:46.899  [2024-12-13 19:11:59.074159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:40816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.899  [2024-12-13 19:11:59.074176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:27:46.899  [2024-12-13 19:11:59.074201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:40824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.899  [2024-12-13 19:11:59.074216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:46.899  [2024-12-13 19:11:59.074265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:40832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.899  [2024-12-13 19:11:59.074297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:46.899  [2024-12-13 19:11:59.074325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:40840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.899  [2024-12-13 19:11:59.074342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:27:46.899  [2024-12-13 19:11:59.074369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:40848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.899  [2024-12-13 19:11:59.074386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:27:46.899  [2024-12-13 19:11:59.074413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:40856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.899  [2024-12-13 19:11:59.074429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:27:46.899  [2024-12-13 19:11:59.074456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:40864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.899  [2024-12-13 19:11:59.074472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:27:46.899  [2024-12-13 19:11:59.074498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:40872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.899  [2024-12-13 19:11:59.074514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:27:46.899  [2024-12-13 19:11:59.074548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:40880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.899  [2024-12-13 19:11:59.074564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:27:46.899  [2024-12-13 19:11:59.074605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:40888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.899  [2024-12-13 19:11:59.074620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:27:46.899  [2024-12-13 19:11:59.074645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:40896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.899  [2024-12-13 19:11:59.074661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:27:46.899  [2024-12-13 19:11:59.074701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:40904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.899  [2024-12-13 19:11:59.074724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:27:46.899  [2024-12-13 19:11:59.074749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:40912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.899  [2024-12-13 19:11:59.074772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:27:46.899  [2024-12-13 19:11:59.074798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:40920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.899  [2024-12-13 19:11:59.074813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:27:46.899  [2024-12-13 19:11:59.074837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:40928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.899  [2024-12-13 19:11:59.074852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:27:46.899  [2024-12-13 19:11:59.074876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:40936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.900  [2024-12-13 19:11:59.074892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:27:46.900  [2024-12-13 19:11:59.074922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:40944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.900  [2024-12-13 19:11:59.074938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:27:46.900  [2024-12-13 19:11:59.074962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:40952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.900  [2024-12-13 19:11:59.074977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:27:46.900  [2024-12-13 19:11:59.075001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:40960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.900  [2024-12-13 19:11:59.075017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:27:46.900  [2024-12-13 19:11:59.075040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:40968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.900  [2024-12-13 19:11:59.075055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:27:46.900  [2024-12-13 19:11:59.075079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:40976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.900  [2024-12-13 19:11:59.075094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:27:46.900  [2024-12-13 19:11:59.075118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:40984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.900  [2024-12-13 19:11:59.075134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:27:46.900  [2024-12-13 19:11:59.075158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:40328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.900  [2024-12-13 19:11:59.075173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:27:46.900  [2024-12-13 19:11:59.075206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:40336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.900  [2024-12-13 19:11:59.075222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:27:46.900  [2024-12-13 19:11:59.075291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:40344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.900  [2024-12-13 19:11:59.075318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:27:46.900  [2024-12-13 19:11:59.075346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:40352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.900  [2024-12-13 19:11:59.075363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:27:46.900  [2024-12-13 19:11:59.075389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:40360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.900  [2024-12-13 19:11:59.075405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:27:46.900  [2024-12-13 19:11:59.075431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:40368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.900  [2024-12-13 19:11:59.075453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:27:46.900  [2024-12-13 19:11:59.075480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:40376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.900  [2024-12-13 19:11:59.075496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:27:46.900  [2024-12-13 19:11:59.075522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:40384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.900  [2024-12-13 19:11:59.075538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:27:46.900  [2024-12-13 19:11:59.075565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:40392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.900  [2024-12-13 19:11:59.075581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:27:46.900  [2024-12-13 19:11:59.075621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:40400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.900  [2024-12-13 19:11:59.075637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:27:46.900  [2024-12-13 19:11:59.075667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:40408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.900  [2024-12-13 19:11:59.075683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:27:46.900  [2024-12-13 19:11:59.075714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:40416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.900  [2024-12-13 19:11:59.075729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:46.900  [2024-12-13 19:11:59.075770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:40424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.900  [2024-12-13 19:11:59.075785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:46.900  [2024-12-13 19:11:59.075810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:40432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.900  [2024-12-13 19:11:59.075825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:27:46.900  [2024-12-13 19:11:59.076057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:40440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.900  [2024-12-13 19:11:59.076094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:27:46.900      10166.47 IOPS,    39.71 MiB/s
[2024-12-13T19:12:18.724Z]      9531.06 IOPS,    37.23 MiB/s
[2024-12-13T19:12:18.724Z]      8970.41 IOPS,    35.04 MiB/s
[2024-12-13T19:12:18.724Z]      8472.06 IOPS,    33.09 MiB/s
[2024-12-13T19:12:18.724Z]      8072.16 IOPS,    31.53 MiB/s
[2024-12-13T19:12:18.724Z]      8173.10 IOPS,    31.93 MiB/s
[2024-12-13T19:12:18.724Z]      8279.10 IOPS,    32.34 MiB/s
[2024-12-13T19:12:18.724Z]      8364.27 IOPS,    32.67 MiB/s
[2024-12-13T19:12:18.724Z]      8497.96 IOPS,    33.20 MiB/s
[2024-12-13T19:12:18.724Z]      8614.50 IOPS,    33.65 MiB/s
[2024-12-13T19:12:18.724Z]      8717.08 IOPS,    34.05 MiB/s
[2024-12-13T19:12:18.724Z]      8777.58 IOPS,    34.29 MiB/s
[2024-12-13T19:12:18.724Z]      8813.30 IOPS,    34.43 MiB/s
[2024-12-13T19:12:18.724Z]      8859.54 IOPS,    34.61 MiB/s
[2024-12-13T19:12:18.724Z]      8929.79 IOPS,    34.88 MiB/s
[2024-12-13T19:12:18.724Z]      8998.90 IOPS,    35.15 MiB/s
[2024-12-13T19:12:18.724Z]      9053.74 IOPS,    35.37 MiB/s
[2024-12-13T19:12:18.724Z] [2024-12-13 19:12:15.695536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:62536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.900  [2024-12-13 19:12:15.695637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:27:46.900  [2024-12-13 19:12:15.695672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:62552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.900  [2024-12-13 19:12:15.695691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:27:46.900  [2024-12-13 19:12:15.695712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:62568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.900  [2024-12-13 19:12:15.695727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:27:46.900  [2024-12-13 19:12:15.695759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:62120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.900  [2024-12-13 19:12:15.695790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:46.900  [2024-12-13 19:12:15.695811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:62152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.900  [2024-12-13 19:12:15.695827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:27:46.900  [2024-12-13 19:12:15.695866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:62184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.900  [2024-12-13 19:12:15.695883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:27:46.900  [2024-12-13 19:12:15.695905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:62576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.900  [2024-12-13 19:12:15.695922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:46.900  [2024-12-13 19:12:15.695944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:62592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.900  [2024-12-13 19:12:15.695983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:46.900  [2024-12-13 19:12:15.696004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:62232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.900  [2024-12-13 19:12:15.696032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:46.900  [2024-12-13 19:12:15.696053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:62608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.900  [2024-12-13 19:12:15.696069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:27:46.900  [2024-12-13 19:12:15.696163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:62248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.900  [2024-12-13 19:12:15.696210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:27:46.900  [2024-12-13 19:12:15.696247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:62280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.900  [2024-12-13 19:12:15.696262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:27:46.900  [2024-12-13 19:12:15.696282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:62312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.900  [2024-12-13 19:12:15.696296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:27:46.900  [2024-12-13 19:12:15.696316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:62344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.900  [2024-12-13 19:12:15.696330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:27:46.900  [2024-12-13 19:12:15.696350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:62632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.900  [2024-12-13 19:12:15.696396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:27:46.900  [2024-12-13 19:12:15.696430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:62648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.901  [2024-12-13 19:12:15.696444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:27:46.901  [2024-12-13 19:12:15.696480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:62664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.901  [2024-12-13 19:12:15.696495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:27:46.901  [2024-12-13 19:12:15.696527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:62680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.901  [2024-12-13 19:12:15.696542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:27:46.901  [2024-12-13 19:12:15.696577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:62696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.901  [2024-12-13 19:12:15.696607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:27:46.901  [2024-12-13 19:12:15.696641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:61952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.901  [2024-12-13 19:12:15.696658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:27:46.901  [2024-12-13 19:12:15.696693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:61984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.901  [2024-12-13 19:12:15.696707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:27:46.901  [2024-12-13 19:12:15.696727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:62016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.901  [2024-12-13 19:12:15.696742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:27:46.901  [2024-12-13 19:12:15.696762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:62048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.901  [2024-12-13 19:12:15.696820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:27:46.901  [2024-12-13 19:12:15.696841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:62720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.901  [2024-12-13 19:12:15.696856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:27:46.901  [2024-12-13 19:12:15.696876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:62736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.901  [2024-12-13 19:12:15.696891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:27:46.901  [2024-12-13 19:12:15.696912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:62752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.901  [2024-12-13 19:12:15.696926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:27:46.901  [2024-12-13 19:12:15.696947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:62768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.901  [2024-12-13 19:12:15.696962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:27:46.901  [2024-12-13 19:12:15.697011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:62784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.901  [2024-12-13 19:12:15.697026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:27:46.901  [2024-12-13 19:12:15.697046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:62800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.901  [2024-12-13 19:12:15.697060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:27:46.901  [2024-12-13 19:12:15.697080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:62816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.901  [2024-12-13 19:12:15.697094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:27:46.901  [2024-12-13 19:12:15.697114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:62080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.901  [2024-12-13 19:12:15.697128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:27:46.901  [2024-12-13 19:12:15.697147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:62824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.901  [2024-12-13 19:12:15.697161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:27:46.901  [2024-12-13 19:12:15.697196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:62840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.901  [2024-12-13 19:12:15.697225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:27:46.901  [2024-12-13 19:12:15.698781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:62856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.901  [2024-12-13 19:12:15.698826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:27:46.901  [2024-12-13 19:12:15.698853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:62872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.901  [2024-12-13 19:12:15.698883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:27:46.901  [2024-12-13 19:12:15.698939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:62368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.901  [2024-12-13 19:12:15.698972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:27:46.901  [2024-12-13 19:12:15.698993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:62400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.901  [2024-12-13 19:12:15.699008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:27:46.901  [2024-12-13 19:12:15.699029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:62432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.901  [2024-12-13 19:12:15.699060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:27:46.901  [2024-12-13 19:12:15.699110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:62464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.901  [2024-12-13 19:12:15.699125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:27:46.901  [2024-12-13 19:12:15.699144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:62496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.901  [2024-12-13 19:12:15.699159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:46.901  [2024-12-13 19:12:15.699195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:62528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.901  [2024-12-13 19:12:15.699210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:46.901  [2024-12-13 19:12:15.699230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:62128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.901  [2024-12-13 19:12:15.699245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:27:46.901  [2024-12-13 19:12:15.699264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:62160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.901  [2024-12-13 19:12:15.699279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:27:46.901  [2024-12-13 19:12:15.699315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:62192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.901  [2024-12-13 19:12:15.699331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:27:46.901  [2024-12-13 19:12:15.699353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:62224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.901  [2024-12-13 19:12:15.699368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:27:46.901  [2024-12-13 19:12:15.699405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:62880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.901  [2024-12-13 19:12:15.699420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:27:46.901  [2024-12-13 19:12:15.699442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:62896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.901  [2024-12-13 19:12:15.699485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:27:46.901  [2024-12-13 19:12:15.699536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:62912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.901  [2024-12-13 19:12:15.699568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:27:46.901  [2024-12-13 19:12:15.699619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:62256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.901  [2024-12-13 19:12:15.699637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:27:46.901  [2024-12-13 19:12:15.699684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:62288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.901  [2024-12-13 19:12:15.699700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:27:46.901  [2024-12-13 19:12:15.699720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:62320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.901  [2024-12-13 19:12:15.699750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:27:46.901  [2024-12-13 19:12:15.699801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:62928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.901  [2024-12-13 19:12:15.699816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:27:46.901  [2024-12-13 19:12:15.699836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.901  [2024-12-13 19:12:15.699850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:27:46.901  [2024-12-13 19:12:15.699870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:62960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.901  [2024-12-13 19:12:15.699885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:27:46.901  [2024-12-13 19:12:15.699905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:62560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.901  [2024-12-13 19:12:15.699920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:27:46.901  [2024-12-13 19:12:15.699957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:62600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.902  [2024-12-13 19:12:15.699972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:27:46.902  [2024-12-13 19:12:15.699992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:62968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.902  [2024-12-13 19:12:15.700007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:27:46.902  [2024-12-13 19:12:15.700029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:62984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.902  [2024-12-13 19:12:15.700044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:27:46.902  [2024-12-13 19:12:15.700065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:63000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.902  [2024-12-13 19:12:15.700079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:27:46.902  [2024-12-13 19:12:15.700124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:62376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.902  [2024-12-13 19:12:15.700140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:27:46.902  [2024-12-13 19:12:15.700175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:62408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.902  [2024-12-13 19:12:15.700207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:27:46.902  [2024-12-13 19:12:15.700228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:62440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.902  [2024-12-13 19:12:15.700242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:27:46.902  [2024-12-13 19:12:15.700263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:62472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.902  [2024-12-13 19:12:15.700278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:27:46.902  [2024-12-13 19:12:15.700298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:62504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.902  [2024-12-13 19:12:15.700313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:27:46.902  [2024-12-13 19:12:15.700362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:63008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.902  [2024-12-13 19:12:15.700388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:27:46.902  [2024-12-13 19:12:15.701534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:63024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.902  [2024-12-13 19:12:15.701561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:27:46.902  [2024-12-13 19:12:15.701587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:63040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.902  [2024-12-13 19:12:15.701603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:27:46.902  [2024-12-13 19:12:15.701623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:62552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.902  [2024-12-13 19:12:15.701637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:27:46.902  [2024-12-13 19:12:15.701657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:62120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.902  [2024-12-13 19:12:15.701671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:27:46.902  [2024-12-13 19:12:15.701690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:62184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.902  [2024-12-13 19:12:15.701713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:27:46.902  [2024-12-13 19:12:15.701754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:62592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.902  [2024-12-13 19:12:15.701786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:27:46.902  [2024-12-13 19:12:15.701819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:62608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.902  [2024-12-13 19:12:15.701854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:46.902  [2024-12-13 19:12:15.701878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:62280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.902  [2024-12-13 19:12:15.701895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:46.902  [2024-12-13 19:12:15.701916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:62344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.902  [2024-12-13 19:12:15.701932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:27:46.902  [2024-12-13 19:12:15.701969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:62648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.902  [2024-12-13 19:12:15.701985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:27:46.902  [2024-12-13 19:12:15.702018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:62680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.902  [2024-12-13 19:12:15.702034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:27:46.902  [2024-12-13 19:12:15.702070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:61952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.902  [2024-12-13 19:12:15.702085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:27:46.902  [2024-12-13 19:12:15.702105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:62016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.902  [2024-12-13 19:12:15.702120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:27:46.902  [2024-12-13 19:12:15.702156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:62720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.902  [2024-12-13 19:12:15.702171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:27:46.902  [2024-12-13 19:12:15.702216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:62752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.902  [2024-12-13 19:12:15.702248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:27:46.902  [2024-12-13 19:12:15.702269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:62784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.902  [2024-12-13 19:12:15.702284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:27:46.902  [2024-12-13 19:12:15.702310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:62816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.902  [2024-12-13 19:12:15.702326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:27:46.902  [2024-12-13 19:12:15.702391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:62824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.902  [2024-12-13 19:12:15.702441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:27:46.902  [2024-12-13 19:12:15.702978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:62624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.902  [2024-12-13 19:12:15.703015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:27:46.902  [2024-12-13 19:12:15.703042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:62656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.902  [2024-12-13 19:12:15.703059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:27:46.902  [2024-12-13 19:12:15.703105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:62688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.902  [2024-12-13 19:12:15.703119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:27:46.902  [2024-12-13 19:12:15.703154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:62712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.902  [2024-12-13 19:12:15.703168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:27:46.902  [2024-12-13 19:12:15.703188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:62744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.902  [2024-12-13 19:12:15.703220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:27:46.902  [2024-12-13 19:12:15.703250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:63048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.903  [2024-12-13 19:12:15.703264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:27:46.903  [2024-12-13 19:12:15.703300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:63064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.903  [2024-12-13 19:12:15.703326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:27:46.903  [2024-12-13 19:12:15.703360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:63080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.903  [2024-12-13 19:12:15.703375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:27:46.903  [2024-12-13 19:12:15.703394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:63096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.903  [2024-12-13 19:12:15.703408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:27:46.903  [2024-12-13 19:12:15.703445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:62776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.903  [2024-12-13 19:12:15.703462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:27:46.903  [2024-12-13 19:12:15.703489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:62808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.903  [2024-12-13 19:12:15.703503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:27:46.903  [2024-12-13 19:12:15.703522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:62848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.903  [2024-12-13 19:12:15.703552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:27:46.903  [2024-12-13 19:12:15.703573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:62872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.903  [2024-12-13 19:12:15.703598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:27:46.903  [2024-12-13 19:12:15.703660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:62400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.903  [2024-12-13 19:12:15.703691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:27:46.903  [2024-12-13 19:12:15.703719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:62464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.903  [2024-12-13 19:12:15.703736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:27:46.903  [2024-12-13 19:12:15.703772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:62528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.903  [2024-12-13 19:12:15.703788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:27:46.903  [2024-12-13 19:12:15.703832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:62160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.903  [2024-12-13 19:12:15.703848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:27:46.903  [2024-12-13 19:12:15.703896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:62224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.903  [2024-12-13 19:12:15.703923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:27:46.903  [2024-12-13 19:12:15.703945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:62896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.903  [2024-12-13 19:12:15.703961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:27:46.903  [2024-12-13 19:12:15.703997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:62256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.903  [2024-12-13 19:12:15.704027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:27:46.903  [2024-12-13 19:12:15.704062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:62320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.903  [2024-12-13 19:12:15.704077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:46.903  [2024-12-13 19:12:15.704097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:62944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.903  [2024-12-13 19:12:15.704111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:46.903  [2024-12-13 19:12:15.704131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:62560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.903  [2024-12-13 19:12:15.704146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:27:46.903  [2024-12-13 19:12:15.704197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:62968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.903  [2024-12-13 19:12:15.704211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:27:46.903  [2024-12-13 19:12:15.704231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.903  [2024-12-13 19:12:15.704254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:27:46.903  [2024-12-13 19:12:15.704283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:62408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.903  [2024-12-13 19:12:15.704298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:27:46.903  [2024-12-13 19:12:15.704318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:62472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.903  [2024-12-13 19:12:15.704333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:27:46.903  [2024-12-13 19:12:15.704353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:63008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.903  [2024-12-13 19:12:15.704368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:27:46.903  [2024-12-13 19:12:15.705936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:62904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.903  [2024-12-13 19:12:15.705967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:27:46.903  [2024-12-13 19:12:15.705995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:63112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.903  [2024-12-13 19:12:15.706013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:27:46.903  [2024-12-13 19:12:15.706049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:63128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.903  [2024-12-13 19:12:15.706066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:27:46.903  [2024-12-13 19:12:15.706088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:62952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.903  [2024-12-13 19:12:15.706119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:27:46.903  [2024-12-13 19:12:15.706158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:62992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.903  [2024-12-13 19:12:15.706174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:27:46.903  [2024-12-13 19:12:15.706211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:63040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.903  [2024-12-13 19:12:15.706242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:27:46.903  [2024-12-13 19:12:15.706265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:62120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.903  [2024-12-13 19:12:15.706281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:27:46.903  [2024-12-13 19:12:15.706360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:62592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.903  [2024-12-13 19:12:15.706394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:27:46.903  [2024-12-13 19:12:15.706415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:62280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.903  [2024-12-13 19:12:15.706431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:27:46.903  [2024-12-13 19:12:15.706464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:62648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.903  [2024-12-13 19:12:15.706481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:27:46.903  [2024-12-13 19:12:15.706502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:61952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.903  [2024-12-13 19:12:15.706517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:27:46.903  [2024-12-13 19:12:15.706553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:62720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.903  [2024-12-13 19:12:15.706568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:27:46.903  [2024-12-13 19:12:15.706587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:62784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.903  [2024-12-13 19:12:15.706602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:27:46.903  [2024-12-13 19:12:15.706622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:62824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.903  [2024-12-13 19:12:15.706637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:27:46.903  [2024-12-13 19:12:15.706657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:63032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.903  [2024-12-13 19:12:15.706672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:27:46.903  [2024-12-13 19:12:15.706692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:62568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.903  [2024-12-13 19:12:15.706720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:27:46.903  [2024-12-13 19:12:15.706771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:62656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.903  [2024-12-13 19:12:15.706800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:27:46.903  [2024-12-13 19:12:15.706849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:62712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.904  [2024-12-13 19:12:15.706864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:27:46.904  [2024-12-13 19:12:15.706890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:63048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.904  [2024-12-13 19:12:15.706905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:27:46.904  [2024-12-13 19:12:15.706925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.904  [2024-12-13 19:12:15.706940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:27:46.904  [2024-12-13 19:12:15.706971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:62776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.904  [2024-12-13 19:12:15.706985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:46.904  [2024-12-13 19:12:15.707005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:62848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.904  [2024-12-13 19:12:15.707063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:27:46.904  [2024-12-13 19:12:15.707086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:62400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.904  [2024-12-13 19:12:15.707104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:27:46.904  [2024-12-13 19:12:15.707125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:62528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.904  [2024-12-13 19:12:15.707152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:46.904  [2024-12-13 19:12:15.707172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:62224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.904  [2024-12-13 19:12:15.707188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:46.904  [2024-12-13 19:12:15.707222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:62256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.904  [2024-12-13 19:12:15.707251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:46.904  [2024-12-13 19:12:15.707270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:62944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.904  [2024-12-13 19:12:15.707284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:27:46.904  [2024-12-13 19:12:15.707304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:62968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.904  [2024-12-13 19:12:15.707318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:27:46.904  [2024-12-13 19:12:15.707337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:62408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.904  [2024-12-13 19:12:15.707368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:27:46.904  [2024-12-13 19:12:15.707388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:63008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.904  [2024-12-13 19:12:15.707403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:27:46.904  [2024-12-13 19:12:15.709691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:62664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.904  [2024-12-13 19:12:15.709752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:27:46.904  [2024-12-13 19:12:15.709780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:62736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.904  [2024-12-13 19:12:15.709798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:27:46.904  [2024-12-13 19:12:15.709819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:62800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.904  [2024-12-13 19:12:15.709835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:27:46.904  [2024-12-13 19:12:15.709856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:63136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.904  [2024-12-13 19:12:15.709887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:27:46.904  [2024-12-13 19:12:15.709927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:63152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.904  [2024-12-13 19:12:15.709944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:27:46.904  [2024-12-13 19:12:15.709980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:63168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.904  [2024-12-13 19:12:15.709995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:27:46.904  [2024-12-13 19:12:15.710032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:63184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.904  [2024-12-13 19:12:15.710047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:27:46.904  [2024-12-13 19:12:15.710068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:63200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.904  [2024-12-13 19:12:15.710083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:27:46.904  [2024-12-13 19:12:15.710121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:63216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.904  [2024-12-13 19:12:15.710147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:27:46.904  [2024-12-13 19:12:15.710185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:63232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.904  [2024-12-13 19:12:15.710201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:27:46.904  [2024-12-13 19:12:15.710245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:63248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.904  [2024-12-13 19:12:15.710260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:27:46.904  [2024-12-13 19:12:15.710297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:63112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.904  [2024-12-13 19:12:15.710312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:27:46.904  [2024-12-13 19:12:15.710345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:62952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.904  [2024-12-13 19:12:15.710363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:27:46.904  [2024-12-13 19:12:15.710414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:63040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.904  [2024-12-13 19:12:15.710429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:27:46.904  [2024-12-13 19:12:15.710464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:62592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.904  [2024-12-13 19:12:15.710479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:27:46.904  [2024-12-13 19:12:15.710499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:62648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.904  [2024-12-13 19:12:15.710529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:27:46.904  [2024-12-13 19:12:15.710560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:62720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.904  [2024-12-13 19:12:15.710576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:27:46.904  [2024-12-13 19:12:15.710597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:62824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.904  [2024-12-13 19:12:15.710628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:27:46.904  [2024-12-13 19:12:15.710650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:62568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.904  [2024-12-13 19:12:15.710665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:27:46.904  [2024-12-13 19:12:15.710687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:62712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.904  [2024-12-13 19:12:15.710702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:27:46.904  [2024-12-13 19:12:15.710724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:63080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.904  [2024-12-13 19:12:15.710740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:27:46.904  [2024-12-13 19:12:15.710773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.904  [2024-12-13 19:12:15.710804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:27:46.904  [2024-12-13 19:12:15.710836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:62528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.904  [2024-12-13 19:12:15.710852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:27:46.904  [2024-12-13 19:12:15.710874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:62256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.904  [2024-12-13 19:12:15.710889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:27:46.904  [2024-12-13 19:12:15.710911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:62968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.904  [2024-12-13 19:12:15.710929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:27:46.904  [2024-12-13 19:12:15.710951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:63008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.904  [2024-12-13 19:12:15.710967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:27:46.904  [2024-12-13 19:12:15.710989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:63072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.904  [2024-12-13 19:12:15.711004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:46.904  [2024-12-13 19:12:15.711027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:63104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.905  [2024-12-13 19:12:15.711043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:46.905  [2024-12-13 19:12:15.711781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:62880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.905  [2024-12-13 19:12:15.711809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:27:46.905  [2024-12-13 19:12:15.711853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:63264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.905  [2024-12-13 19:12:15.711870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:27:46.905  [2024-12-13 19:12:15.711924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:63280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.905  [2024-12-13 19:12:15.711940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:27:46.905  [2024-12-13 19:12:15.711961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:63296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.905  [2024-12-13 19:12:15.711977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:27:46.905  [2024-12-13 19:12:15.711999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:63312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.905  [2024-12-13 19:12:15.712015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:27:46.905  [2024-12-13 19:12:15.712036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:63328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.905  [2024-12-13 19:12:15.712052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:27:46.905  [2024-12-13 19:12:15.712081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:63344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.905  [2024-12-13 19:12:15.712097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:27:46.905  [2024-12-13 19:12:15.712119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:63360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.905  [2024-12-13 19:12:15.712135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:27:46.905  [2024-12-13 19:12:15.712157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:63376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.905  [2024-12-13 19:12:15.712173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:27:46.905  [2024-12-13 19:12:15.712209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:62928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.905  [2024-12-13 19:12:15.712241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:27:46.905  [2024-12-13 19:12:15.712303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:62984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.905  [2024-12-13 19:12:15.712318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:27:46.905  [2024-12-13 19:12:15.714801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:63024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.905  [2024-12-13 19:12:15.714829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:27:46.905  [2024-12-13 19:12:15.714855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:62608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.905  [2024-12-13 19:12:15.714885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:27:46.905  [2024-12-13 19:12:15.714908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:62752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.905  [2024-12-13 19:12:15.714924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:27:46.905  [2024-12-13 19:12:15.714959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:62736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.905  [2024-12-13 19:12:15.714990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:27:46.905  [2024-12-13 19:12:15.715011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:63136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.905  [2024-12-13 19:12:15.715026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:27:46.905  [2024-12-13 19:12:15.715047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:63168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.905  [2024-12-13 19:12:15.715062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:27:46.905  [2024-12-13 19:12:15.715099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:63200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.905  [2024-12-13 19:12:15.715114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:27:46.905  [2024-12-13 19:12:15.715136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:63232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.905  [2024-12-13 19:12:15.715168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:27:46.905  [2024-12-13 19:12:15.715189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:63112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.905  [2024-12-13 19:12:15.715205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:27:46.905  [2024-12-13 19:12:15.715226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.905  [2024-12-13 19:12:15.715262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:27:46.905  [2024-12-13 19:12:15.715313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:62648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.905  [2024-12-13 19:12:15.715329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:27:46.905  [2024-12-13 19:12:15.715365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:62824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.905  [2024-12-13 19:12:15.715381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:27:46.905  [2024-12-13 19:12:15.715403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:62712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.905  [2024-12-13 19:12:15.715419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:27:46.905  [2024-12-13 19:12:15.715441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:62848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.905  [2024-12-13 19:12:15.715481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:27:46.905  [2024-12-13 19:12:15.715508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:62256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.905  [2024-12-13 19:12:15.715548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:27:46.905  [2024-12-13 19:12:15.715570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:63008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.905  [2024-12-13 19:12:15.715587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:27:46.905  [2024-12-13 19:12:15.715609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:63104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.905  [2024-12-13 19:12:15.715624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:27:46.905  [2024-12-13 19:12:15.715675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:63096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.905  [2024-12-13 19:12:15.715706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:27:46.905  [2024-12-13 19:12:15.715726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:62896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.905  [2024-12-13 19:12:15.715770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:27:46.905  [2024-12-13 19:12:15.715790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:63264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.905  [2024-12-13 19:12:15.715805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:46.905  [2024-12-13 19:12:15.715836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:63296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.905  [2024-12-13 19:12:15.715851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:46.905  [2024-12-13 19:12:15.715871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:63328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.905  [2024-12-13 19:12:15.715896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:27:46.905  [2024-12-13 19:12:15.715916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:63360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.905  [2024-12-13 19:12:15.715931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:27:46.905  [2024-12-13 19:12:15.715952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:62928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.905  [2024-12-13 19:12:15.715968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:27:46.905  [2024-12-13 19:12:15.718886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:63144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.905  [2024-12-13 19:12:15.718917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:27:46.905  [2024-12-13 19:12:15.718962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:63176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.905  [2024-12-13 19:12:15.718979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:27:46.905  [2024-12-13 19:12:15.719039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:63208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.905  [2024-12-13 19:12:15.719056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:27:46.905  [2024-12-13 19:12:15.719089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:63240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.905  [2024-12-13 19:12:15.719119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:27:46.905  [2024-12-13 19:12:15.719141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:63400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.906  [2024-12-13 19:12:15.719174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:27:46.906  [2024-12-13 19:12:15.719225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:63416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.906  [2024-12-13 19:12:15.719243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:27:46.906  [2024-12-13 19:12:15.719275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:63432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.906  [2024-12-13 19:12:15.719306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:27:46.906  [2024-12-13 19:12:15.719328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:63448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.906  [2024-12-13 19:12:15.719342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:27:46.906  [2024-12-13 19:12:15.719364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:63464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.906  [2024-12-13 19:12:15.719378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:27:46.906  [2024-12-13 19:12:15.719399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:63480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.906  [2024-12-13 19:12:15.719414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:27:46.906  [2024-12-13 19:12:15.719435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:63496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.906  [2024-12-13 19:12:15.719450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:27:46.906  [2024-12-13 19:12:15.719483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:63512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.906  [2024-12-13 19:12:15.719501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:27:46.906  [2024-12-13 19:12:15.719523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:63528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.906  [2024-12-13 19:12:15.719539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:27:46.906  [2024-12-13 19:12:15.719602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:63544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.906  [2024-12-13 19:12:15.719624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:27:46.906  [2024-12-13 19:12:15.719665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:63560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.906  [2024-12-13 19:12:15.719681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:27:46.906  [2024-12-13 19:12:15.719702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:63576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.906  [2024-12-13 19:12:15.719732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:27:46.906  [2024-12-13 19:12:15.719751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:63592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.906  [2024-12-13 19:12:15.719765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:27:46.906  [2024-12-13 19:12:15.719785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:63608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.906  [2024-12-13 19:12:15.719800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:27:46.906  [2024-12-13 19:12:15.719819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:63624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.906  [2024-12-13 19:12:15.719833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:27:46.906  [2024-12-13 19:12:15.719885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:63640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.906  [2024-12-13 19:12:15.719901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:27:46.906  [2024-12-13 19:12:15.719921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:63656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.906  [2024-12-13 19:12:15.719936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:27:46.906  [2024-12-13 19:12:15.719957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:63672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.906  [2024-12-13 19:12:15.719984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:27:46.906  [2024-12-13 19:12:15.720005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:63256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.906  [2024-12-13 19:12:15.720036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:27:46.906  [2024-12-13 19:12:15.720057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:62608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.906  [2024-12-13 19:12:15.720072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:27:46.906  [2024-12-13 19:12:15.720093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:62736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.906  [2024-12-13 19:12:15.720108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:27:46.906  [2024-12-13 19:12:15.720129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:63168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.906  [2024-12-13 19:12:15.720143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:27:46.906  [2024-12-13 19:12:15.720164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:63232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.906  [2024-12-13 19:12:15.720187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:27:46.906  [2024-12-13 19:12:15.720224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:63040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.906  [2024-12-13 19:12:15.720239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:46.906  [2024-12-13 19:12:15.720259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:62824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.906  [2024-12-13 19:12:15.720273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:46.906  [2024-12-13 19:12:15.720293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:62848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.906  [2024-12-13 19:12:15.720308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:27:46.906  [2024-12-13 19:12:15.720329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:63008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.906  [2024-12-13 19:12:15.720344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:27:46.906  [2024-12-13 19:12:15.720376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:63096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.906  [2024-12-13 19:12:15.720393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:27:46.906  [2024-12-13 19:12:15.720430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:63264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.906  [2024-12-13 19:12:15.720445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:27:46.906  [2024-12-13 19:12:15.720465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:63328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.906  [2024-12-13 19:12:15.720496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:27:46.906  [2024-12-13 19:12:15.720518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:62928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.906  [2024-12-13 19:12:15.720533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:27:46.906  [2024-12-13 19:12:15.720554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:63048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.906  [2024-12-13 19:12:15.720570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:27:46.906  [2024-12-13 19:12:15.720591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:63688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.906  [2024-12-13 19:12:15.720606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:27:46.906  [2024-12-13 19:12:15.720650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.906  [2024-12-13 19:12:15.720677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:27:46.906  [2024-12-13 19:12:15.720698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:63272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.907  [2024-12-13 19:12:15.720734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:27:46.907  [2024-12-13 19:12:15.720771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:63304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.907  [2024-12-13 19:12:15.720787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:27:46.907  [2024-12-13 19:12:15.720808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:63336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.907  [2024-12-13 19:12:15.720823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:27:46.907  [2024-12-13 19:12:15.720859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:63368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.907  [2024-12-13 19:12:15.720873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:27:46.907  [2024-12-13 19:12:15.723061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:63152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.907  [2024-12-13 19:12:15.723092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:27:46.907  [2024-12-13 19:12:15.723119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:63216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.907  [2024-12-13 19:12:15.723136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:27:46.907  [2024-12-13 19:12:15.723157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:62592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.907  [2024-12-13 19:12:15.723188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:27:46.907  [2024-12-13 19:12:15.723209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:63080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.907  [2024-12-13 19:12:15.723224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:27:46.907  [2024-12-13 19:12:15.723260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:63728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.907  [2024-12-13 19:12:15.723306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:27:46.907  [2024-12-13 19:12:15.723328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:63744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.907  [2024-12-13 19:12:15.723343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:27:46.907  [2024-12-13 19:12:15.723365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:63760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.907  [2024-12-13 19:12:15.723429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:27:46.907  [2024-12-13 19:12:15.723453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:63776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.907  [2024-12-13 19:12:15.723469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:27:46.907  [2024-12-13 19:12:15.723505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:63792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.907  [2024-12-13 19:12:15.723520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:27:46.907  [2024-12-13 19:12:15.723558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:63808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.907  [2024-12-13 19:12:15.723593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:27:46.907  [2024-12-13 19:12:15.723629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:63824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.907  [2024-12-13 19:12:15.723682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:27:46.907  [2024-12-13 19:12:15.723702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:63840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.907  [2024-12-13 19:12:15.723718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:27:46.907  [2024-12-13 19:12:15.723738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:63856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.907  [2024-12-13 19:12:15.723753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:27:46.907  [2024-12-13 19:12:15.723772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:63872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.907  [2024-12-13 19:12:15.723786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:46.907  [2024-12-13 19:12:15.723806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:63888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.907  [2024-12-13 19:12:15.723821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:27:46.907  [2024-12-13 19:12:15.723840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:63904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.907  [2024-12-13 19:12:15.723855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:27:46.907  [2024-12-13 19:12:15.723874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:62968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.907  [2024-12-13 19:12:15.723889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:46.907  [2024-12-13 19:12:15.723908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:63176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.907  [2024-12-13 19:12:15.723923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:46.907  [2024-12-13 19:12:15.723942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:63240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.907  [2024-12-13 19:12:15.723956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:46.907  [2024-12-13 19:12:15.723987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:63416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.907  [2024-12-13 19:12:15.724017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:27:46.907  [2024-12-13 19:12:15.724037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:63448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.907  [2024-12-13 19:12:15.724052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:27:46.907  [2024-12-13 19:12:15.724095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.907  [2024-12-13 19:12:15.724112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:27:46.907  [2024-12-13 19:12:15.724133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:63512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.907  [2024-12-13 19:12:15.724149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:27:46.907  [2024-12-13 19:12:15.724185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:63544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.907  [2024-12-13 19:12:15.724200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:27:46.907  [2024-12-13 19:12:15.724220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:63576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.907  [2024-12-13 19:12:15.724234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:27:46.907  [2024-12-13 19:12:15.724282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:63608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.907  [2024-12-13 19:12:15.724296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:27:46.907  [2024-12-13 19:12:15.724317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:63640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.907  [2024-12-13 19:12:15.724343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:27:46.907  [2024-12-13 19:12:15.724364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:63672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.907  [2024-12-13 19:12:15.724379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:27:46.907  [2024-12-13 19:12:15.724414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:62608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.907  [2024-12-13 19:12:15.724445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:27:46.907  [2024-12-13 19:12:15.724480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:63168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.907  [2024-12-13 19:12:15.724495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:27:46.907  [2024-12-13 19:12:15.724514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:63040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.907  [2024-12-13 19:12:15.724529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:27:46.907  [2024-12-13 19:12:15.724548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:62848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.907  [2024-12-13 19:12:15.724562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:27:46.907  [2024-12-13 19:12:15.724598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:63096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.907  [2024-12-13 19:12:15.724613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:27:46.907  [2024-12-13 19:12:15.725500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:63328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.907  [2024-12-13 19:12:15.725570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:27:46.907  [2024-12-13 19:12:15.725614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:63048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.907  [2024-12-13 19:12:15.725642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:27:46.907  [2024-12-13 19:12:15.725662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:63704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.907  [2024-12-13 19:12:15.725677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:27:46.907  [2024-12-13 19:12:15.725697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:63304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.908  [2024-12-13 19:12:15.725738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:27:46.908  [2024-12-13 19:12:15.725778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:63368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.908  [2024-12-13 19:12:15.725795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:27:46.908  [2024-12-13 19:12:15.725817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:63312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.908  [2024-12-13 19:12:15.725833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:27:46.908  [2024-12-13 19:12:15.725855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:63376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.908  [2024-12-13 19:12:15.725871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:27:46.908  [2024-12-13 19:12:15.725893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:63408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.908  [2024-12-13 19:12:15.725909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:27:46.908  [2024-12-13 19:12:15.725930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:63440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.908  [2024-12-13 19:12:15.725946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:27:46.908  [2024-12-13 19:12:15.725968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:63472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.908  [2024-12-13 19:12:15.725984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:27:46.908  [2024-12-13 19:12:15.726021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:63504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.908  [2024-12-13 19:12:15.726037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:27:46.908  [2024-12-13 19:12:15.726058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:63536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.908  [2024-12-13 19:12:15.726074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:27:46.908  [2024-12-13 19:12:15.726096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:63568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.908  [2024-12-13 19:12:15.726121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:27:46.908  [2024-12-13 19:12:15.726144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:63600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.908  [2024-12-13 19:12:15.726160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:27:46.908  [2024-12-13 19:12:15.726192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:63920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.908  [2024-12-13 19:12:15.726233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:27:46.908  [2024-12-13 19:12:15.726252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:63936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.908  [2024-12-13 19:12:15.726267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:27:46.908  [2024-12-13 19:12:15.726301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:63952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.908  [2024-12-13 19:12:15.726327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:46.908  [2024-12-13 19:12:15.726382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:63968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.908  [2024-12-13 19:12:15.726397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:46.908  [2024-12-13 19:12:15.726416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:63984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.908  [2024-12-13 19:12:15.726431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:27:46.908  [2024-12-13 19:12:15.726464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:63648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.908  [2024-12-13 19:12:15.726478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:27:46.908  [2024-12-13 19:12:15.726498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:63680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.908  [2024-12-13 19:12:15.726513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:27:46.908  [2024-12-13 19:12:15.728189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:63200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.908  [2024-12-13 19:12:15.728219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:27:46.908  [2024-12-13 19:12:15.728276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:62648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.908  [2024-12-13 19:12:15.728308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:27:46.908  [2024-12-13 19:12:15.728344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:63360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.908  [2024-12-13 19:12:15.728360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:27:46.908  [2024-12-13 19:12:15.728380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:63712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.908  [2024-12-13 19:12:15.728407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:27:46.908  [2024-12-13 19:12:15.728447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:63216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.908  [2024-12-13 19:12:15.728465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:27:46.908  [2024-12-13 19:12:15.728487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:63080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.908  [2024-12-13 19:12:15.728503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:27:46.908  [2024-12-13 19:12:15.728524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:63744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.908  [2024-12-13 19:12:15.728556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:27:46.908  [2024-12-13 19:12:15.728577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:63776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.908  [2024-12-13 19:12:15.728597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:27:46.908  [2024-12-13 19:12:15.728632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:63808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.908  [2024-12-13 19:12:15.728664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:27:46.908  [2024-12-13 19:12:15.728685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:63840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.908  [2024-12-13 19:12:15.728700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:27:46.908  [2024-12-13 19:12:15.728736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:63872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.908  [2024-12-13 19:12:15.728751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:27:46.908  [2024-12-13 19:12:15.728772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:63904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.908  [2024-12-13 19:12:15.728803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:27:46.908  [2024-12-13 19:12:15.728825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:63176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.908  [2024-12-13 19:12:15.728840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:27:46.908  [2024-12-13 19:12:15.728862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:63416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.908  [2024-12-13 19:12:15.728877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:27:46.908  [2024-12-13 19:12:15.728898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:63480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.908  [2024-12-13 19:12:15.728913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:27:46.908  [2024-12-13 19:12:15.728934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.908  [2024-12-13 19:12:15.728966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:27:46.908  [2024-12-13 19:12:15.729012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:63608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.908  [2024-12-13 19:12:15.729029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:27:46.908  [2024-12-13 19:12:15.729050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:63672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.908  [2024-12-13 19:12:15.729066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:27:46.908  [2024-12-13 19:12:15.729088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:63168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.908  [2024-12-13 19:12:15.729104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:27:46.908  [2024-12-13 19:12:15.729139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:62848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.908  [2024-12-13 19:12:15.729155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:27:46.908  [2024-12-13 19:12:15.729188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:63720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.908  [2024-12-13 19:12:15.729203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:27:46.908  [2024-12-13 19:12:15.729224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:63752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.908  [2024-12-13 19:12:15.729255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:27:46.909  [2024-12-13 19:12:15.729277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:63784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.909  [2024-12-13 19:12:15.729294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:27:46.909  [2024-12-13 19:12:15.729316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:63816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.909  [2024-12-13 19:12:15.729333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:27:46.909  [2024-12-13 19:12:15.729355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:63848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.909  [2024-12-13 19:12:15.729383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:27:46.909  [2024-12-13 19:12:15.729406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:63880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.909  [2024-12-13 19:12:15.729453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:27:46.909  [2024-12-13 19:12:15.729475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:63912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.909  [2024-12-13 19:12:15.729491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:27:46.909  [2024-12-13 19:12:15.729513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:63048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.909  [2024-12-13 19:12:15.729528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:46.909  [2024-12-13 19:12:15.729589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:63304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.909  [2024-12-13 19:12:15.729621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:46.909  [2024-12-13 19:12:15.729653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:63312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.909  [2024-12-13 19:12:15.729669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:27:46.909  [2024-12-13 19:12:15.729689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:63408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.909  [2024-12-13 19:12:15.729729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:27:46.909  [2024-12-13 19:12:15.729766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:63472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.909  [2024-12-13 19:12:15.729783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:27:46.909  [2024-12-13 19:12:15.729805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:63536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.909  [2024-12-13 19:12:15.729821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:27:46.909  [2024-12-13 19:12:15.729843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:63600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.909  [2024-12-13 19:12:15.729858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:27:46.909  [2024-12-13 19:12:15.729881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:63936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.909  [2024-12-13 19:12:15.729897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:27:46.909  [2024-12-13 19:12:15.729919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:63968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.909  [2024-12-13 19:12:15.729935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:27:46.909  [2024-12-13 19:12:15.729957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:63648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.909  [2024-12-13 19:12:15.729973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:27:46.909  [2024-12-13 19:12:15.731607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:63400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.909  [2024-12-13 19:12:15.731638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:27:46.909  [2024-12-13 19:12:15.731679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:63464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.909  [2024-12-13 19:12:15.731697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:27:46.909  [2024-12-13 19:12:15.731721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:63528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.909  [2024-12-13 19:12:15.731738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:27:46.909  [2024-12-13 19:12:15.731760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:63592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.909  [2024-12-13 19:12:15.731790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:27:46.909  [2024-12-13 19:12:15.731814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:63656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.909  [2024-12-13 19:12:15.731831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:27:46.909  [2024-12-13 19:12:15.731853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:63992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.909  [2024-12-13 19:12:15.731869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:27:46.909  [2024-12-13 19:12:15.731891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:64008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.909  [2024-12-13 19:12:15.731906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:27:46.909  [2024-12-13 19:12:15.731929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:64024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.909  [2024-12-13 19:12:15.731944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:27:46.909  [2024-12-13 19:12:15.731966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:64040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.909  [2024-12-13 19:12:15.731982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:27:46.909  [2024-12-13 19:12:15.732004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:64056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.909  [2024-12-13 19:12:15.732019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:27:46.909  [2024-12-13 19:12:15.732041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:64072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.909  [2024-12-13 19:12:15.732057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:27:46.909  [2024-12-13 19:12:15.732094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:64088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.909  [2024-12-13 19:12:15.732111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:27:46.909  [2024-12-13 19:12:15.732685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:64104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.909  [2024-12-13 19:12:15.732713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:27:46.909  [2024-12-13 19:12:15.732739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:62824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.909  [2024-12-13 19:12:15.732759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:27:46.909  [2024-12-13 19:12:15.732780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:63264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.909  [2024-12-13 19:12:15.732795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:27:46.909  [2024-12-13 19:12:15.732816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:62648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.909  [2024-12-13 19:12:15.732843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:27:46.909  [2024-12-13 19:12:15.732882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:63712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.909  [2024-12-13 19:12:15.732913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:27:46.909  [2024-12-13 19:12:15.732948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:63080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.909  [2024-12-13 19:12:15.732964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:27:46.909  [2024-12-13 19:12:15.733000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:63776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.909  [2024-12-13 19:12:15.733016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:27:46.909  [2024-12-13 19:12:15.733036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:63840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.909  [2024-12-13 19:12:15.733051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:27:46.909  [2024-12-13 19:12:15.733103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:63904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.909  [2024-12-13 19:12:15.733118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:27:46.909  [2024-12-13 19:12:15.733139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:63416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.909  [2024-12-13 19:12:15.733170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:27:46.909  [2024-12-13 19:12:15.733203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:63544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.909  [2024-12-13 19:12:15.733218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:46.909  [2024-12-13 19:12:15.733239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:63672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.909  [2024-12-13 19:12:15.733265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:46.909  [2024-12-13 19:12:15.733301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:62848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.909  [2024-12-13 19:12:15.733346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:27:46.910  [2024-12-13 19:12:15.733383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:63752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.910  [2024-12-13 19:12:15.733399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:27:46.910  [2024-12-13 19:12:15.733420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:63816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.910  [2024-12-13 19:12:15.733450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:27:46.910  [2024-12-13 19:12:15.733502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:63880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.910  [2024-12-13 19:12:15.733517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:27:46.910  [2024-12-13 19:12:15.733546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:63048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.910  [2024-12-13 19:12:15.733563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:27:46.910  [2024-12-13 19:12:15.733599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:63312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.910  [2024-12-13 19:12:15.733618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:27:46.910  [2024-12-13 19:12:15.733655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:63472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.910  [2024-12-13 19:12:15.733671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:27:46.910  [2024-12-13 19:12:15.733692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:63600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.910  [2024-12-13 19:12:15.733734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:27:46.910  [2024-12-13 19:12:15.733759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:63968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.910  [2024-12-13 19:12:15.733775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:27:46.910  [2024-12-13 19:12:15.733798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:63928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.910  [2024-12-13 19:12:15.733814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:27:46.910  [2024-12-13 19:12:15.733836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:63960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.910  [2024-12-13 19:12:15.733852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:27:46.910  [2024-12-13 19:12:15.733874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:64120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.910  [2024-12-13 19:12:15.733889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:27:46.910  [2024-12-13 19:12:15.733911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.910  [2024-12-13 19:12:15.733927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:27:46.910  [2024-12-13 19:12:15.733949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:63728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.910  [2024-12-13 19:12:15.733965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:27:46.910  [2024-12-13 19:12:15.733987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:63792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.910  [2024-12-13 19:12:15.734003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:27:46.910  [2024-12-13 19:12:15.734026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:63856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.910  [2024-12-13 19:12:15.734050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:27:46.910  [2024-12-13 19:12:15.736797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:63448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.910  [2024-12-13 19:12:15.736826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:27:46.910  [2024-12-13 19:12:15.736853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:63576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.910  [2024-12-13 19:12:15.736870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:27:46.910  [2024-12-13 19:12:15.736891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:63040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.910  [2024-12-13 19:12:15.736907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:27:46.910  [2024-12-13 19:12:15.736944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:63464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.910  [2024-12-13 19:12:15.736959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:27:46.910  [2024-12-13 19:12:15.736981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:63592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.910  [2024-12-13 19:12:15.736996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:27:46.910  [2024-12-13 19:12:15.737018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:63992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.910  [2024-12-13 19:12:15.737034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:27:46.910  [2024-12-13 19:12:15.737055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:64024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.910  [2024-12-13 19:12:15.737071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:27:46.910  [2024-12-13 19:12:15.737092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:64056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.910  [2024-12-13 19:12:15.737137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:27:46.910  [2024-12-13 19:12:15.737158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:64088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.910  [2024-12-13 19:12:15.737172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:27:46.910  [2024-12-13 19:12:15.737193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:63704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.910  [2024-12-13 19:12:15.737207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:27:46.910  [2024-12-13 19:12:15.737228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:63952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.910  [2024-12-13 19:12:15.737242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:46.910  [2024-12-13 19:12:15.737262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:62824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.910  [2024-12-13 19:12:15.737277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:27:46.910  [2024-12-13 19:12:15.737298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:62648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.910  [2024-12-13 19:12:15.737326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:27:46.910  [2024-12-13 19:12:15.737364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:63080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.910  [2024-12-13 19:12:15.737380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:46.910  [2024-12-13 19:12:15.737400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.910  [2024-12-13 19:12:15.737415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:46.910  [2024-12-13 19:12:15.737436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:63416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.910  [2024-12-13 19:12:15.737467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:46.910  [2024-12-13 19:12:15.737488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:63672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.910  [2024-12-13 19:12:15.737517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:27:46.910  [2024-12-13 19:12:15.737553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:63752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.910  [2024-12-13 19:12:15.737568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:27:46.910  [2024-12-13 19:12:15.737605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:63880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.910  [2024-12-13 19:12:15.737621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:27:46.910  [2024-12-13 19:12:15.737642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:63312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.910  [2024-12-13 19:12:15.737670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:27:46.910  [2024-12-13 19:12:15.737691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:63600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.910  [2024-12-13 19:12:15.737732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:27:46.910  [2024-12-13 19:12:15.737758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:63928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.910  [2024-12-13 19:12:15.737774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:27:46.910  [2024-12-13 19:12:15.737797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:64120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.910  [2024-12-13 19:12:15.737813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:27:46.910  [2024-12-13 19:12:15.737835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:63728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.910  [2024-12-13 19:12:15.737851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:27:46.910  [2024-12-13 19:12:15.737874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:63856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.910  [2024-12-13 19:12:15.737900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:27:46.911  [2024-12-13 19:12:15.739033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:64016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.911  [2024-12-13 19:12:15.739071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:27:46.911  [2024-12-13 19:12:15.739115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:64048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.911  [2024-12-13 19:12:15.739132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:27:46.911  [2024-12-13 19:12:15.739155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:64080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.911  [2024-12-13 19:12:15.739186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:27:46.911  [2024-12-13 19:12:15.739237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:64152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.911  [2024-12-13 19:12:15.739252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:27:46.911  [2024-12-13 19:12:15.739287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:64168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.911  [2024-12-13 19:12:15.739306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:27:46.911  [2024-12-13 19:12:15.739326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:64184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.911  [2024-12-13 19:12:15.739342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:27:46.911  [2024-12-13 19:12:15.739374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:64200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.911  [2024-12-13 19:12:15.739400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:27:46.911  [2024-12-13 19:12:15.739449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:64216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.911  [2024-12-13 19:12:15.739464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:27:46.911  [2024-12-13 19:12:15.739497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:64232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.911  [2024-12-13 19:12:15.739512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:27:46.911  [2024-12-13 19:12:15.739550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:64248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.911  [2024-12-13 19:12:15.739580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:27:46.911  [2024-12-13 19:12:15.740312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:64264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.911  [2024-12-13 19:12:15.740357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:27:46.911  [2024-12-13 19:12:15.740414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:64280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.911  [2024-12-13 19:12:15.740430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:27:46.911  [2024-12-13 19:12:15.740482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:64296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.911  [2024-12-13 19:12:15.740499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:27:46.911  [2024-12-13 19:12:15.740520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:64312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.911  [2024-12-13 19:12:15.740536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:27:46.911  [2024-12-13 19:12:15.740567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:64328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.911  [2024-12-13 19:12:15.740583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:27:46.911  [2024-12-13 19:12:15.740618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:64344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.911  [2024-12-13 19:12:15.740635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:27:46.911  [2024-12-13 19:12:15.740656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:64360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.911  [2024-12-13 19:12:15.740671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:27:46.911  [2024-12-13 19:12:15.740691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:64376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.911  [2024-12-13 19:12:15.740706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:27:46.911  [2024-12-13 19:12:15.740727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:64392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.911  [2024-12-13 19:12:15.740742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:27:46.911  [2024-12-13 19:12:15.740777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:64408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.911  [2024-12-13 19:12:15.740792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:27:46.911  [2024-12-13 19:12:15.740828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:64424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.911  [2024-12-13 19:12:15.740843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:27:46.911  [2024-12-13 19:12:15.740865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:64440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.911  [2024-12-13 19:12:15.740880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:46.911  [2024-12-13 19:12:15.740932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:64456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.911  [2024-12-13 19:12:15.740963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:46.911  [2024-12-13 19:12:15.740985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:64472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.911  [2024-12-13 19:12:15.741017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:27:46.911  [2024-12-13 19:12:15.741048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:64112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.911  [2024-12-13 19:12:15.741065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:27:46.911  [2024-12-13 19:12:15.741088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:63576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.911  [2024-12-13 19:12:15.741103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:27:46.911  [2024-12-13 19:12:15.741125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:63464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.911  [2024-12-13 19:12:15.741141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:27:46.911  [2024-12-13 19:12:15.741163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:63992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.911  [2024-12-13 19:12:15.741179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:27:46.911  [2024-12-13 19:12:15.741201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:64056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.911  [2024-12-13 19:12:15.741217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:27:46.911  [2024-12-13 19:12:15.741239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:63704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.911  [2024-12-13 19:12:15.741255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:27:46.911  [2024-12-13 19:12:15.741292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:62824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.911  [2024-12-13 19:12:15.741322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:27:46.911  [2024-12-13 19:12:15.741343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:63080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.911  [2024-12-13 19:12:15.741358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:27:46.911  [2024-12-13 19:12:15.742041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:63416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.911  [2024-12-13 19:12:15.742082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:27:46.911  [2024-12-13 19:12:15.742113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:63752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.911  [2024-12-13 19:12:15.742131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:27:46.911  [2024-12-13 19:12:15.742154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:63312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.911  [2024-12-13 19:12:15.742170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:27:46.911  [2024-12-13 19:12:15.742221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.911  [2024-12-13 19:12:15.742236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:27:46.912  [2024-12-13 19:12:15.742270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:63728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.912  [2024-12-13 19:12:15.742329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:27:46.912  [2024-12-13 19:12:15.742351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:63744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.912  [2024-12-13 19:12:15.742366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:27:46.912  [2024-12-13 19:12:15.742397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:63872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.912  [2024-12-13 19:12:15.742411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:27:46.912  [2024-12-13 19:12:15.742430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:63608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.912  [2024-12-13 19:12:15.742460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:27:46.912  [2024-12-13 19:12:15.742494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:63936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.912  [2024-12-13 19:12:15.742508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:27:46.912  [2024-12-13 19:12:15.742527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:64144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.912  [2024-12-13 19:12:15.742541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:27:46.912  [2024-12-13 19:12:15.742576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:64048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.912  [2024-12-13 19:12:15.742590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:27:46.912  [2024-12-13 19:12:15.742610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:64152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.912  [2024-12-13 19:12:15.742625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:27:46.912  [2024-12-13 19:12:15.742645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:64184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.912  [2024-12-13 19:12:15.742659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:27:46.912  [2024-12-13 19:12:15.742678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:64216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.912  [2024-12-13 19:12:15.742693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:27:46.912  [2024-12-13 19:12:15.742713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:64248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.912  [2024-12-13 19:12:15.742728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:27:46.912  [2024-12-13 19:12:15.745028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:64040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.912  [2024-12-13 19:12:15.745079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:27:46.912  [2024-12-13 19:12:15.745120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:64488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.912  [2024-12-13 19:12:15.745148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:27:46.912  [2024-12-13 19:12:15.745170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:64504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.912  [2024-12-13 19:12:15.745185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:27:46.912  [2024-12-13 19:12:15.745204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:64520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.912  [2024-12-13 19:12:15.745218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:27:46.912  [2024-12-13 19:12:15.745250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:64536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.912  [2024-12-13 19:12:15.745268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:27:46.912  [2024-12-13 19:12:15.745320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:64280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.912  [2024-12-13 19:12:15.745336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:27:46.912  [2024-12-13 19:12:15.745356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:64312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.912  [2024-12-13 19:12:15.745387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:46.912  [2024-12-13 19:12:15.745409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:64344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.912  [2024-12-13 19:12:15.745424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:46.912  [2024-12-13 19:12:15.745446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:64376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.912  [2024-12-13 19:12:15.745473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:27:46.912  [2024-12-13 19:12:15.745508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:64408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.912  [2024-12-13 19:12:15.745522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:27:46.912  [2024-12-13 19:12:15.745543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:64440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.912  [2024-12-13 19:12:15.745586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:27:46.912  [2024-12-13 19:12:15.745608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:64472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.912  [2024-12-13 19:12:15.745623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:27:46.912  [2024-12-13 19:12:15.745644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:63576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.912  [2024-12-13 19:12:15.745660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:27:46.912  [2024-12-13 19:12:15.745681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:63992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.912  [2024-12-13 19:12:15.745758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:27:46.912  [2024-12-13 19:12:15.745785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:63704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.912  [2024-12-13 19:12:15.745802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:27:46.912  [2024-12-13 19:12:15.745824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:63080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.912  [2024-12-13 19:12:15.745840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:27:46.912  [2024-12-13 19:12:15.745862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:63904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.912  [2024-12-13 19:12:15.745878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:27:46.912  [2024-12-13 19:12:15.745900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:63968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.912  [2024-12-13 19:12:15.745916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:27:46.912  [2024-12-13 19:12:15.745938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:63752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.912  [2024-12-13 19:12:15.745954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:27:46.912  [2024-12-13 19:12:15.745976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:63928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.912  [2024-12-13 19:12:15.745992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:27:46.912  [2024-12-13 19:12:15.746014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:63744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.912  [2024-12-13 19:12:15.746036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:27:46.912  [2024-12-13 19:12:15.746072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:63608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.912  [2024-12-13 19:12:15.746091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:27:46.912  [2024-12-13 19:12:15.746112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:64144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.912  [2024-12-13 19:12:15.746127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:27:46.912  [2024-12-13 19:12:15.746148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:64152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.912  [2024-12-13 19:12:15.746162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:27:46.912  [2024-12-13 19:12:15.746184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:64216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.912  [2024-12-13 19:12:15.746199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:27:46.912  [2024-12-13 19:12:15.748668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:64160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.912  [2024-12-13 19:12:15.748729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:27:46.912  [2024-12-13 19:12:15.748812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:64192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.912  [2024-12-13 19:12:15.748845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:27:46.912  [2024-12-13 19:12:15.748898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:64224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.912  [2024-12-13 19:12:15.748914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:27:46.912  [2024-12-13 19:12:15.748936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:64256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.913  [2024-12-13 19:12:15.748952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:27:46.913  [2024-12-13 19:12:15.748973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.913  [2024-12-13 19:12:15.749004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:27:46.913  [2024-12-13 19:12:15.749024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:64320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.913  [2024-12-13 19:12:15.749040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:27:46.913  [2024-12-13 19:12:15.749061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:64352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.913  [2024-12-13 19:12:15.749076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:27:46.913  [2024-12-13 19:12:15.749096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:64384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.913  [2024-12-13 19:12:15.749111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:27:46.913  [2024-12-13 19:12:15.749133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:64552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.913  [2024-12-13 19:12:15.749148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:27:46.913  [2024-12-13 19:12:15.749169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:64568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.913  [2024-12-13 19:12:15.749199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:27:46.913  [2024-12-13 19:12:15.749219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:64584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.913  [2024-12-13 19:12:15.749233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:27:46.913  [2024-12-13 19:12:15.749254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:64600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.913  [2024-12-13 19:12:15.749269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:27:46.913  [2024-12-13 19:12:15.749289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:64616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.913  [2024-12-13 19:12:15.749320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:27:46.913  [2024-12-13 19:12:15.749363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:64632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.913  [2024-12-13 19:12:15.749382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:46.913  [2024-12-13 19:12:15.749403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:64648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.913  [2024-12-13 19:12:15.749418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:46.913  [2024-12-13 19:12:15.749439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:64664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.913  [2024-12-13 19:12:15.749454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:27:46.913  [2024-12-13 19:12:15.749506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:64680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.913  [2024-12-13 19:12:15.749537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:27:46.913  [2024-12-13 19:12:15.749559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:64696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.913  [2024-12-13 19:12:15.749574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:27:46.913  [2024-12-13 19:12:15.749595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:64712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.913  [2024-12-13 19:12:15.749623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:27:46.913  [2024-12-13 19:12:15.749670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:64728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.913  [2024-12-13 19:12:15.749685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:27:46.913  [2024-12-13 19:12:15.749758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:64744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.913  [2024-12-13 19:12:15.749777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:27:46.913  [2024-12-13 19:12:15.749799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:64760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.913  [2024-12-13 19:12:15.749815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:27:46.913  [2024-12-13 19:12:15.749837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:64776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.913  [2024-12-13 19:12:15.749852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:27:46.913  [2024-12-13 19:12:15.749875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:64792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.913  [2024-12-13 19:12:15.749890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:27:46.913  [2024-12-13 19:12:15.749912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:64416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.913  [2024-12-13 19:12:15.749929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:27:46.913  [2024-12-13 19:12:15.749951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:64448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.913   19:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:27:46.913  [2024-12-13 19:12:15.749976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:27:46.913  [2024-12-13 19:12:15.750000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:64480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.913  [2024-12-13 19:12:15.750017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:27:46.913  [2024-12-13 19:12:15.750068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:64088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.913  [2024-12-13 19:12:15.750083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:27:46.913  [2024-12-13 19:12:15.750103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:64488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.913  [2024-12-13 19:12:15.750118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:27:46.913  [2024-12-13 19:12:15.750154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:64520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.913  [2024-12-13 19:12:15.750185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:27:46.913  [2024-12-13 19:12:15.750219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:64280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.913  [2024-12-13 19:12:15.750234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:27:46.913  [2024-12-13 19:12:15.750270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:64344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.913  [2024-12-13 19:12:15.750296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:27:46.913  [2024-12-13 19:12:15.750315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:64408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.913  [2024-12-13 19:12:15.750342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:27:46.913  [2024-12-13 19:12:15.750362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:64472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.913  [2024-12-13 19:12:15.750377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:27:46.913  [2024-12-13 19:12:15.750397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:63992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.913  [2024-12-13 19:12:15.750411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:27:46.913  [2024-12-13 19:12:15.750431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:63080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.913  [2024-12-13 19:12:15.750445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:27:46.913  [2024-12-13 19:12:15.750478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:63968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.913  [2024-12-13 19:12:15.750493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:27:46.913  [2024-12-13 19:12:15.750513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:63928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.913  [2024-12-13 19:12:15.750537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:27:46.913  [2024-12-13 19:12:15.750560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:63608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.913  [2024-12-13 19:12:15.750575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:27:46.913  [2024-12-13 19:12:15.750595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:64152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.913  [2024-12-13 19:12:15.750610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:27:46.913  [2024-12-13 19:12:15.750631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:63840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.913  [2024-12-13 19:12:15.750660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:27:46.913  [2024-12-13 19:12:15.750679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:64120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.913  [2024-12-13 19:12:15.750694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:46.913  [2024-12-13 19:12:15.752276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:64200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.913  [2024-12-13 19:12:15.752322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:27:46.914  [2024-12-13 19:12:15.752353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:64800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.914  [2024-12-13 19:12:15.752371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:27:46.914  [2024-12-13 19:12:15.752392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:64816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.914  [2024-12-13 19:12:15.752407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:46.914  [2024-12-13 19:12:15.752443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:64832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.914  [2024-12-13 19:12:15.752458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:46.914  [2024-12-13 19:12:15.752490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:64512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.914  [2024-12-13 19:12:15.752504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:46.914  [2024-12-13 19:12:15.752524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:64264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.914  [2024-12-13 19:12:15.752539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:27:46.914  [2024-12-13 19:12:15.752558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:64328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.914  [2024-12-13 19:12:15.752573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:27:46.914  [2024-12-13 19:12:15.752604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:64840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.914  [2024-12-13 19:12:15.752618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:27:46.914  [2024-12-13 19:12:15.752682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:64856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.914  [2024-12-13 19:12:15.752698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:27:46.914  [2024-12-13 19:12:15.752717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:64872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.914  [2024-12-13 19:12:15.752731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:27:46.914  [2024-12-13 19:12:15.752766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:64888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.914  [2024-12-13 19:12:15.752780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:27:46.914  [2024-12-13 19:12:15.752801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:64904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.914  [2024-12-13 19:12:15.752815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:27:46.914  [2024-12-13 19:12:15.752835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:64424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.914  [2024-12-13 19:12:15.752849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:27:46.914  [2024-12-13 19:12:15.752869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:64056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.914  [2024-12-13 19:12:15.752883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:27:46.914  [2024-12-13 19:12:15.752915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:64920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:46.914  [2024-12-13 19:12:15.752946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:27:46.914       9088.12 IOPS,    35.50 MiB/s
[2024-12-13T19:12:18.738Z]      9110.73 IOPS,    35.59 MiB/s
[2024-12-13T19:12:18.738Z]      9137.71 IOPS,    35.69 MiB/s
[2024-12-13T19:12:18.738Z] Received shutdown signal, test time was about 34.324354 seconds
00:27:46.914  
00:27:46.914                                                                                                  Latency(us)
00:27:46.914  
[2024-12-13T19:12:18.738Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:27:46.914  Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:27:46.914  	 Verification LBA range: start 0x0 length 0x4000
00:27:46.914  	 Nvme0n1             :      34.32    9141.25      35.71       0.00     0.00   13973.28     322.09 4026531.84
00:27:46.914  
[2024-12-13T19:12:18.738Z]  ===================================================================================================================
00:27:46.914  
[2024-12-13T19:12:18.738Z]  Total                       :               9141.25      35.71       0.00     0.00   13973.28     322.09 4026531.84
00:27:47.173   19:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:27:47.173   19:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:27:47.173   19:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:27:47.173   19:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:27:47.173   19:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:27:47.173   19:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:27:47.173   19:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:27:47.173   19:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:27:47.173   19:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:27:47.173  rmmod nvme_tcp
00:27:47.432  rmmod nvme_fabrics
00:27:47.432  rmmod nvme_keyring
00:27:47.432   19:12:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:27:47.432   19:12:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:27:47.432   19:12:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
00:27:47.432   19:12:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 111737 ']'
00:27:47.432   19:12:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 111737
00:27:47.432   19:12:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 111737 ']'
00:27:47.432   19:12:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 111737
00:27:47.432    19:12:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:27:47.432   19:12:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:47.432    19:12:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 111737
00:27:47.432  killing process with pid 111737
00:27:47.432   19:12:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:27:47.432   19:12:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:27:47.432   19:12:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 111737'
00:27:47.432   19:12:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 111737
00:27:47.432   19:12:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 111737
00:27:47.691   19:12:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:27:47.691   19:12:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:27:47.691   19:12:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:27:47.691   19:12:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr
00:27:47.691   19:12:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save
00:27:47.691   19:12:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:27:47.691   19:12:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore
00:27:47.691   19:12:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:27:47.691   19:12:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:27:47.691   19:12:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:27:47.691   19:12:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:27:47.691   19:12:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:27:47.691   19:12:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:27:47.691   19:12:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:27:47.691   19:12:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:27:47.691   19:12:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:27:47.691   19:12:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:27:47.691   19:12:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:27:47.691   19:12:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:27:47.691   19:12:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:27:47.691   19:12:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:27:47.691   19:12:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:27:47.691   19:12:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns
00:27:47.691   19:12:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:47.691   19:12:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:27:47.691    19:12:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:47.950   19:12:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0
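The teardown that just completed removes only SPDK-tagged firewall rules, detaches and deletes the test interfaces, and drops the target namespace. A condensed sketch of that order, with device and namespace names taken from the trace and each command tolerated to fail because the objects may already be gone:

  iptables-save | grep -v SPDK_NVMF | iptables-restore            # strip only SPDK-tagged rules
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" nomaster 2>/dev/null || true              # detach from the bridge
      ip link set "$dev" down 2>/dev/null || true
  done
  ip link delete nvmf_br type bridge 2>/dev/null || true
  ip link delete nvmf_init_if 2>/dev/null || true
  ip link delete nvmf_init_if2 2>/dev/null || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 2>/dev/null || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 2>/dev/null || true
  ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true            # roughly what remove_spdk_ns amounts to here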
00:27:47.950  
00:27:47.950  real	0m40.077s
00:27:47.950  user	2m10.341s
00:27:47.950  sys	0m10.500s
00:27:47.950   19:12:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable
00:27:47.950   19:12:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:27:47.950  ************************************
00:27:47.950  END TEST nvmf_host_multipath_status
00:27:47.950  ************************************
00:27:47.950   19:12:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:27:47.950   19:12:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:27:47.950   19:12:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:27:47.950   19:12:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:27:47.950  ************************************
00:27:47.950  START TEST nvmf_discovery_remove_ifc
00:27:47.950  ************************************
00:27:47.950   19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:27:47.950  * Looking for test storage...
00:27:47.950  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
00:27:47.950    19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:27:47.950     19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version
00:27:47.950     19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:27:47.950    19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:27:47.950    19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:27:47.950    19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:27:47.950    19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:27:47.950    19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-:
00:27:47.950    19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1
00:27:47.950    19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-:
00:27:47.950    19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2
00:27:47.950    19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<'
00:27:47.950    19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2
00:27:47.950    19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1
00:27:47.950    19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:27:47.950    19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in
00:27:47.950    19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1
00:27:47.950    19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 ))
00:27:47.950    19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:27:47.950     19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1
00:27:47.950     19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1
00:27:47.950     19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:27:47.950     19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1
00:27:47.950    19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1
00:27:47.950     19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2
00:27:47.950     19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2
00:27:47.950     19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:27:47.950     19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2
00:27:47.950    19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2
00:27:47.950    19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:27:47.950    19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:27:47.950    19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0
00:27:47.950    19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:27:47.950    19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:27:47.950  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:47.950  		--rc genhtml_branch_coverage=1
00:27:47.950  		--rc genhtml_function_coverage=1
00:27:47.950  		--rc genhtml_legend=1
00:27:47.950  		--rc geninfo_all_blocks=1
00:27:47.950  		--rc geninfo_unexecuted_blocks=1
00:27:47.950  		
00:27:47.950  		'
00:27:47.950    19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:27:47.950  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:47.950  		--rc genhtml_branch_coverage=1
00:27:47.950  		--rc genhtml_function_coverage=1
00:27:47.950  		--rc genhtml_legend=1
00:27:47.950  		--rc geninfo_all_blocks=1
00:27:47.950  		--rc geninfo_unexecuted_blocks=1
00:27:47.950  		
00:27:47.950  		'
00:27:47.950    19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:27:47.950  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:47.950  		--rc genhtml_branch_coverage=1
00:27:47.950  		--rc genhtml_function_coverage=1
00:27:47.950  		--rc genhtml_legend=1
00:27:47.950  		--rc geninfo_all_blocks=1
00:27:47.950  		--rc geninfo_unexecuted_blocks=1
00:27:47.950  		
00:27:47.950  		'
00:27:47.950    19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:27:47.950  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:47.950  		--rc genhtml_branch_coverage=1
00:27:47.950  		--rc genhtml_function_coverage=1
00:27:47.950  		--rc genhtml_legend=1
00:27:47.950  		--rc geninfo_all_blocks=1
00:27:47.950  		--rc geninfo_unexecuted_blocks=1
00:27:47.950  		
00:27:47.950  		'
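The block above is scripts/common.sh deciding whether the installed lcov (1.15 on this runner) predates 2.x before exporting the legacy --rc lcov_* options. Stripped of xtrace noise, the comparison is a field-by-field numeric compare after splitting on '.', '-' and ':'. A sketch under the assumption of purely numeric fields; version_lt is a hypothetical stand-in for the script's lt/cmp_versions pair:

  version_lt() {
      local IFS=.-:                            # split exactly as the trace does
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$2"
      local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < len; v++ )); do
          local a=${ver1[v]:-0} b=${ver2[v]:-0}    # missing fields compare as 0
          (( a < b )) && return 0
          (( a > b )) && return 1
      done
      return 1                                 # equal is not "less than"
  }
  version_lt 1.15 2 && echo "lcov < 2: keep the legacy --rc lcov_* flags"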
00:27:47.950   19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:27:47.950     19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s
00:27:47.950    19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:27:47.950    19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:27:47.950    19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:27:47.950    19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:27:47.950    19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:27:47.950    19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:27:47.950    19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:27:47.950    19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:27:47.950    19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:27:48.210     19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:27:48.210    19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:27:48.211    19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:27:48.211    19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:27:48.211    19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:27:48.211    19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:27:48.211    19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:27:48.211    19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:27:48.211     19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob
00:27:48.211     19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:27:48.211     19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:27:48.211     19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:27:48.211      19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:48.211      19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:48.211      19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:48.211      19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH
00:27:48.211      19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:48.211    19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0
00:27:48.211    19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:27:48.211    19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:27:48.211    19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:27:48.211    19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:27:48.211    19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:27:48.211    19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:27:48.211  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:27:48.211    19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:27:48.211    19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:27:48.211    19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0
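What common.sh is doing here is assembling the nvmf_tgt command line as bash arrays: base options first (-i shared-memory id, -e 0xFFFF tracepoint mask), with the namespace wrapper prepended later once nvmf_tgt_ns_spdk exists. A simplified sketch of that pattern, reusing the variable names from the trace (the binary path and -m core mask are the ones this run uses further down):

  NVMF_APP=(/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt)
  NVMF_APP_SHM_ID=0
  NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)             # shm id + full tracepoint group mask
  NVMF_TARGET_NS_CMD=(ip netns exec nvmf_tgt_ns_spdk)
  NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")  # prefixed once the namespace exists
  "${NVMF_APP[@]}" -m 0x2 &                               # launched in the background...
  nvmfpid=$!                                              # ...and tracked by pid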
00:27:48.211   19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']'
00:27:48.211   19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009
00:27:48.211   19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery
00:27:48.211   19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode
00:27:48.211   19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test
00:27:48.211   19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock
00:27:48.211   19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit
00:27:48.211   19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:27:48.211   19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:27:48.211   19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs
00:27:48.211   19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no
00:27:48.211   19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns
00:27:48.211   19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:48.211   19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:27:48.211    19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:48.211   19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:27:48.211   19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:27:48.211   19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:27:48.211   19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:27:48.211   19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:27:48.211   19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@460 -- # nvmf_veth_init
00:27:48.211   19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:27:48.211   19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:27:48.211   19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:27:48.211   19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:27:48.211   19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:27:48.211   19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:27:48.211   19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:27:48.211   19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:27:48.211   19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:27:48.211   19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:27:48.211   19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:27:48.211   19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:27:48.211   19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:27:48.211   19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:27:48.211   19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:27:48.211   19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:27:48.211   19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:27:48.211  Cannot find device "nvmf_init_br"
00:27:48.211   19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true
00:27:48.211   19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:27:48.211  Cannot find device "nvmf_init_br2"
00:27:48.211   19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true
00:27:48.211   19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:27:48.211  Cannot find device "nvmf_tgt_br"
00:27:48.211   19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true
00:27:48.211   19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:27:48.211  Cannot find device "nvmf_tgt_br2"
00:27:48.211   19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true
00:27:48.211   19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:27:48.211  Cannot find device "nvmf_init_br"
00:27:48.211   19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true
00:27:48.211   19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:27:48.211  Cannot find device "nvmf_init_br2"
00:27:48.211   19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true
00:27:48.211   19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:27:48.211  Cannot find device "nvmf_tgt_br"
00:27:48.211   19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true
00:27:48.211   19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:27:48.211  Cannot find device "nvmf_tgt_br2"
00:27:48.211   19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true
00:27:48.211   19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:27:48.211  Cannot find device "nvmf_br"
00:27:48.211   19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true
00:27:48.211   19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:27:48.211  Cannot find device "nvmf_init_if"
00:27:48.211   19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true
00:27:48.211   19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:27:48.211  Cannot find device "nvmf_init_if2"
00:27:48.211   19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true
00:27:48.211   19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:27:48.211  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:27:48.212   19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true
00:27:48.212   19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:27:48.212  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:27:48.212   19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true
00:27:48.212   19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:27:48.212   19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:27:48.212   19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:27:48.212   19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:27:48.212   19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:27:48.212   19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:27:48.212   19:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:27:48.212   19:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:27:48.212   19:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:27:48.212   19:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:27:48.212   19:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:27:48.212   19:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:27:48.212   19:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:27:48.212   19:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:27:48.212   19:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:27:48.470   19:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:27:48.470   19:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:27:48.470   19:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:27:48.470   19:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:27:48.470   19:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:27:48.470   19:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:27:48.470   19:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:27:48.470   19:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:27:48.470   19:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:27:48.470   19:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:27:48.470   19:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:27:48.470   19:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:27:48.470   19:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:27:48.470   19:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:27:48.470   19:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:27:48.470   19:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:27:48.471   19:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:27:48.471   19:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:27:48.471  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:27:48.471  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.088 ms
00:27:48.471  
00:27:48.471  --- 10.0.0.3 ping statistics ---
00:27:48.471  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:48.471  rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms
00:27:48.471   19:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:27:48.471  PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:27:48.471  64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms
00:27:48.471  
00:27:48.471  --- 10.0.0.4 ping statistics ---
00:27:48.471  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:48.471  rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms
00:27:48.471   19:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:27:48.471  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:27:48.471  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms
00:27:48.471  
00:27:48.471  --- 10.0.0.1 ping statistics ---
00:27:48.471  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:48.471  rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms
00:27:48.471   19:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:27:48.471  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:27:48.471  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms
00:27:48.471  
00:27:48.471  --- 10.0.0.2 ping statistics ---
00:27:48.471  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:48.471  rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms
00:27:48.471   19:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:27:48.471   19:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@461 -- # return 0
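At this point nvmf_veth_init has built the whole test fabric: two initiator veth pairs on the host, two target pairs whose far ends sit in nvmf_tgt_ns_spdk, all host-side peers enslaved to one bridge, ACCEPT rules for TCP/4420, and cross-namespace pings to prove reachability. A compact sketch of that wiring showing one initiator/target pair (the second pair is created the same way; the iptables comment is trimmed to the SPDK_NVMF tag that the later cleanup greps for):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator pair, both ends on the host
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target pair...
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # ...its far end moves into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                       # bridge the host-side peer ends together
  ip link set nvmf_tgt_br  master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment SPDK_NVMF
  ping -c 1 10.0.0.3                                            # host initiator -> namespaced target
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1             # and back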
00:27:48.471   19:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:27:48.471   19:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:27:48.471   19:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:27:48.471   19:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:27:48.471   19:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:27:48.471   19:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:27:48.471   19:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:27:48.471   19:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2
00:27:48.471   19:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:27:48.471   19:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable
00:27:48.471   19:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:27:48.471   19:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=113181
00:27:48.471   19:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:27:48.471   19:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 113181
00:27:48.471   19:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 113181 ']'
00:27:48.471   19:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:48.471   19:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:48.471  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:48.471   19:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:48.471   19:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:48.471   19:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:27:48.471  [2024-12-13 19:12:20.241647] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:27:48.471  [2024-12-13 19:12:20.241739] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:48.730  [2024-12-13 19:12:20.392278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:48.730  [2024-12-13 19:12:20.431187] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:27:48.730  [2024-12-13 19:12:20.431262] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:48.730  [2024-12-13 19:12:20.431285] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:27:48.730  [2024-12-13 19:12:20.431296] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:27:48.730  [2024-12-13 19:12:20.431306] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:27:48.730  [2024-12-13 19:12:20.431790] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:27:48.988   19:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:48.988   19:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0
00:27:48.988   19:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:27:48.988   19:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable
00:27:48.988   19:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:27:48.988   19:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
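nvmfappstart above launches the target inside the namespace and then blocks in waitforlisten until the new process answers on its RPC socket, installing a cleanup trap once it does. A sketch of that start-and-wait pattern; the rpc.py call, retry count, and trap body are illustrative, not the exact autotest_common.sh implementation:

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  for (( i = 0; i < 100; i++ )); do
      # "up" means the app answers an RPC on its UNIX domain socket
      if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; then
          break
      fi
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited during startup"; exit 1; }
      sleep 0.5
  done
  trap 'kill "$nvmfpid"; wait "$nvmfpid"' SIGINT SIGTERM EXIT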
00:27:48.988   19:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd
00:27:48.988   19:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:48.988   19:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:27:48.988  [2024-12-13 19:12:20.615105] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:48.988  [2024-12-13 19:12:20.623282] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 ***
00:27:48.988  null0
00:27:48.988  [2024-12-13 19:12:20.655180] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:27:48.988   19:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:48.988   19:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=113222
00:27:48.988   19:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme
00:27:48.988   19:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 113222 /tmp/host.sock
00:27:48.988   19:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 113222 ']'
00:27:48.988   19:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock
00:27:48.988   19:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:48.988  Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...
00:27:48.988   19:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...'
00:27:48.988   19:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:48.988   19:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:27:48.988  [2024-12-13 19:12:20.731478] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:27:48.988  [2024-12-13 19:12:20.731559] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113222 ]
00:27:49.247  [2024-12-13 19:12:20.881532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:49.247  [2024-12-13 19:12:20.921167] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:27:49.247   19:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:49.247   19:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0
00:27:49.247   19:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:27:49.247   19:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1
00:27:49.247   19:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:49.247   19:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:27:49.247   19:12:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:49.247   19:12:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init
00:27:49.247   19:12:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:49.247   19:12:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:27:49.505   19:12:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:49.505   19:12:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach
00:27:49.505   19:12:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:49.505   19:12:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:27:50.462  [2024-12-13 19:12:22.128048] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached
00:27:50.462  [2024-12-13 19:12:22.128080] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected
00:27:50.462  [2024-12-13 19:12:22.128103] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command
00:27:50.462  [2024-12-13 19:12:22.214150] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0
00:27:50.462  [2024-12-13 19:12:22.268558] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420
00:27:50.730  [2024-12-13 19:12:22.269430] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xca4720:1 started.
00:27:50.730  [2024-12-13 19:12:22.271167] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0
00:27:50.730  [2024-12-13 19:12:22.271258] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0
00:27:50.730  [2024-12-13 19:12:22.271288] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0
00:27:50.730  [2024-12-13 19:12:22.271309] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done
00:27:50.730  [2024-12-13 19:12:22.271331] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again
00:27:50.730   19:12:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:50.730   19:12:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1
00:27:50.730    19:12:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:27:50.730    19:12:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:27:50.730  [2024-12-13 19:12:22.276825] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xca4720 was disconnected and freed. delete nvme_qpair.
00:27:50.730    19:12:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:50.730    19:12:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:27:50.730    19:12:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:27:50.730    19:12:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:27:50.730    19:12:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:27:50.730    19:12:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:50.730   19:12:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]]
00:27:50.730   19:12:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if
00:27:50.730   19:12:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
00:27:50.730   19:12:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev ''
00:27:50.730    19:12:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:27:50.730    19:12:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:27:50.730    19:12:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:50.730    19:12:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:27:50.730    19:12:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:27:50.730    19:12:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:27:50.730    19:12:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:27:50.730    19:12:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:50.730   19:12:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]]
00:27:50.730   19:12:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
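From here the log is one polling loop repeated once per second: get_bdev_list asks the host-side app for its bdev names over /tmp/host.sock, and wait_for_bdev sleeps until that sorted list matches the expected value, first nvme0n1 once discovery attaches the controller, then the empty string after the target interface is torn down. A sketch of those two helpers; the *_sketch names and the direct rpc.py invocation stand in for the test's rpc_cmd wrapper:

  get_bdev_list_sketch() {
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs \
          | jq -r '.[].name' | sort | xargs
  }
  wait_for_bdev_sketch() {
      local expected=$1
      while [[ $(get_bdev_list_sketch) != "$expected" ]]; do
          sleep 1                        # re-poll once a second, as in the iterations below
      done
  }
  wait_for_bdev_sketch nvme0n1           # bdev shows up once the discovery controller attaches
  wait_for_bdev_sketch ''                # and should disappear after nvmf_tgt_if is brought down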
00:27:51.666    19:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:27:51.666    19:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:27:51.666    19:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:27:51.666    19:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:51.666    19:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:27:51.666    19:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:27:51.666    19:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:27:51.666    19:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:51.666   19:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]]
00:27:51.666   19:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:27:53.042    19:12:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:27:53.042    19:12:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:27:53.042    19:12:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:27:53.042    19:12:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:53.042    19:12:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:27:53.042    19:12:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:27:53.042    19:12:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:27:53.042    19:12:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:53.042   19:12:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]]
00:27:53.042   19:12:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:27:53.977    19:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:27:53.977    19:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:27:53.977    19:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:27:53.977    19:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:53.977    19:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:27:53.977    19:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:27:53.977    19:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:27:53.977    19:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:53.977   19:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]]
00:27:53.977   19:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:27:54.912    19:12:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:27:54.912    19:12:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:27:54.913    19:12:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:54.913    19:12:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:27:54.913    19:12:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:27:54.913    19:12:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:27:54.913    19:12:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:27:54.913    19:12:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:54.913   19:12:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]]
00:27:54.913   19:12:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:27:55.848    19:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:27:55.848    19:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:27:55.848    19:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:55.848    19:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:27:55.848    19:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:27:55.848    19:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:27:55.848    19:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:27:55.848    19:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:56.107   19:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]]
00:27:56.107   19:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:27:56.107  [2024-12-13 19:12:27.698993] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out
00:27:56.107  [2024-12-13 19:12:27.699042] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:27:56.107  [2024-12-13 19:12:27.699057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:56.107  [2024-12-13 19:12:27.699068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:27:56.107  [2024-12-13 19:12:27.699077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:56.107  [2024-12-13 19:12:27.699085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:27:56.107  [2024-12-13 19:12:27.699093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:56.107  [2024-12-13 19:12:27.699102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:27:56.107  [2024-12-13 19:12:27.699110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:56.107  [2024-12-13 19:12:27.699119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 
00:27:56.107  [2024-12-13 19:12:27.699126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:56.107  [2024-12-13 19:12:27.699134] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc81250 is same with the state(6) to be set
00:27:56.107  [2024-12-13 19:12:27.708989] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc81250 (9): Bad file descriptor
00:27:56.107  [2024-12-13 19:12:27.719006] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:27:56.107  [2024-12-13 19:12:27.719045] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:27:56.107  [2024-12-13 19:12:27.719051] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:27:56.107  [2024-12-13 19:12:27.719056] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:27:56.107  [2024-12-13 19:12:27.719101] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:27:57.042    19:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:27:57.042    19:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:27:57.042    19:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:27:57.042    19:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:57.042    19:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:27:57.042    19:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:27:57.042    19:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:27:57.042  [2024-12-13 19:12:28.766328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110
00:27:57.042  [2024-12-13 19:12:28.766441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc81250 with addr=10.0.0.3, port=4420
00:27:57.042  [2024-12-13 19:12:28.766475] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc81250 is same with the state(6) to be set
00:27:57.042  [2024-12-13 19:12:28.766534] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc81250 (9): Bad file descriptor
00:27:57.042  [2024-12-13 19:12:28.767494] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress.
00:27:57.042  [2024-12-13 19:12:28.767603] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:27:57.042  [2024-12-13 19:12:28.767629] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:27:57.042  [2024-12-13 19:12:28.767660] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:27:57.042  [2024-12-13 19:12:28.767680] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:27:57.042  [2024-12-13 19:12:28.767694] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:27:57.042  [2024-12-13 19:12:28.767705] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:27:57.042  [2024-12-13 19:12:28.767725] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:27:57.042  [2024-12-13 19:12:28.767737] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:27:57.042    19:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:57.042   19:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]]
00:27:57.042   19:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:27:57.977  [2024-12-13 19:12:29.767811] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:27:57.977  [2024-12-13 19:12:29.767857] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:27:57.977  [2024-12-13 19:12:29.767877] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:27:57.977  [2024-12-13 19:12:29.767902] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:27:57.978  [2024-12-13 19:12:29.767911] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state
00:27:57.978  [2024-12-13 19:12:29.767919] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:27:57.978  [2024-12-13 19:12:29.767924] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:27:57.978  [2024-12-13 19:12:29.767928] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:27:57.978  [2024-12-13 19:12:29.767958] bdev_nvme.c:7267:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420
00:27:57.978  [2024-12-13 19:12:29.767992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:27:57.978  [2024-12-13 19:12:29.768007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:57.978  [2024-12-13 19:12:29.768019] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:27:57.978  [2024-12-13 19:12:29.768028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:57.978  [2024-12-13 19:12:29.768036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:27:57.978  [2024-12-13 19:12:29.768044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:57.978  [2024-12-13 19:12:29.768052] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:27:57.978  [2024-12-13 19:12:29.768060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:57.978  [2024-12-13 19:12:29.768069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 
00:27:57.978  [2024-12-13 19:12:29.768077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:57.978  [2024-12-13 19:12:29.768086] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state.
00:27:57.978  [2024-12-13 19:12:29.768445] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc70970 (9): Bad file descriptor
00:27:57.978  [2024-12-13 19:12:29.769459] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command
00:27:57.978  [2024-12-13 19:12:29.769498] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register
00:27:57.978    19:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:27:57.978    19:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:27:57.978    19:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:57.978    19:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:27:57.978    19:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:27:57.978    19:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:27:57.978    19:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:27:58.236    19:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:58.236   19:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]]
00:27:58.236   19:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:27:58.236   19:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:27:58.236   19:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1
00:27:58.236    19:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:27:58.236    19:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:27:58.236    19:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:58.236    19:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:27:58.236    19:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:27:58.237    19:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:27:58.237    19:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:27:58.237    19:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:58.237   19:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]]
00:27:58.237   19:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:27:59.171    19:12:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:27:59.171    19:12:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:27:59.171    19:12:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:27:59.171    19:12:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:59.171    19:12:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:27:59.171    19:12:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:27:59.171    19:12:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:27:59.171    19:12:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:59.171   19:12:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]]
00:27:59.171   19:12:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:28:00.106  [2024-12-13 19:12:31.779337] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached
00:28:00.106  [2024-12-13 19:12:31.779371] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected
00:28:00.106  [2024-12-13 19:12:31.779404] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command
00:28:00.106  [2024-12-13 19:12:31.865420] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1
00:28:00.106  [2024-12-13 19:12:31.919720] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4420
00:28:00.106  [2024-12-13 19:12:31.920416] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0xc8a9c0:1 started.
00:28:00.106  [2024-12-13 19:12:31.922024] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0
00:28:00.106  [2024-12-13 19:12:31.922124] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0
00:28:00.106  [2024-12-13 19:12:31.922147] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0
00:28:00.106  [2024-12-13 19:12:31.922163] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done
00:28:00.106  [2024-12-13 19:12:31.922172] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again
00:28:00.106  [2024-12-13 19:12:31.928015] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0xc8a9c0 was disconnected and freed. delete nvme_qpair.
00:28:00.364    19:12:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:28:00.364    19:12:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:28:00.364    19:12:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:00.364    19:12:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:28:00.364    19:12:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:28:00.364    19:12:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:28:00.364    19:12:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:28:00.364    19:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:00.364   19:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]]
00:28:00.364   19:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT
00:28:00.364   19:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 113222
00:28:00.364   19:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 113222 ']'
00:28:00.364   19:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 113222
00:28:00.364    19:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname
00:28:00.364   19:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:00.364    19:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 113222
00:28:00.364  killing process with pid 113222
00:28:00.364   19:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:28:00.364   19:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:28:00.364   19:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 113222'
00:28:00.364   19:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 113222
00:28:00.364   19:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 113222
00:28:00.622   19:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini
00:28:00.622   19:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup
00:28:00.622   19:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync
00:28:00.622   19:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:28:00.622   19:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e
00:28:00.622   19:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20}
00:28:00.622   19:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:28:00.622  rmmod nvme_tcp
00:28:00.622  rmmod nvme_fabrics
00:28:00.622  rmmod nvme_keyring
00:28:00.622   19:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:28:00.622   19:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e
00:28:00.622   19:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0
00:28:00.622   19:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 113181 ']'
00:28:00.622   19:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 113181
00:28:00.622   19:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 113181 ']'
00:28:00.622   19:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 113181
00:28:00.622    19:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname
00:28:00.622   19:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:00.622    19:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 113181
00:28:00.622  killing process with pid 113181
00:28:00.622   19:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:28:00.622   19:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:28:00.622   19:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 113181'
00:28:00.622   19:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 113181
00:28:00.622   19:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 113181
00:28:00.881   19:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:28:00.881   19:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:28:00.881   19:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:28:00.881   19:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr
00:28:00.881   19:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:28:00.881   19:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save
00:28:00.881   19:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore
00:28:00.881   19:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:28:00.881   19:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:28:00.881   19:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:28:00.881   19:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:28:00.881   19:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:28:00.881   19:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:28:00.881   19:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:28:00.881   19:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:28:00.881   19:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:28:00.881   19:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:28:00.881   19:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:28:01.140   19:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:28:01.140   19:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:28:01.140   19:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:28:01.140   19:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:28:01.140   19:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns
00:28:01.140   19:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:01.140   19:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:28:01.140    19:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:01.140   19:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0
00:28:01.140  
00:28:01.140  real	0m13.257s
00:28:01.140  user	0m23.362s
00:28:01.140  sys	0m1.655s
00:28:01.140   19:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:28:01.140   19:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:28:01.140  ************************************
00:28:01.140  END TEST nvmf_discovery_remove_ifc
00:28:01.140  ************************************
00:28:01.140   19:12:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp
00:28:01.140   19:12:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:28:01.140   19:12:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:28:01.140   19:12:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:28:01.140  ************************************
00:28:01.140  START TEST nvmf_identify_kernel_target
00:28:01.140  ************************************
00:28:01.140   19:12:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp
00:28:01.400  * Looking for test storage...
00:28:01.400  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
00:28:01.400    19:12:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:28:01.400     19:12:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version
00:28:01.400     19:12:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:28:01.400    19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:28:01.400    19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:28:01.400    19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l
00:28:01.400    19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l
00:28:01.400    19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-:
00:28:01.400    19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1
00:28:01.400    19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-:
00:28:01.400    19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2
00:28:01.400    19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<'
00:28:01.400    19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2
00:28:01.400    19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1
00:28:01.400    19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:28:01.400    19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in
00:28:01.400    19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1
00:28:01.400    19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 ))
00:28:01.400    19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:28:01.400     19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1
00:28:01.400     19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1
00:28:01.400     19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:28:01.400     19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1
00:28:01.400    19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1
00:28:01.400     19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2
00:28:01.400     19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2
00:28:01.400     19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:28:01.400     19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2
00:28:01.400    19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2
00:28:01.400    19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:28:01.400    19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:28:01.400    19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0
00:28:01.400    19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:28:01.400    19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:28:01.400  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:01.400  		--rc genhtml_branch_coverage=1
00:28:01.400  		--rc genhtml_function_coverage=1
00:28:01.400  		--rc genhtml_legend=1
00:28:01.400  		--rc geninfo_all_blocks=1
00:28:01.400  		--rc geninfo_unexecuted_blocks=1
00:28:01.400  		
00:28:01.400  		'
00:28:01.400    19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:28:01.400  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:01.400  		--rc genhtml_branch_coverage=1
00:28:01.400  		--rc genhtml_function_coverage=1
00:28:01.400  		--rc genhtml_legend=1
00:28:01.400  		--rc geninfo_all_blocks=1
00:28:01.400  		--rc geninfo_unexecuted_blocks=1
00:28:01.400  		
00:28:01.400  		'
00:28:01.400    19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:28:01.400  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:01.400  		--rc genhtml_branch_coverage=1
00:28:01.400  		--rc genhtml_function_coverage=1
00:28:01.400  		--rc genhtml_legend=1
00:28:01.400  		--rc geninfo_all_blocks=1
00:28:01.400  		--rc geninfo_unexecuted_blocks=1
00:28:01.400  		
00:28:01.400  		'
00:28:01.400    19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:28:01.400  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:01.400  		--rc genhtml_branch_coverage=1
00:28:01.400  		--rc genhtml_function_coverage=1
00:28:01.400  		--rc genhtml_legend=1
00:28:01.400  		--rc geninfo_all_blocks=1
00:28:01.400  		--rc geninfo_unexecuted_blocks=1
00:28:01.400  		
00:28:01.400  		'
00:28:01.400   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:28:01.400     19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s
00:28:01.400    19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:28:01.400    19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:28:01.400    19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:28:01.400    19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:28:01.400    19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:28:01.400    19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:28:01.400    19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:28:01.400    19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:28:01.400    19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:28:01.400     19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:28:01.400    19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:28:01.400    19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:28:01.400    19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:28:01.400    19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:28:01.400    19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:28:01.401    19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:28:01.401    19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:28:01.401     19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob
00:28:01.401     19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:28:01.401     19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:28:01.401     19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:28:01.401      19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:01.401      19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:01.401      19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:01.401      19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH
00:28:01.401      19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:01.401    19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0
00:28:01.401    19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:28:01.401    19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:28:01.401    19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:28:01.401    19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:28:01.401    19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:28:01.401    19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:28:01.401  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:28:01.401    19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:28:01.401    19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:28:01.401    19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0
00:28:01.401   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit
00:28:01.401   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:28:01.401   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:28:01.401   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs
00:28:01.401   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no
00:28:01.401   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns
00:28:01.401   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:01.401   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:28:01.401    19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:01.401   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:28:01.401   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:28:01.401   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:28:01.401   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:28:01.401   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:28:01.401   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # nvmf_veth_init
00:28:01.401   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:28:01.401   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:28:01.401   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:28:01.401   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:28:01.401   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:28:01.401   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:28:01.401   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:28:01.401   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:28:01.401   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:28:01.401   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:28:01.401   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:28:01.401   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:28:01.401   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:28:01.401   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:28:01.401   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:28:01.401   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:28:01.401   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:28:01.401  Cannot find device "nvmf_init_br"
00:28:01.401   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true
00:28:01.401   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:28:01.401  Cannot find device "nvmf_init_br2"
00:28:01.401   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true
00:28:01.401   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:28:01.401  Cannot find device "nvmf_tgt_br"
00:28:01.401   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true
00:28:01.401   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:28:01.401  Cannot find device "nvmf_tgt_br2"
00:28:01.401   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true
00:28:01.401   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:28:01.401  Cannot find device "nvmf_init_br"
00:28:01.401   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true
00:28:01.401   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:28:01.401  Cannot find device "nvmf_init_br2"
00:28:01.401   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true
00:28:01.401   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:28:01.401  Cannot find device "nvmf_tgt_br"
00:28:01.401   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true
00:28:01.401   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:28:01.401  Cannot find device "nvmf_tgt_br2"
00:28:01.401   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true
00:28:01.401   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:28:01.401  Cannot find device "nvmf_br"
00:28:01.401   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true
00:28:01.401   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:28:01.660  Cannot find device "nvmf_init_if"
00:28:01.660   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true
00:28:01.660   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:28:01.660  Cannot find device "nvmf_init_if2"
00:28:01.660   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true
00:28:01.660   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:28:01.660  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:28:01.660   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true
00:28:01.660   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:28:01.660  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:28:01.660   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true
00:28:01.660   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:28:01.660   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:28:01.660   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:28:01.660   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:28:01.660   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:28:01.660   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:28:01.660   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:28:01.660   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:28:01.660   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:28:01.660   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:28:01.660   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:28:01.660   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:28:01.660   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:28:01.660   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:28:01.660   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:28:01.660   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:28:01.660   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:28:01.660   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:28:01.660   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:28:01.660   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:28:01.660   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:28:01.660   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:28:01.660   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:28:01.660   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:28:01.919   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:28:01.919   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:28:01.919   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:28:01.919   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:28:01.919   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:28:01.919   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:28:01.919   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:28:01.919   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:28:01.919   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:28:01.919  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:28:01.919  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms
00:28:01.919  
00:28:01.919  --- 10.0.0.3 ping statistics ---
00:28:01.919  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:01.919  rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms
00:28:01.919   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:28:01.919  PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:28:01.919  64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.061 ms
00:28:01.919  
00:28:01.919  --- 10.0.0.4 ping statistics ---
00:28:01.919  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:01.919  rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms
00:28:01.920   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:28:01.920  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:28:01.920  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms
00:28:01.920  
00:28:01.920  --- 10.0.0.1 ping statistics ---
00:28:01.920  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:01.920  rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms
00:28:01.920   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:28:01.920  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:28:01.920  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms
00:28:01.920  
00:28:01.920  --- 10.0.0.2 ping statistics ---
00:28:01.920  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:01.920  rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms
00:28:01.920   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:28:01.920   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # return 0
00:28:01.920   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:28:01.920   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:28:01.920   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:28:01.920   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:28:01.920   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:28:01.920   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:28:01.920   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:28:01.920   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT
00:28:01.920    19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip
00:28:01.920    19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip
00:28:01.920    19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:01.920    19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:01.920    19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:01.920    19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:01.920    19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:01.920    19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:01.920    19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:01.920    19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:01.920    19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:01.920   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1
00:28:01.920   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1
00:28:01.920   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1
00:28:01.920   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet
00:28:01.920   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:28:01.920   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:28:01.920   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1
00:28:01.920   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme
00:28:01.920   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]]
00:28:01.920   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet
00:28:01.920   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]]
00:28:01.920   19:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:28:02.178  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:28:02.178  Waiting for block devices as requested
00:28:02.437  0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:28:02.437  0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:28:02.437   19:12:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme*
00:28:02.437   19:12:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]]
00:28:02.437   19:12:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1
00:28:02.437   19:12:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:28:02.437   19:12:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:28:02.437   19:12:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:28:02.437   19:12:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1
00:28:02.437   19:12:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt
00:28:02.437   19:12:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1
00:28:02.437  No valid GPT data, bailing
00:28:02.437    19:12:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:28:02.696   19:12:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt=
00:28:02.696   19:12:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1
00:28:02.696   19:12:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1
00:28:02.696   19:12:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme*
00:28:02.696   19:12:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]]
00:28:02.696   19:12:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2
00:28:02.696   19:12:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n2
00:28:02.696   19:12:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]]
00:28:02.696   19:12:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:28:02.696   19:12:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n2
00:28:02.696   19:12:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt
00:28:02.696   19:12:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2
00:28:02.696  No valid GPT data, bailing
00:28:02.696    19:12:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2
00:28:02.696   19:12:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt=
00:28:02.696   19:12:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1
00:28:02.696   19:12:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2
00:28:02.696   19:12:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme*
00:28:02.696   19:12:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]]
00:28:02.696   19:12:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3
00:28:02.696   19:12:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n3
00:28:02.696   19:12:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]]
00:28:02.696   19:12:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:28:02.696   19:12:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n3
00:28:02.696   19:12:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt
00:28:02.696   19:12:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3
00:28:02.696  No valid GPT data, bailing
00:28:02.696    19:12:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3
00:28:02.697   19:12:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt=
00:28:02.697   19:12:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1
00:28:02.697   19:12:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3
00:28:02.697   19:12:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme*
00:28:02.697   19:12:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]]
00:28:02.697   19:12:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1
00:28:02.697   19:12:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n1
00:28:02.697   19:12:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]]
00:28:02.697   19:12:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:28:02.697   19:12:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1
00:28:02.697   19:12:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt
00:28:02.697   19:12:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1
00:28:02.697  No valid GPT data, bailing
00:28:02.697    19:12:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1
00:28:02.697   19:12:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt=
00:28:02.697   19:12:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1
00:28:02.697   19:12:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1
00:28:02.697   19:12:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]]
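Editor's note: the loop traced above walks /sys/block/nvme*, skips zoned namespaces and devices that already carry a partition table, and keeps the last free namespace (here /dev/nvme1n1) as the kernel target's backing device, finally checking that it is a block device. A condensed, hedged sketch of that selection logic follows; is_block_zoned and block_in_use are the helpers named in the trace, and their bodies below are paraphrased from the traced checks rather than copied from nvmf/common.sh.

# Hedged sketch of the backing-device selection seen in the trace above.
pick_free_nvme() {
    local nvme= block
    for block in /sys/block/nvme*; do
        [[ -e $block ]] || continue
        # Skip zoned namespaces (queue/zoned exists and reports something other than "none").
        if [[ -e $block/queue/zoned ]] && [[ $(< "$block/queue/zoned") != none ]]; then
            continue
        fi
        # Skip devices that already have a partition table (blkid reports a PTTYPE).
        if blkid -s PTTYPE -o value "/dev/${block##*/}" | grep -q .; then
            continue
        fi
        nvme=/dev/${block##*/}
    done
    [[ -b $nvme ]] && printf '%s\n' "$nvme"
}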
00:28:02.697   19:12:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:28:02.697   19:12:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:28:02.697   19:12:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:28:02.697   19:12:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn
00:28:02.697   19:12:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1
00:28:02.697   19:12:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1
00:28:02.697   19:12:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1
00:28:02.697   19:12:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1
00:28:02.697   19:12:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp
00:28:02.697   19:12:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420
00:28:02.697   19:12:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4
00:28:02.697   19:12:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/
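Editor's note: the echo commands above write into nvmet configfs attribute files, but bash xtrace does not show their redirection targets. Below is an illustrative reconstruction using the standard nvmet configfs attribute names (attr_model, attr_allow_any_host, device_path, enable, addr_*); the specific target file of each echo is an assumption inferred from the value echoed and from the identify output later in the log, not a verbatim copy of nvmf/common.sh.

# Hedged reconstruction of the configfs writes traced above.
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=/sys/kernel/config/nvmet/ports/1
mkdir "$subsys"
mkdir "$subsys/namespaces/1"
mkdir "$port"
echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"
echo 1            > "$subsys/attr_allow_any_host"       # assumed target of the first "echo 1"
echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"  # backing device selected above
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"                 # listen address for the TCP port
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"                     # publish the subsystem on the port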
00:28:02.956   19:12:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid=8f716199-a3ae-4f70-9a3a-0556e5b7497a -a 10.0.0.1 -t tcp -s 4420
00:28:02.956  
00:28:02.956  Discovery Log Number of Records 2, Generation counter 2
00:28:02.956  =====Discovery Log Entry 0======
00:28:02.956  trtype:  tcp
00:28:02.956  adrfam:  ipv4
00:28:02.956  subtype: current discovery subsystem
00:28:02.956  treq:    not specified, sq flow control disable supported
00:28:02.956  portid:  1
00:28:02.956  trsvcid: 4420
00:28:02.956  subnqn:  nqn.2014-08.org.nvmexpress.discovery
00:28:02.956  traddr:  10.0.0.1
00:28:02.956  eflags:  none
00:28:02.956  sectype: none
00:28:02.956  =====Discovery Log Entry 1======
00:28:02.956  trtype:  tcp
00:28:02.956  adrfam:  ipv4
00:28:02.956  subtype: nvme subsystem
00:28:02.956  treq:    not specified, sq flow control disable supported
00:28:02.956  portid:  1
00:28:02.956  trsvcid: 4420
00:28:02.956  subnqn:  nqn.2016-06.io.spdk:testnqn
00:28:02.956  traddr:  10.0.0.1
00:28:02.956  eflags:  none
00:28:02.956  sectype: none
00:28:02.956   19:12:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r '	trtype:tcp 	adrfam:IPv4 	traddr:10.0.0.1
00:28:02.956  	trsvcid:4420 	subnqn:nqn.2014-08.org.nvmexpress.discovery'
00:28:02.956  =====================================================
00:28:02.956  NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery
00:28:02.956  =====================================================
00:28:02.956  Controller Capabilities/Features
00:28:02.956  ================================
00:28:02.956  Vendor ID:                             0000
00:28:02.956  Subsystem Vendor ID:                   0000
00:28:02.956  Serial Number:                         90b18e16e455b301869f
00:28:02.956  Model Number:                          Linux
00:28:02.956  Firmware Version:                      6.8.9-20
00:28:02.956  Recommended Arb Burst:                 0
00:28:02.956  IEEE OUI Identifier:                   00 00 00
00:28:02.956  Multi-path I/O
00:28:02.956    May have multiple subsystem ports:   No
00:28:02.956    May have multiple controllers:       No
00:28:02.956    Associated with SR-IOV VF:           No
00:28:02.956  Max Data Transfer Size:                Unlimited
00:28:02.956  Max Number of Namespaces:              0
00:28:02.956  Max Number of I/O Queues:              1024
00:28:02.956  NVMe Specification Version (VS):       1.3
00:28:02.956  NVMe Specification Version (Identify): 1.3
00:28:02.956  Maximum Queue Entries:                 1024
00:28:02.956  Contiguous Queues Required:            No
00:28:02.956  Arbitration Mechanisms Supported
00:28:02.956    Weighted Round Robin:                Not Supported
00:28:02.956    Vendor Specific:                     Not Supported
00:28:02.956  Reset Timeout:                         7500 ms
00:28:02.956  Doorbell Stride:                       4 bytes
00:28:02.956  NVM Subsystem Reset:                   Not Supported
00:28:02.956  Command Sets Supported
00:28:02.956    NVM Command Set:                     Supported
00:28:02.956  Boot Partition:                        Not Supported
00:28:02.956  Memory Page Size Minimum:              4096 bytes
00:28:02.956  Memory Page Size Maximum:              4096 bytes
00:28:02.956  Persistent Memory Region:              Not Supported
00:28:02.956  Optional Asynchronous Events Supported
00:28:02.956    Namespace Attribute Notices:         Not Supported
00:28:02.956    Firmware Activation Notices:         Not Supported
00:28:02.956    ANA Change Notices:                  Not Supported
00:28:02.956    PLE Aggregate Log Change Notices:    Not Supported
00:28:02.956    LBA Status Info Alert Notices:       Not Supported
00:28:02.956    EGE Aggregate Log Change Notices:    Not Supported
00:28:02.956    Normal NVM Subsystem Shutdown event: Not Supported
00:28:02.956    Zone Descriptor Change Notices:      Not Supported
00:28:02.956    Discovery Log Change Notices:        Supported
00:28:02.956  Controller Attributes
00:28:02.956    128-bit Host Identifier:             Not Supported
00:28:02.956    Non-Operational Permissive Mode:     Not Supported
00:28:02.956    NVM Sets:                            Not Supported
00:28:02.956    Read Recovery Levels:                Not Supported
00:28:02.956    Endurance Groups:                    Not Supported
00:28:02.956    Predictable Latency Mode:            Not Supported
00:28:02.956    Traffic Based Keep Alive:            Not Supported
00:28:02.956    Namespace Granularity:               Not Supported
00:28:02.956    SQ Associations:                     Not Supported
00:28:02.956    UUID List:                           Not Supported
00:28:02.956    Multi-Domain Subsystem:              Not Supported
00:28:02.956    Fixed Capacity Management:           Not Supported
00:28:02.956    Variable Capacity Management:        Not Supported
00:28:02.956    Delete Endurance Group:              Not Supported
00:28:02.956    Delete NVM Set:                      Not Supported
00:28:02.956    Extended LBA Formats Supported:      Not Supported
00:28:02.956    Flexible Data Placement Supported:   Not Supported
00:28:02.956  
00:28:02.956  Controller Memory Buffer Support
00:28:02.956  ================================
00:28:02.956  Supported:                             No
00:28:02.956  
00:28:02.956  Persistent Memory Region Support
00:28:02.956  ================================
00:28:02.956  Supported:                             No
00:28:02.956  
00:28:02.956  Admin Command Set Attributes
00:28:02.956  ============================
00:28:02.956  Security Send/Receive:                 Not Supported
00:28:02.956  Format NVM:                            Not Supported
00:28:02.956  Firmware Activate/Download:            Not Supported
00:28:02.956  Namespace Management:                  Not Supported
00:28:02.956  Device Self-Test:                      Not Supported
00:28:02.956  Directives:                            Not Supported
00:28:02.956  NVMe-MI:                               Not Supported
00:28:02.956  Virtualization Management:             Not Supported
00:28:02.956  Doorbell Buffer Config:                Not Supported
00:28:02.956  Get LBA Status Capability:             Not Supported
00:28:02.956  Command & Feature Lockdown Capability: Not Supported
00:28:02.956  Abort Command Limit:                   1
00:28:02.956  Async Event Request Limit:             1
00:28:02.956  Number of Firmware Slots:              N/A
00:28:02.956  Firmware Slot 1 Read-Only:             N/A
00:28:02.956  Firmware Activation Without Reset:     N/A
00:28:02.956  Multiple Update Detection Support:     N/A
00:28:02.956  Firmware Update Granularity:           No Information Provided
00:28:02.956  Per-Namespace SMART Log:               No
00:28:02.956  Asymmetric Namespace Access Log Page:  Not Supported
00:28:02.956  Subsystem NQN:                         nqn.2014-08.org.nvmexpress.discovery
00:28:02.956  Command Effects Log Page:              Not Supported
00:28:02.956  Get Log Page Extended Data:            Supported
00:28:02.956  Telemetry Log Pages:                   Not Supported
00:28:02.956  Persistent Event Log Pages:            Not Supported
00:28:02.956  Supported Log Pages Log Page:          May Support
00:28:02.956  Commands Supported & Effects Log Page: Not Supported
00:28:02.956  Feature Identifiers & Effects Log Page:May Support
00:28:02.956  NVMe-MI Commands & Effects Log Page:   May Support
00:28:02.956  Data Area 4 for Telemetry Log:         Not Supported
00:28:02.956  Error Log Page Entries Supported:      1
00:28:02.956  Keep Alive:                            Not Supported
00:28:02.956  
00:28:02.956  NVM Command Set Attributes
00:28:02.956  ==========================
00:28:02.956  Submission Queue Entry Size
00:28:02.956    Max:                       1
00:28:02.956    Min:                       1
00:28:02.956  Completion Queue Entry Size
00:28:02.956    Max:                       1
00:28:02.956    Min:                       1
00:28:02.956  Number of Namespaces:        0
00:28:02.956  Compare Command:             Not Supported
00:28:02.956  Write Uncorrectable Command: Not Supported
00:28:02.956  Dataset Management Command:  Not Supported
00:28:02.956  Write Zeroes Command:        Not Supported
00:28:02.956  Set Features Save Field:     Not Supported
00:28:02.956  Reservations:                Not Supported
00:28:02.956  Timestamp:                   Not Supported
00:28:02.956  Copy:                        Not Supported
00:28:02.956  Volatile Write Cache:        Not Present
00:28:02.956  Atomic Write Unit (Normal):  1
00:28:02.956  Atomic Write Unit (PFail):   1
00:28:02.956  Atomic Compare & Write Unit: 1
00:28:02.956  Fused Compare & Write:       Not Supported
00:28:02.956  Scatter-Gather List
00:28:02.957    SGL Command Set:           Supported
00:28:02.957    SGL Keyed:                 Not Supported
00:28:02.957    SGL Bit Bucket Descriptor: Not Supported
00:28:02.957    SGL Metadata Pointer:      Not Supported
00:28:02.957    Oversized SGL:             Not Supported
00:28:02.957    SGL Metadata Address:      Not Supported
00:28:02.957    SGL Offset:                Supported
00:28:02.957    Transport SGL Data Block:  Not Supported
00:28:02.957  Replay Protected Memory Block:  Not Supported
00:28:02.957  
00:28:02.957  Firmware Slot Information
00:28:02.957  =========================
00:28:02.957  Active slot:                 0
00:28:02.957  
00:28:02.957  
00:28:02.957  Error Log
00:28:02.957  =========
00:28:02.957  
00:28:02.957  Active Namespaces
00:28:02.957  =================
00:28:02.957  Discovery Log Page
00:28:02.957  ==================
00:28:02.957  Generation Counter:                    2
00:28:02.957  Number of Records:                     2
00:28:02.957  Record Format:                         0
00:28:02.957  
00:28:02.957  Discovery Log Entry 0
00:28:02.957  ----------------------
00:28:02.957  Transport Type:                        3 (TCP)
00:28:02.957  Address Family:                        1 (IPv4)
00:28:02.957  Subsystem Type:                        3 (Current Discovery Subsystem)
00:28:02.957  Entry Flags:
00:28:02.957    Duplicate Returned Information:			0
00:28:02.957    Explicit Persistent Connection Support for Discovery: 0
00:28:02.957  Transport Requirements:
00:28:02.957    Secure Channel:                      Not Specified
00:28:02.957  Port ID:                               1 (0x0001)
00:28:02.957  Controller ID:                         65535 (0xffff)
00:28:02.957  Admin Max SQ Size:                     32
00:28:02.957  Transport Service Identifier:          4420
00:28:02.957  NVM Subsystem Qualified Name:          nqn.2014-08.org.nvmexpress.discovery
00:28:02.957  Transport Address:                     10.0.0.1
00:28:02.957  Discovery Log Entry 1
00:28:02.957  ----------------------
00:28:02.957  Transport Type:                        3 (TCP)
00:28:02.957  Address Family:                        1 (IPv4)
00:28:02.957  Subsystem Type:                        2 (NVM Subsystem)
00:28:02.957  Entry Flags:
00:28:02.957    Duplicate Returned Information:			0
00:28:02.957    Explicit Persistent Connection Support for Discovery: 0
00:28:02.957  Transport Requirements:
00:28:02.957    Secure Channel:                      Not Specified
00:28:02.957  Port ID:                               1 (0x0001)
00:28:02.957  Controller ID:                         65535 (0xffff)
00:28:02.957  Admin Max SQ Size:                     32
00:28:02.957  Transport Service Identifier:          4420
00:28:02.957  NVM Subsystem Qualified Name:          nqn.2016-06.io.spdk:testnqn
00:28:02.957  Transport Address:                     10.0.0.1
00:28:02.957   19:12:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r '	trtype:tcp 	adrfam:IPv4 	traddr:10.0.0.1 	trsvcid:4420 	subnqn:nqn.2016-06.io.spdk:testnqn'
00:28:03.216  get_feature(0x01) failed
00:28:03.216  get_feature(0x02) failed
00:28:03.216  get_feature(0x04) failed
00:28:03.216  =====================================================
00:28:03.216  NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn
00:28:03.216  =====================================================
00:28:03.216  Controller Capabilities/Features
00:28:03.216  ================================
00:28:03.216  Vendor ID:                             0000
00:28:03.216  Subsystem Vendor ID:                   0000
00:28:03.216  Serial Number:                         a26636de2ed3a44c0cc3
00:28:03.216  Model Number:                          SPDK-nqn.2016-06.io.spdk:testnqn
00:28:03.216  Firmware Version:                      6.8.9-20
00:28:03.216  Recommended Arb Burst:                 6
00:28:03.216  IEEE OUI Identifier:                   00 00 00
00:28:03.216  Multi-path I/O
00:28:03.216    May have multiple subsystem ports:   Yes
00:28:03.216    May have multiple controllers:       Yes
00:28:03.216    Associated with SR-IOV VF:           No
00:28:03.216  Max Data Transfer Size:                Unlimited
00:28:03.216  Max Number of Namespaces:              1024
00:28:03.216  Max Number of I/O Queues:              128
00:28:03.216  NVMe Specification Version (VS):       1.3
00:28:03.216  NVMe Specification Version (Identify): 1.3
00:28:03.216  Maximum Queue Entries:                 1024
00:28:03.216  Contiguous Queues Required:            No
00:28:03.216  Arbitration Mechanisms Supported
00:28:03.216    Weighted Round Robin:                Not Supported
00:28:03.216    Vendor Specific:                     Not Supported
00:28:03.216  Reset Timeout:                         7500 ms
00:28:03.216  Doorbell Stride:                       4 bytes
00:28:03.216  NVM Subsystem Reset:                   Not Supported
00:28:03.216  Command Sets Supported
00:28:03.216    NVM Command Set:                     Supported
00:28:03.216  Boot Partition:                        Not Supported
00:28:03.216  Memory Page Size Minimum:              4096 bytes
00:28:03.216  Memory Page Size Maximum:              4096 bytes
00:28:03.216  Persistent Memory Region:              Not Supported
00:28:03.216  Optional Asynchronous Events Supported
00:28:03.216    Namespace Attribute Notices:         Supported
00:28:03.216    Firmware Activation Notices:         Not Supported
00:28:03.216    ANA Change Notices:                  Supported
00:28:03.216    PLE Aggregate Log Change Notices:    Not Supported
00:28:03.216    LBA Status Info Alert Notices:       Not Supported
00:28:03.216    EGE Aggregate Log Change Notices:    Not Supported
00:28:03.216    Normal NVM Subsystem Shutdown event: Not Supported
00:28:03.216    Zone Descriptor Change Notices:      Not Supported
00:28:03.216    Discovery Log Change Notices:        Not Supported
00:28:03.216  Controller Attributes
00:28:03.216    128-bit Host Identifier:             Supported
00:28:03.216    Non-Operational Permissive Mode:     Not Supported
00:28:03.216    NVM Sets:                            Not Supported
00:28:03.216    Read Recovery Levels:                Not Supported
00:28:03.216    Endurance Groups:                    Not Supported
00:28:03.216    Predictable Latency Mode:            Not Supported
00:28:03.216    Traffic Based Keep Alive:            Supported
00:28:03.216    Namespace Granularity:               Not Supported
00:28:03.216    SQ Associations:                     Not Supported
00:28:03.216    UUID List:                           Not Supported
00:28:03.216    Multi-Domain Subsystem:              Not Supported
00:28:03.216    Fixed Capacity Management:           Not Supported
00:28:03.216    Variable Capacity Management:        Not Supported
00:28:03.216    Delete Endurance Group:              Not Supported
00:28:03.216    Delete NVM Set:                      Not Supported
00:28:03.216    Extended LBA Formats Supported:      Not Supported
00:28:03.216    Flexible Data Placement Supported:   Not Supported
00:28:03.216  
00:28:03.216  Controller Memory Buffer Support
00:28:03.216  ================================
00:28:03.216  Supported:                             No
00:28:03.216  
00:28:03.216  Persistent Memory Region Support
00:28:03.216  ================================
00:28:03.216  Supported:                             No
00:28:03.216  
00:28:03.216  Admin Command Set Attributes
00:28:03.216  ============================
00:28:03.216  Security Send/Receive:                 Not Supported
00:28:03.216  Format NVM:                            Not Supported
00:28:03.216  Firmware Activate/Download:            Not Supported
00:28:03.216  Namespace Management:                  Not Supported
00:28:03.216  Device Self-Test:                      Not Supported
00:28:03.216  Directives:                            Not Supported
00:28:03.216  NVMe-MI:                               Not Supported
00:28:03.216  Virtualization Management:             Not Supported
00:28:03.216  Doorbell Buffer Config:                Not Supported
00:28:03.216  Get LBA Status Capability:             Not Supported
00:28:03.216  Command & Feature Lockdown Capability: Not Supported
00:28:03.216  Abort Command Limit:                   4
00:28:03.216  Async Event Request Limit:             4
00:28:03.216  Number of Firmware Slots:              N/A
00:28:03.216  Firmware Slot 1 Read-Only:             N/A
00:28:03.216  Firmware Activation Without Reset:     N/A
00:28:03.216  Multiple Update Detection Support:     N/A
00:28:03.216  Firmware Update Granularity:           No Information Provided
00:28:03.216  Per-Namespace SMART Log:               Yes
00:28:03.216  Asymmetric Namespace Access Log Page:  Supported
00:28:03.216  ANA Transition Time                 :  10 sec
00:28:03.216  
00:28:03.216  Asymmetric Namespace Access Capabilities
00:28:03.216    ANA Optimized State               : Supported
00:28:03.216    ANA Non-Optimized State           : Supported
00:28:03.216    ANA Inaccessible State            : Supported
00:28:03.216    ANA Persistent Loss State         : Supported
00:28:03.216    ANA Change State                  : Supported
00:28:03.216    ANAGRPID is not changed           : No
00:28:03.216    Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported
00:28:03.216  
00:28:03.216  ANA Group Identifier Maximum        : 128
00:28:03.216  Number of ANA Group Identifiers     : 128
00:28:03.216  Max Number of Allowed Namespaces    : 1024
00:28:03.216  Subsystem NQN:                         nqn.2016-06.io.spdk:testnqn
00:28:03.216  Command Effects Log Page:              Supported
00:28:03.216  Get Log Page Extended Data:            Supported
00:28:03.216  Telemetry Log Pages:                   Not Supported
00:28:03.216  Persistent Event Log Pages:            Not Supported
00:28:03.216  Supported Log Pages Log Page:          May Support
00:28:03.216  Commands Supported & Effects Log Page: Not Supported
00:28:03.216  Feature Identifiers & Effects Log Page:May Support
00:28:03.216  NVMe-MI Commands & Effects Log Page:   May Support
00:28:03.216  Data Area 4 for Telemetry Log:         Not Supported
00:28:03.216  Error Log Page Entries Supported:      128
00:28:03.216  Keep Alive:                            Supported
00:28:03.216  Keep Alive Granularity:                1000 ms
00:28:03.216  
00:28:03.216  NVM Command Set Attributes
00:28:03.216  ==========================
00:28:03.216  Submission Queue Entry Size
00:28:03.216    Max:                       64
00:28:03.216    Min:                       64
00:28:03.216  Completion Queue Entry Size
00:28:03.216    Max:                       16
00:28:03.216    Min:                       16
00:28:03.216  Number of Namespaces:        1024
00:28:03.216  Compare Command:             Not Supported
00:28:03.216  Write Uncorrectable Command: Not Supported
00:28:03.216  Dataset Management Command:  Supported
00:28:03.216  Write Zeroes Command:        Supported
00:28:03.216  Set Features Save Field:     Not Supported
00:28:03.216  Reservations:                Not Supported
00:28:03.216  Timestamp:                   Not Supported
00:28:03.216  Copy:                        Not Supported
00:28:03.216  Volatile Write Cache:        Present
00:28:03.216  Atomic Write Unit (Normal):  1
00:28:03.216  Atomic Write Unit (PFail):   1
00:28:03.216  Atomic Compare & Write Unit: 1
00:28:03.216  Fused Compare & Write:       Not Supported
00:28:03.216  Scatter-Gather List
00:28:03.216    SGL Command Set:           Supported
00:28:03.216    SGL Keyed:                 Not Supported
00:28:03.216    SGL Bit Bucket Descriptor: Not Supported
00:28:03.216    SGL Metadata Pointer:      Not Supported
00:28:03.216    Oversized SGL:             Not Supported
00:28:03.216    SGL Metadata Address:      Not Supported
00:28:03.216    SGL Offset:                Supported
00:28:03.216    Transport SGL Data Block:  Not Supported
00:28:03.216  Replay Protected Memory Block:  Not Supported
00:28:03.216  
00:28:03.216  Firmware Slot Information
00:28:03.216  =========================
00:28:03.216  Active slot:                 0
00:28:03.216  
00:28:03.216  Asymmetric Namespace Access
00:28:03.216  ===========================
00:28:03.216  Change Count                    : 0
00:28:03.216  Number of ANA Group Descriptors : 1
00:28:03.216  ANA Group Descriptor            : 0
00:28:03.216    ANA Group ID                  : 1
00:28:03.216    Number of NSID Values         : 1
00:28:03.216    Change Count                  : 0
00:28:03.216    ANA State                     : 1
00:28:03.216    Namespace Identifier          : 1
00:28:03.216  
00:28:03.216  Commands Supported and Effects
00:28:03.216  ==============================
00:28:03.216  Admin Commands
00:28:03.216  --------------
00:28:03.216                    Get Log Page (02h): Supported 
00:28:03.216                        Identify (06h): Supported 
00:28:03.217                           Abort (08h): Supported 
00:28:03.217                    Set Features (09h): Supported 
00:28:03.217                    Get Features (0Ah): Supported 
00:28:03.217      Asynchronous Event Request (0Ch): Supported 
00:28:03.217                      Keep Alive (18h): Supported 
00:28:03.217  I/O Commands
00:28:03.217  ------------
00:28:03.217                           Flush (00h): Supported 
00:28:03.217                           Write (01h): Supported LBA-Change 
00:28:03.217                            Read (02h): Supported 
00:28:03.217                    Write Zeroes (08h): Supported LBA-Change 
00:28:03.217              Dataset Management (09h): Supported 
00:28:03.217  
00:28:03.217  Error Log
00:28:03.217  =========
00:28:03.217  Entry: 0
00:28:03.217  Error Count:            0x3
00:28:03.217  Submission Queue Id:    0x0
00:28:03.217  Command Id:             0x5
00:28:03.217  Phase Bit:              0
00:28:03.217  Status Code:            0x2
00:28:03.217  Status Code Type:       0x0
00:28:03.217  Do Not Retry:           1
00:28:03.217  Error Location:         0x28
00:28:03.217  LBA:                    0x0
00:28:03.217  Namespace:              0x0
00:28:03.217  Vendor Log Page:        0x0
00:28:03.217  -----------
00:28:03.217  Entry: 1
00:28:03.217  Error Count:            0x2
00:28:03.217  Submission Queue Id:    0x0
00:28:03.217  Command Id:             0x5
00:28:03.217  Phase Bit:              0
00:28:03.217  Status Code:            0x2
00:28:03.217  Status Code Type:       0x0
00:28:03.217  Do Not Retry:           1
00:28:03.217  Error Location:         0x28
00:28:03.217  LBA:                    0x0
00:28:03.217  Namespace:              0x0
00:28:03.217  Vendor Log Page:        0x0
00:28:03.217  -----------
00:28:03.217  Entry: 2
00:28:03.217  Error Count:            0x1
00:28:03.217  Submission Queue Id:    0x0
00:28:03.217  Command Id:             0x4
00:28:03.217  Phase Bit:              0
00:28:03.217  Status Code:            0x2
00:28:03.217  Status Code Type:       0x0
00:28:03.217  Do Not Retry:           1
00:28:03.217  Error Location:         0x28
00:28:03.217  LBA:                    0x0
00:28:03.217  Namespace:              0x0
00:28:03.217  Vendor Log Page:        0x0
00:28:03.217  
00:28:03.217  Number of Queues
00:28:03.217  ================
00:28:03.217  Number of I/O Submission Queues:      128
00:28:03.217  Number of I/O Completion Queues:      128
00:28:03.217  
00:28:03.217  ZNS Specific Controller Data
00:28:03.217  ============================
00:28:03.217  Zone Append Size Limit:      0
00:28:03.217  
00:28:03.217  
00:28:03.217  Active Namespaces
00:28:03.217  =================
00:28:03.217  get_feature(0x05) failed
00:28:03.217  Namespace ID:1
00:28:03.217  Command Set Identifier:                NVM (00h)
00:28:03.217  Deallocate:                            Supported
00:28:03.217  Deallocated/Unwritten Error:           Not Supported
00:28:03.217  Deallocated Read Value:                Unknown
00:28:03.217  Deallocate in Write Zeroes:            Not Supported
00:28:03.217  Deallocated Guard Field:               0xFFFF
00:28:03.217  Flush:                                 Supported
00:28:03.217  Reservation:                           Not Supported
00:28:03.217  Namespace Sharing Capabilities:        Multiple Controllers
00:28:03.217  Size (in LBAs):                        1310720 (5GiB)
00:28:03.217  Capacity (in LBAs):                    1310720 (5GiB)
00:28:03.217  Utilization (in LBAs):                 1310720 (5GiB)
00:28:03.217  UUID:                                  ee95cd85-821a-4a9c-a4ff-d7e87f86f6c5
00:28:03.217  Thin Provisioning:                     Not Supported
00:28:03.217  Per-NS Atomic Units:                   Yes
00:28:03.217    Atomic Boundary Size (Normal):       0
00:28:03.217    Atomic Boundary Size (PFail):        0
00:28:03.217    Atomic Boundary Offset:              0
00:28:03.217  NGUID/EUI64 Never Reused:              No
00:28:03.217  ANA group ID:                          1
00:28:03.217  Namespace Write Protected:             No
00:28:03.217  Number of LBA Formats:                 1
00:28:03.217  Current LBA Format:                    LBA Format #00
00:28:03.217  LBA Format #00: Data Size:  4096  Metadata Size:     0
00:28:03.217  
00:28:03.217   19:12:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini
00:28:03.217   19:12:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup
00:28:03.217   19:12:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync
00:28:03.217   19:12:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:28:03.217   19:12:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e
00:28:03.217   19:12:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20}
00:28:03.217   19:12:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:28:03.217  rmmod nvme_tcp
00:28:03.217  rmmod nvme_fabrics
00:28:03.217   19:12:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:28:03.217   19:12:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e
00:28:03.217   19:12:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0
00:28:03.217   19:12:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']'
00:28:03.217   19:12:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:28:03.217   19:12:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:28:03.217   19:12:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:28:03.217   19:12:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr
00:28:03.217   19:12:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save
00:28:03.217   19:12:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:28:03.217   19:12:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore
00:28:03.477   19:12:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:28:03.477   19:12:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:28:03.478   19:12:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:28:03.478   19:12:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:28:03.478   19:12:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:28:03.478   19:12:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:28:03.478   19:12:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:28:03.478   19:12:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:28:03.478   19:12:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:28:03.478   19:12:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:28:03.478   19:12:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:28:03.478   19:12:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:28:03.478   19:12:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:28:03.478   19:12:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:28:03.478   19:12:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:28:03.478   19:12:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns
00:28:03.478   19:12:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:03.478   19:12:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:28:03.478    19:12:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:03.478   19:12:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- # return 0
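Editor's note: the nvmf_veth_fini sequence above dismantles the virtual test network: four veth pairs whose host-visible ends hang off the nvmf_br bridge, with the target-side interfaces living in the nvmf_tgt_ns_spdk namespace. For orientation, here is a hedged reconstruction of that topology as it is created (the same shape nvmf_veth_init begins rebuilding near the end of this excerpt); interface names and the 10.0.0.x addresses come from the trace, while the bridge enslavement and the addresses beyond 10.0.0.1 are inferred from the teardown commands and the NVMF_*_IP variables, not copied from nvmf/common.sh.

# Hedged sketch of the topology the fini commands above tear down.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk          # target ends live in the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                  # initiator addresses
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target addresses
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge
ip link set nvmf_init_br  master nvmf_br                  # bridge the host-visible peer ends
ip link set nvmf_init_br2 master nvmf_br
ip link set nvmf_tgt_br   master nvmf_br
ip link set nvmf_tgt_br2  master nvmf_br
# (link-up and any remaining addressing/route steps omitted from this sketch)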
00:28:03.478   19:12:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target
00:28:03.478   19:12:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]]
00:28:03.478   19:12:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0
00:28:03.478   19:12:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
00:28:03.478   19:12:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:28:03.737   19:12:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1
00:28:03.737   19:12:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:28:03.737   19:12:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*)
00:28:03.738   19:12:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet
00:28:03.738   19:12:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:28:04.305  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:28:04.305  0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:28:04.563  0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
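Editor's note: mirroring the configfs setup earlier in this test, clean_kernel_target above tears the tree down in reverse (disable the namespace, unlink the subsystem from the port, remove the directories, unload the nvmet modules) and then reruns setup.sh, which rebinds the NVMe controllers back to uio_pci_generic. A short hedged recap follows; the target of the traced "echo 0" is assumed to be the namespace enable flag, since the redirect is not visible in the xtrace.

# Hedged recap of the teardown traced above; reverse order of the setup.
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
echo 0 > "$subsys/namespaces/1/enable"   # assumed target of the traced "echo 0"
rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
rmdir "$subsys/namespaces/1"
rmdir /sys/kernel/config/nvmet/ports/1
rmdir "$subsys"
modprobe -r nvmet_tcp nvmet              # unload the kernel target modules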
00:28:04.563  
00:28:04.563  real	0m3.325s
00:28:04.563  user	0m1.171s
00:28:04.563  sys	0m1.477s
00:28:04.563   19:12:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable
00:28:04.563   19:12:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x
00:28:04.563  ************************************
00:28:04.563  END TEST nvmf_identify_kernel_target
00:28:04.563  ************************************
00:28:04.563   19:12:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp
00:28:04.563   19:12:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:28:04.563   19:12:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:28:04.563   19:12:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:28:04.563  ************************************
00:28:04.563  START TEST nvmf_auth_host
00:28:04.563  ************************************
00:28:04.563   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp
00:28:04.563  * Looking for test storage...
00:28:04.563  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
00:28:04.563    19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:28:04.564     19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:28:04.564     19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version
00:28:04.822    19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:28:04.822    19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:28:04.822    19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l
00:28:04.822    19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l
00:28:04.822    19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-:
00:28:04.822    19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1
00:28:04.822    19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-:
00:28:04.822    19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2
00:28:04.822    19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<'
00:28:04.822    19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2
00:28:04.822    19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1
00:28:04.822    19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:28:04.822    19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in
00:28:04.822    19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1
00:28:04.822    19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 ))
00:28:04.822    19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:28:04.822     19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1
00:28:04.822     19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1
00:28:04.822     19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:28:04.822     19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1
00:28:04.822    19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1
00:28:04.822     19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2
00:28:04.822     19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2
00:28:04.822     19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:28:04.822     19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2
00:28:04.823    19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2
00:28:04.823    19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:28:04.823    19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:28:04.823    19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0
00:28:04.823    19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:28:04.823    19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:28:04.823  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:04.823  		--rc genhtml_branch_coverage=1
00:28:04.823  		--rc genhtml_function_coverage=1
00:28:04.823  		--rc genhtml_legend=1
00:28:04.823  		--rc geninfo_all_blocks=1
00:28:04.823  		--rc geninfo_unexecuted_blocks=1
00:28:04.823  		
00:28:04.823  		'
00:28:04.823    19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:28:04.823  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:04.823  		--rc genhtml_branch_coverage=1
00:28:04.823  		--rc genhtml_function_coverage=1
00:28:04.823  		--rc genhtml_legend=1
00:28:04.823  		--rc geninfo_all_blocks=1
00:28:04.823  		--rc geninfo_unexecuted_blocks=1
00:28:04.823  		
00:28:04.823  		'
00:28:04.823    19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:28:04.823  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:04.823  		--rc genhtml_branch_coverage=1
00:28:04.823  		--rc genhtml_function_coverage=1
00:28:04.823  		--rc genhtml_legend=1
00:28:04.823  		--rc geninfo_all_blocks=1
00:28:04.823  		--rc geninfo_unexecuted_blocks=1
00:28:04.823  		
00:28:04.823  		'
00:28:04.823    19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:28:04.823  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:04.823  		--rc genhtml_branch_coverage=1
00:28:04.823  		--rc genhtml_function_coverage=1
00:28:04.823  		--rc genhtml_legend=1
00:28:04.823  		--rc geninfo_all_blocks=1
00:28:04.823  		--rc geninfo_unexecuted_blocks=1
00:28:04.823  		
00:28:04.823  		'
00:28:04.823   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:28:04.823     19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s
00:28:04.823    19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:28:04.823    19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:28:04.823    19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:28:04.823    19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:28:04.823    19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:28:04.823    19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:28:04.823    19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:28:04.823    19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:28:04.823    19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:28:04.823     19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:28:04.823    19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:28:04.823    19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:28:04.823    19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:28:04.823    19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:28:04.823    19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:28:04.823    19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:28:04.823    19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:28:04.823     19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob
00:28:04.823     19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:28:04.823     19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:28:04.823     19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:28:04.823      19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:04.823      19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:04.823      19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:04.823      19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH
00:28:04.823      19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:04.823    19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0
00:28:04.823    19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:28:04.823    19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:28:04.823    19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:28:04.823    19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:28:04.823    19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:28:04.823    19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:28:04.823  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:28:04.823    19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:28:04.823    19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:28:04.823    19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0
00:28:04.823   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512")
00:28:04.823   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192")
00:28:04.823   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0
00:28:04.823   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0
00:28:04.823   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:28:04.823   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:28:04.823   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=()
00:28:04.823   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=()
00:28:04.823   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit
00:28:04.823   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:28:04.823   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:28:04.823   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs
00:28:04.823   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no
00:28:04.823   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns
00:28:04.823   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:04.823   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:28:04.823    19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:04.823   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:28:04.823   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:28:04.823   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:28:04.823   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:28:04.823   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:28:04.823   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # nvmf_veth_init
00:28:04.823   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:28:04.823   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:28:04.823   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:28:04.823   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:28:04.823   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:28:04.823   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:28:04.823   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:28:04.823   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:28:04.823   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:28:04.823   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:28:04.823   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:28:04.823   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:28:04.823   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:28:04.823   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:28:04.823   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:28:04.824   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:28:04.824   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:28:04.824  Cannot find device "nvmf_init_br"
00:28:04.824   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true
00:28:04.824   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:28:04.824  Cannot find device "nvmf_init_br2"
00:28:04.824   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true
00:28:04.824   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:28:04.824  Cannot find device "nvmf_tgt_br"
00:28:04.824   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true
00:28:04.824   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:28:04.824  Cannot find device "nvmf_tgt_br2"
00:28:04.824   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true
00:28:04.824   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:28:04.824  Cannot find device "nvmf_init_br"
00:28:04.824   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true
00:28:04.824   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:28:04.824  Cannot find device "nvmf_init_br2"
00:28:04.824   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true
00:28:04.824   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:28:04.824  Cannot find device "nvmf_tgt_br"
00:28:04.824   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true
00:28:04.824   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:28:04.824  Cannot find device "nvmf_tgt_br2"
00:28:04.824   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true
00:28:04.824   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:28:04.824  Cannot find device "nvmf_br"
00:28:04.824   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true
00:28:04.824   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:28:04.824  Cannot find device "nvmf_init_if"
00:28:04.824   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true
00:28:04.824   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:28:04.824  Cannot find device "nvmf_init_if2"
00:28:04.824   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true
00:28:04.824   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:28:04.824  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:28:04.824   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true
00:28:04.824   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:28:05.083  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:28:05.083   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true
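The "Cannot find device" / "Cannot open network namespace" errors above are expected: nvmf_veth_init starts by tearing down anything a previous run may have left behind, and every delete is allowed to fail (the "-- # true" lines). A simplified sketch of that idempotent pre-flight cleanup, assuming root and iproute2; the names match the trace, but the body is a reconstruction, not the exact nvmf/common.sh code:

for ifc in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$ifc" nomaster 2>/dev/null || true    # detach from any stale bridge
    ip link set "$ifc" down     2>/dev/null || true
done
ip link delete nvmf_br type bridge 2>/dev/null || true
ip link delete nvmf_init_if        2>/dev/null || true
ip link delete nvmf_init_if2       2>/dev/null || true
ip netns delete nvmf_tgt_ns_spdk   2>/dev/null || true    # destroys the veths moved into it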
00:28:05.083   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:28:05.083   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:28:05.083   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:28:05.083   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:28:05.083   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:28:05.083   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:28:05.083   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:28:05.083   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:28:05.083   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:28:05.083   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:28:05.083   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:28:05.083   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:28:05.083   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:28:05.083   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:28:05.083   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:28:05.083   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:28:05.083   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:28:05.083   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:28:05.083   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:28:05.083   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:28:05.083   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:28:05.083   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:28:05.083   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:28:05.083   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:28:05.083   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:28:05.083   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:28:05.083   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:28:05.083   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:28:05.083   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:28:05.083   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:28:05.083   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:28:05.083   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:28:05.083   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:28:05.083  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:28:05.083  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms
00:28:05.083  
00:28:05.083  --- 10.0.0.3 ping statistics ---
00:28:05.083  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:05.083  rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms
00:28:05.083   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:28:05.083  PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:28:05.083  64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.061 ms
00:28:05.083  
00:28:05.083  --- 10.0.0.4 ping statistics ---
00:28:05.083  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:05.083  rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms
00:28:05.083   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:28:05.083  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:28:05.083  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms
00:28:05.083  
00:28:05.083  --- 10.0.0.1 ping statistics ---
00:28:05.083  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:05.083  rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms
00:28:05.083   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:28:05.083  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:28:05.083  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms
00:28:05.083  
00:28:05.083  --- 10.0.0.2 ping statistics ---
00:28:05.083  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:05.083  rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms
00:28:05.083   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:28:05.083   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # return 0
00:28:05.083   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:28:05.084   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:28:05.084   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:28:05.084   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:28:05.084   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:28:05.084   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:28:05.084   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp
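At this point nvmf_veth_init has built the test topology: two initiator veths in the root namespace (10.0.0.1 and 10.0.0.2), two target veths moved into nvmf_tgt_ns_spdk (10.0.0.3 and 10.0.0.4), all four bridge-side peers enslaved to nvmf_br, iptables rules admitting TCP/4420, and the four pings confirming reachability in both directions before nvme-tcp is loaded. A condensed sketch with a single initiator/target pair, assuming root, iproute2 and iptables, purely to illustrate the layout:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link set nvmf_init_br master nvmf_br                   # bridge the two halves together
ip link set nvmf_tgt_br  master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3                                        # root namespace -> target namespace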
00:28:05.343   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth
00:28:05.343   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:28:05.343   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable
00:28:05.343   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:05.343   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=114210
00:28:05.343   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 114210
00:28:05.343   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 114210 ']'
00:28:05.343   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:05.343   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth
00:28:05.343   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:05.343   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:05.343   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:05.343   19:12:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:05.601   19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:05.601   19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0
00:28:05.601   19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:28:05.601   19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable
00:28:05.601   19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:05.601   19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:28:05.601   19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT
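nvmfappstart launches the SPDK target inside the target namespace with the nvme_auth debug flag, and waitforlisten blocks until the RPC socket answers before any rpc_cmd is issued. A sketch of that start-up, assuming the repo paths shown in the trace and the default /var/tmp/spdk.sock socket; the readiness poll is a simplified stand-in for waitforlisten:

spdk=/home/vagrant/spdk_repo/spdk
ip netns exec nvmf_tgt_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -L nvme_auth &
nvmfpid=$!
# poll the RPC socket until the app is up (waitforlisten does this more carefully)
until "$spdk/scripts/rpc.py" rpc_get_methods &> /dev/null; do sleep 0.2; done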
00:28:05.601    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32
00:28:05.601    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key
00:28:05.601    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:28:05.601    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests
00:28:05.601    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null
00:28:05.602    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32
00:28:05.602     19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom
00:28:05.602    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ef4ec01b23a48aa349a386867f4010d5
00:28:05.602     19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX
00:28:05.602    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.CrY
00:28:05.602    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ef4ec01b23a48aa349a386867f4010d5 0
00:28:05.602    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ef4ec01b23a48aa349a386867f4010d5 0
00:28:05.602    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest
00:28:05.602    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:28:05.602    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ef4ec01b23a48aa349a386867f4010d5
00:28:05.602    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0
00:28:05.602    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python -
00:28:05.861    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.CrY
00:28:05.861    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.CrY
00:28:05.861   19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.CrY
00:28:05.861    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64
00:28:05.861    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key
00:28:05.861    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:28:05.861    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests
00:28:05.861    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512
00:28:05.861    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64
00:28:05.861     19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom
00:28:05.861    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=8b499303d8e5619ffc62f56231eed7d20ef8975ec527d0fbd3a691189bbb5d68
00:28:05.861     19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX
00:28:05.861    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.DyK
00:28:05.861    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 8b499303d8e5619ffc62f56231eed7d20ef8975ec527d0fbd3a691189bbb5d68 3
00:28:05.861    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 8b499303d8e5619ffc62f56231eed7d20ef8975ec527d0fbd3a691189bbb5d68 3
00:28:05.861    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest
00:28:05.861    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:28:05.861    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=8b499303d8e5619ffc62f56231eed7d20ef8975ec527d0fbd3a691189bbb5d68
00:28:05.861    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3
00:28:05.861    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python -
00:28:05.861    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.DyK
00:28:05.861    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.DyK
00:28:05.861   19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.DyK
00:28:05.861    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48
00:28:05.861    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key
00:28:05.861    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:28:05.861    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests
00:28:05.861    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null
00:28:05.861    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48
00:28:05.861     19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom
00:28:05.861    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ae5fbe0f07f70e751a7ccf683fd4dd0fd9df6d9bdc42a197
00:28:05.861     19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX
00:28:05.861    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.7JM
00:28:05.861    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ae5fbe0f07f70e751a7ccf683fd4dd0fd9df6d9bdc42a197 0
00:28:05.861    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ae5fbe0f07f70e751a7ccf683fd4dd0fd9df6d9bdc42a197 0
00:28:05.861    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest
00:28:05.861    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:28:05.861    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ae5fbe0f07f70e751a7ccf683fd4dd0fd9df6d9bdc42a197
00:28:05.861    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0
00:28:05.861    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python -
00:28:05.861    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.7JM
00:28:05.861    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.7JM
00:28:05.861   19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.7JM
00:28:05.861    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48
00:28:05.861    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key
00:28:05.861    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:28:05.861    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests
00:28:05.861    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384
00:28:05.861    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48
00:28:05.861     19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom
00:28:05.861    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=65b00a0cafe099951ff3ec13c8f861877a027fa87a815984
00:28:05.861     19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX
00:28:05.861    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Awh
00:28:05.861    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 65b00a0cafe099951ff3ec13c8f861877a027fa87a815984 2
00:28:05.861    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 65b00a0cafe099951ff3ec13c8f861877a027fa87a815984 2
00:28:05.861    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest
00:28:05.861    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:28:05.861    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=65b00a0cafe099951ff3ec13c8f861877a027fa87a815984
00:28:05.861    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2
00:28:05.861    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python -
00:28:05.861    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Awh
00:28:05.861    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Awh
00:28:05.861   19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.Awh
00:28:05.861    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32
00:28:05.861    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key
00:28:05.861    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:28:05.861    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests
00:28:05.861    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256
00:28:05.861    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32
00:28:05.861     19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom
00:28:05.861    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=1d6acb2fb6f84ac76b1309dbf6a4bbd3
00:28:05.861     19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX
00:28:05.861    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.FDj
00:28:05.861    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 1d6acb2fb6f84ac76b1309dbf6a4bbd3 1
00:28:05.861    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 1d6acb2fb6f84ac76b1309dbf6a4bbd3 1
00:28:05.861    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest
00:28:05.861    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:28:05.861    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=1d6acb2fb6f84ac76b1309dbf6a4bbd3
00:28:05.861    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1
00:28:05.861    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python -
00:28:06.120    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.FDj
00:28:06.120    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.FDj
00:28:06.120   19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.FDj
00:28:06.120    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32
00:28:06.120    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key
00:28:06.120    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:28:06.120    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests
00:28:06.120    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256
00:28:06.120    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32
00:28:06.120     19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom
00:28:06.120    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=564cec31eb52842f79f1325606dff1f8
00:28:06.120     19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX
00:28:06.120    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.yDD
00:28:06.120    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 564cec31eb52842f79f1325606dff1f8 1
00:28:06.120    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 564cec31eb52842f79f1325606dff1f8 1
00:28:06.120    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest
00:28:06.120    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:28:06.120    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=564cec31eb52842f79f1325606dff1f8
00:28:06.120    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1
00:28:06.120    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python -
00:28:06.120    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.yDD
00:28:06.120    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.yDD
00:28:06.120   19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.yDD
00:28:06.120    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48
00:28:06.120    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key
00:28:06.120    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:28:06.120    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests
00:28:06.120    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384
00:28:06.120    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48
00:28:06.120     19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom
00:28:06.120    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6a07a05bba410bbaf6dae4ab2b2d9ea2c08ac0f9662651c5
00:28:06.120     19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX
00:28:06.120    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.H7A
00:28:06.120    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6a07a05bba410bbaf6dae4ab2b2d9ea2c08ac0f9662651c5 2
00:28:06.120    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 6a07a05bba410bbaf6dae4ab2b2d9ea2c08ac0f9662651c5 2
00:28:06.120    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest
00:28:06.120    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:28:06.120    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=6a07a05bba410bbaf6dae4ab2b2d9ea2c08ac0f9662651c5
00:28:06.120    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2
00:28:06.120    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python -
00:28:06.120    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.H7A
00:28:06.120    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.H7A
00:28:06.120   19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.H7A
00:28:06.120    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32
00:28:06.120    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key
00:28:06.120    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:28:06.120    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests
00:28:06.120    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null
00:28:06.120    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32
00:28:06.120     19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom
00:28:06.120    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7ea4cd4355e8f921febc44234c4d5e95
00:28:06.120     19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX
00:28:06.120    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.d2W
00:28:06.120    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7ea4cd4355e8f921febc44234c4d5e95 0
00:28:06.120    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 7ea4cd4355e8f921febc44234c4d5e95 0
00:28:06.120    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest
00:28:06.120    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:28:06.120    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=7ea4cd4355e8f921febc44234c4d5e95
00:28:06.120    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0
00:28:06.120    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python -
00:28:06.120    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.d2W
00:28:06.120    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.d2W
00:28:06.120   19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.d2W
00:28:06.120    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64
00:28:06.120    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key
00:28:06.120    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:28:06.120    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests
00:28:06.120    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512
00:28:06.120    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64
00:28:06.120     19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom
00:28:06.120    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b88cf2e43f6dd5fda9a0d5adc04a84599fef632d8f0c61759a40974a901fb37a
00:28:06.120     19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX
00:28:06.120    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.C02
00:28:06.120    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b88cf2e43f6dd5fda9a0d5adc04a84599fef632d8f0c61759a40974a901fb37a 3
00:28:06.121    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b88cf2e43f6dd5fda9a0d5adc04a84599fef632d8f0c61759a40974a901fb37a 3
00:28:06.121    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest
00:28:06.121    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:28:06.121    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b88cf2e43f6dd5fda9a0d5adc04a84599fef632d8f0c61759a40974a901fb37a
00:28:06.121    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3
00:28:06.121    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python -
00:28:06.379    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.C02
00:28:06.379    19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.C02
00:28:06.379   19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.C02
00:28:06.379  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:06.379   19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]=
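Each gen_dhchap_key call above draws len/2 random bytes with xxd, keeps them as an ASCII hex string, and wraps that string in the DHHC-1 secret representation used for NVMe in-band authentication; the ckeys become the controller-side secrets for bidirectional DH-HMAC-CHAP, and ckeys[4] is deliberately left empty. A sketch of the construction, with the CRC32 trailer and base64 framing inferred from the printed keys rather than copied from nvmf/common.sh:

secret=$(xxd -p -c0 -l 16 /dev/urandom)    # 16 random bytes -> 32 hex characters
digest_id=1                                # 00=null, 01=sha256, 02=sha384, 03=sha512
python3 - "$secret" "$digest_id" <<'PY'
import base64, sys, zlib
secret = sys.argv[1].encode()                      # the hex string itself is the secret
crc = zlib.crc32(secret).to_bytes(4, "little")     # 4-byte CRC32 trailer
print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(secret + crc).decode()))
PY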
00:28:06.379   19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 114210
00:28:06.379   19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 114210 ']'
00:28:06.379   19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:06.379   19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:06.379   19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:06.379   19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:06.379   19:12:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:06.638   19:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:06.638   19:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0
00:28:06.638   19:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}"
00:28:06.638   19:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.CrY
00:28:06.638   19:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:06.638   19:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:06.638   19:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:06.638   19:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.DyK ]]
00:28:06.638   19:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.DyK
00:28:06.638   19:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:06.638   19:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:06.638   19:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:06.638   19:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}"
00:28:06.638   19:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.7JM
00:28:06.638   19:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:06.638   19:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:06.638   19:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:06.638   19:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.Awh ]]
00:28:06.638   19:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Awh
00:28:06.638   19:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:06.638   19:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:06.639   19:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:06.639   19:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}"
00:28:06.639   19:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.FDj
00:28:06.639   19:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:06.639   19:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:06.639   19:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:06.639   19:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.yDD ]]
00:28:06.639   19:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.yDD
00:28:06.639   19:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:06.639   19:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:06.639   19:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:06.639   19:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}"
00:28:06.639   19:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.H7A
00:28:06.639   19:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:06.639   19:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:06.639   19:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:06.639   19:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.d2W ]]
00:28:06.639   19:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.d2W
00:28:06.639   19:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:06.639   19:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:06.639   19:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:06.639   19:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}"
00:28:06.639   19:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.C02
00:28:06.639   19:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:06.639   19:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:06.639   19:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:06.639   19:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]]
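The loop above registers every generated key file with the running target's keyring over RPC, pairing keyN with its controller key ckeyN where one exists (ckey4 is empty, so the final [[ -n '' ]] is skipped). A sketch of the underlying calls for keyid 1, assuming rpc_cmd resolves to rpc.py on the default /var/tmp/spdk.sock socket:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" keyring_file_add_key key1  /tmp/spdk.key-null.7JM      # host secret for keyid 1
"$rpc" keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Awh    # matching controller secret
"$rpc" keyring_get_keys    # optional sanity check, if available in this SPDK revision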
00:28:06.639   19:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init
00:28:06.639    19:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip
00:28:06.639    19:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:06.639    19:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:06.639    19:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:06.639    19:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:06.639    19:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:06.639    19:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:06.639    19:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:06.639    19:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:06.639    19:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:06.639    19:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:06.639   19:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1
00:28:06.639   19:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1
00:28:06.639   19:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet
00:28:06.639   19:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:28:06.639   19:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:28:06.639   19:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1
00:28:06.639   19:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme
00:28:06.639   19:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]]
00:28:06.639   19:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet
00:28:06.639   19:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]]
00:28:06.639   19:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:28:06.899  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:28:06.899  Waiting for block devices as requested
00:28:07.174  0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:28:07.174  0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:28:07.754   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme*
00:28:07.754   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]]
00:28:07.754   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1
00:28:07.754   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:28:07.754   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:28:07.754   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:28:07.754   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1
00:28:07.754   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt
00:28:07.754   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1
00:28:07.754  No valid GPT data, bailing
00:28:07.754    19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:28:07.755   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt=
00:28:07.755   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1
00:28:07.755   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1
00:28:07.755   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme*
00:28:07.755   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]]
00:28:07.755   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2
00:28:07.755   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n2
00:28:07.755   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]]
00:28:07.755   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:28:07.755   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n2
00:28:07.755   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt
00:28:07.755   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2
00:28:07.755  No valid GPT data, bailing
00:28:07.755    19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2
00:28:07.755   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt=
00:28:07.755   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1
00:28:07.755   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2
00:28:07.755   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme*
00:28:07.755   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]]
00:28:07.755   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3
00:28:07.755   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n3
00:28:07.755   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]]
00:28:07.755   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:28:07.755   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n3
00:28:07.755   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt
00:28:07.755   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3
00:28:08.016  No valid GPT data, bailing
00:28:08.016    19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3
00:28:08.016   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt=
00:28:08.016   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1
00:28:08.016   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3
00:28:08.016   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme*
00:28:08.016   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]]
00:28:08.016   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1
00:28:08.016   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n1
00:28:08.016   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]]
00:28:08.016   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:28:08.016   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1
00:28:08.016   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt
00:28:08.016   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1
00:28:08.016  No valid GPT data, bailing
00:28:08.016    19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1
00:28:08.016   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt=
00:28:08.016   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1
00:28:08.016   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1
00:28:08.016   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]]
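setup.sh reset has just handed the emulated controllers back to the kernel nvme driver, and the loop above walks /sys/block/nvme*, skipping zoned devices and anything that already carries a partition table, so the last free namespace (/dev/nvme1n1 here) ends up backing the kernel target. A simplified sketch of that selection (blkid only, without the spdk-gpt.py pass that produced the "No valid GPT data, bailing" lines):

pick_free_nvme() {
    local blk dev result=
    for blk in /sys/block/nvme*; do
        dev=/dev/${blk##*/}
        # skip zoned namespaces; they cannot back a plain nvmet namespace here
        [[ -e $blk/queue/zoned && $(<"$blk/queue/zoned") != none ]] && continue
        # skip anything that already has a partition table
        [[ -n $(blkid -s PTTYPE -o value "$dev") ]] && continue
        result=$dev        # last free device wins, matching the trace
    done
    [[ -b $result ]] && echo "$result"
}
pick_free_nvme             # expected to print /dev/nvme1n1 on this runner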
00:28:08.016   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:28:08.016   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:28:08.016   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:28:08.016   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0
00:28:08.016   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1
00:28:08.016   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1
00:28:08.016   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1
00:28:08.016   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1
00:28:08.016   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp
00:28:08.016   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420
00:28:08.016   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4
00:28:08.016   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/
00:28:08.016   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid=8f716199-a3ae-4f70-9a3a-0556e5b7497a -a 10.0.0.1 -t tcp -s 4420
00:28:08.016  
00:28:08.016  Discovery Log Number of Records 2, Generation counter 2
00:28:08.016  =====Discovery Log Entry 0======
00:28:08.016  trtype:  tcp
00:28:08.016  adrfam:  ipv4
00:28:08.016  subtype: current discovery subsystem
00:28:08.016  treq:    not specified, sq flow control disable supported
00:28:08.016  portid:  1
00:28:08.016  trsvcid: 4420
00:28:08.016  subnqn:  nqn.2014-08.org.nvmexpress.discovery
00:28:08.016  traddr:  10.0.0.1
00:28:08.016  eflags:  none
00:28:08.016  sectype: none
00:28:08.016  =====Discovery Log Entry 1======
00:28:08.016  trtype:  tcp
00:28:08.016  adrfam:  ipv4
00:28:08.016  subtype: nvme subsystem
00:28:08.016  treq:    not specified, sq flow control disable supported
00:28:08.016  portid:  1
00:28:08.016  trsvcid: 4420
00:28:08.016  subnqn:  nqn.2024-02.io.spdk:cnode0
00:28:08.016  traddr:  10.0.0.1
00:28:08.016  eflags:  none
00:28:08.016  sectype: none
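configure_kernel_target then stands up a kernel-mode nvmet export: it creates the subsystem, namespace and port directories under configfs, points the namespace at /dev/nvme1n1, opens a TCP listener on 10.0.0.1:4420 and links the subsystem into the port; the discovery log above (two records, the discovery subsystem plus nqn.2024-02.io.spdk:cnode0) confirms the export is visible from the initiator side. A condensed sketch of the configfs sequence; the redirect targets are inferred from the standard nvmet configfs attribute names, since the bare echoes in the trace do not show them, and the echo 0 / ln -s that follow restrict access to nqn.2024-02.io.spdk:host0:

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=$nvmet/ports/1
mkdir -p "$subsys/namespaces/1" "$port"
echo 1            > "$subsys/attr_allow_any_host"    # flipped back to 0 once host0 is whitelisted
echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"
nvme discover -t tcp -a 10.0.0.1 -s 4420              # should list the two records shown above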
00:28:08.016   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:28:08.016   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0
00:28:08.016   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
00:28:08.016   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:28:08.016   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:08.016   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:08.016   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:28:08.016   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:28:08.016   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWU1ZmJlMGYwN2Y3MGU3NTFhN2NjZjY4M2ZkNGRkMGZkOWRmNmQ5YmRjNDJhMTk3lr1K/g==:
00:28:08.016   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjViMDBhMGNhZmUwOTk5NTFmZjNlYzEzYzhmODYxODc3YTAyN2ZhODdhODE1OTg0SHWihg==:
00:28:08.016   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:08.016   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:28:08.275   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWU1ZmJlMGYwN2Y3MGU3NTFhN2NjZjY4M2ZkNGRkMGZkOWRmNmQ5YmRjNDJhMTk3lr1K/g==:
00:28:08.275   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjViMDBhMGNhZmUwOTk5NTFmZjNlYzEzYzhmODYxODc3YTAyN2ZhODdhODE1OTg0SHWihg==: ]]
00:28:08.275   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjViMDBhMGNhZmUwOTk5NTFmZjNlYzEzYzhmODYxODc3YTAyN2ZhODdhODE1OTg0SHWihg==:
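nvmet_auth_set_key arms DH-HMAC-CHAP on the kernel target side for host0: it writes the hash, the DH group, the host secret for the chosen keyid, and, when a ckey exists (as for keyid 1 here), the controller secret that makes authentication bidirectional. A sketch with the redirect targets inferred from the kernel nvmet host configfs attributes, which the bare echoes above do not show; the two DHHC-1 strings are the full values echoed at host/auth.sh@50 and @51 above, truncated here only for brevity:

host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)'      > "$host/dhchap_hash"        # negotiated hash
echo ffdhe2048           > "$host/dhchap_dhgroup"     # negotiated DH group
echo "DHHC-1:00:YWU1..." > "$host/dhchap_key"         # host secret (full value above)
echo "DHHC-1:02:NjVi..." > "$host/dhchap_ctrl_key"    # controller secret -> bidirectional auth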
00:28:08.275    19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=,
00:28:08.275    19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512
00:28:08.275    19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=,
00:28:08.275    19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:28:08.275   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1
00:28:08.275   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:08.275   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512
00:28:08.275   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:28:08.275   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:28:08.275   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:08.275   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:28:08.275   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:08.275   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.275   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:08.275    19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:08.275    19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:08.275    19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:08.275    19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:08.275    19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:08.275    19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:08.275    19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:08.275    19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:08.275    19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:08.275    19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:08.275    19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:08.275   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:28:08.275   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:08.275   19:12:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.275  nvme0n1
00:28:08.275   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:08.275    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:08.275    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:08.275    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:08.275    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.275    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:08.275   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:08.275   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:08.275   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:08.275   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.275   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:08.275   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:28:08.275   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:28:08.275   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:08.275   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0
00:28:08.275   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:08.275   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:08.275   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:28:08.275   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:28:08.275   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWY0ZWMwMWIyM2E0OGFhMzQ5YTM4Njg2N2Y0MDEwZDXoDg5D:
00:28:08.275   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGI0OTkzMDNkOGU1NjE5ZmZjNjJmNTYyMzFlZWQ3ZDIwZWY4OTc1ZWM1MjdkMGZiZDNhNjkxMTg5YmJiNWQ2ONlaKSg=:
00:28:08.275   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:08.275   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:28:08.275   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWY0ZWMwMWIyM2E0OGFhMzQ5YTM4Njg2N2Y0MDEwZDXoDg5D:
00:28:08.275   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGI0OTkzMDNkOGU1NjE5ZmZjNjJmNTYyMzFlZWQ3ZDIwZWY4OTc1ZWM1MjdkMGZiZDNhNjkxMTg5YmJiNWQ2ONlaKSg=: ]]
00:28:08.275   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGI0OTkzMDNkOGU1NjE5ZmZjNjJmNTYyMzFlZWQ3ZDIwZWY4OTc1ZWM1MjdkMGZiZDNhNjkxMTg5YmJiNWQ2ONlaKSg=:
00:28:08.275   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0
00:28:08.275   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:08.275   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:08.275   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:28:08.275   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:28:08.275   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:08.275   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:28:08.275   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:08.275   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.275   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:08.275    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:08.275    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:08.275    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:08.275    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:08.275    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:08.534    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:08.534    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:08.534    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:08.534    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:08.534    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:08.534    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:08.534   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:28:08.534   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:08.534   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.534  nvme0n1
00:28:08.534   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:08.534    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:08.534    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:08.534    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:08.534    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.534    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:08.534   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:08.534   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:08.534   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:08.534   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.534   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:08.534   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:08.534   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:28:08.534   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:08.534   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:08.534   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:28:08.534   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:28:08.534   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWU1ZmJlMGYwN2Y3MGU3NTFhN2NjZjY4M2ZkNGRkMGZkOWRmNmQ5YmRjNDJhMTk3lr1K/g==:
00:28:08.535   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjViMDBhMGNhZmUwOTk5NTFmZjNlYzEzYzhmODYxODc3YTAyN2ZhODdhODE1OTg0SHWihg==:
00:28:08.535   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:08.535   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:28:08.535   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWU1ZmJlMGYwN2Y3MGU3NTFhN2NjZjY4M2ZkNGRkMGZkOWRmNmQ5YmRjNDJhMTk3lr1K/g==:
00:28:08.535   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjViMDBhMGNhZmUwOTk5NTFmZjNlYzEzYzhmODYxODc3YTAyN2ZhODdhODE1OTg0SHWihg==: ]]
00:28:08.535   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjViMDBhMGNhZmUwOTk5NTFmZjNlYzEzYzhmODYxODc3YTAyN2ZhODdhODE1OTg0SHWihg==:
00:28:08.535   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1
00:28:08.535   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:08.535   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:08.535   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:28:08.535   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:28:08.535   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:08.535   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:28:08.535   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:08.535   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.535   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:08.535    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:08.535    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:08.535    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:08.535    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:08.535    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:08.535    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:08.535    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:08.535    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:08.535    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:08.535    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:08.535    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:08.535   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:28:08.535   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:08.535   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.794  nvme0n1
00:28:08.794   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:08.794    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:08.794    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:08.794    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:08.794    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.794    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:08.794   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:08.794   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:08.794   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:08.794   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.794   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:08.794   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:08.794   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2
00:28:08.794   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:08.794   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:08.794   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:28:08.794   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:28:08.794   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWQ2YWNiMmZiNmY4NGFjNzZiMTMwOWRiZjZhNGJiZDOyK4DQ:
00:28:08.794   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTY0Y2VjMzFlYjUyODQyZjc5ZjEzMjU2MDZkZmYxZjh8OXnd:
00:28:08.794   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:08.794   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:28:08.794   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWQ2YWNiMmZiNmY4NGFjNzZiMTMwOWRiZjZhNGJiZDOyK4DQ:
00:28:08.794   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTY0Y2VjMzFlYjUyODQyZjc5ZjEzMjU2MDZkZmYxZjh8OXnd: ]]
00:28:08.794   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTY0Y2VjMzFlYjUyODQyZjc5ZjEzMjU2MDZkZmYxZjh8OXnd:
00:28:08.794   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2
00:28:08.794   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:08.794   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:08.794   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:28:08.794   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:28:08.794   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:08.794   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:28:08.794   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:08.794   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.794   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:08.794    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:08.794    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:08.794    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:08.794    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:08.794    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:08.794    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:08.794    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:08.794    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:08.794    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:08.794    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:08.794    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:08.794   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:28:08.794   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:08.794   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.794  nvme0n1
00:28:08.794   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:08.794    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:08.794    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:08.794    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:08.794    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.794    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:08.794   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:08.794   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:08.794   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:08.794   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:09.053   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:09.053   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:09.053   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3
00:28:09.053   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:09.053   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:09.053   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:28:09.053   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:28:09.053   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmEwN2EwNWJiYTQxMGJiYWY2ZGFlNGFiMmIyZDllYTJjMDhhYzBmOTY2MjY1MWM1Xl2EhQ==:
00:28:09.053   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2VhNGNkNDM1NWU4ZjkyMWZlYmM0NDIzNGM0ZDVlOTWK7Yms:
00:28:09.053   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:09.053   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:28:09.053   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmEwN2EwNWJiYTQxMGJiYWY2ZGFlNGFiMmIyZDllYTJjMDhhYzBmOTY2MjY1MWM1Xl2EhQ==:
00:28:09.053   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2VhNGNkNDM1NWU4ZjkyMWZlYmM0NDIzNGM0ZDVlOTWK7Yms: ]]
00:28:09.053   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2VhNGNkNDM1NWU4ZjkyMWZlYmM0NDIzNGM0ZDVlOTWK7Yms:
00:28:09.053   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3
00:28:09.053   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:09.053   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:09.053   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:28:09.053   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:28:09.053   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:09.053   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:28:09.053   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:09.053   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:09.053   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:09.053    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:09.053    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:09.054    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:09.054    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:09.054    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:09.054    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:09.054    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:09.054    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:09.054    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:09.054    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:09.054    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:09.054   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:28:09.054   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:09.054   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:09.054  nvme0n1
00:28:09.054   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:09.054    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:09.054    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:09.054    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:09.054    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:09.054    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:09.054   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:09.054   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:09.054   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:09.054   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:09.054   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:09.054   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:09.054   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4
00:28:09.054   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:09.054   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:09.054   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:28:09.054   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:28:09.054   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjg4Y2YyZTQzZjZkZDVmZGE5YTBkNWFkYzA0YTg0NTk5ZmVmNjMyZDhmMGM2MTc1OWE0MDk3NGE5MDFmYjM3YcCJh5o=:
00:28:09.054   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:28:09.054   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:09.054   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:28:09.054   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjg4Y2YyZTQzZjZkZDVmZGE5YTBkNWFkYzA0YTg0NTk5ZmVmNjMyZDhmMGM2MTc1OWE0MDk3NGE5MDFmYjM3YcCJh5o=:
00:28:09.054   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:28:09.054   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4
00:28:09.054   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:09.054   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:09.054   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:28:09.054   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:28:09.054   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:09.054   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:28:09.054   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:09.054   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:09.054   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:09.054    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:09.054    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:09.054    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:09.054    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:09.054    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:09.054    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:09.054    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:09.054    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:09.054    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:09.054    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:09.054    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:09.054   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:28:09.054   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:09.054   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:09.313  nvme0n1
00:28:09.313   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:09.313    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:09.313    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:09.313    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:09.313    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:09.313    19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:09.313   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:09.313   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:09.313   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:09.313   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:09.313   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:09.313   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:28:09.313   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:09.313   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0
00:28:09.313   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:09.313   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:09.313   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:28:09.313   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:28:09.313   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWY0ZWMwMWIyM2E0OGFhMzQ5YTM4Njg2N2Y0MDEwZDXoDg5D:
00:28:09.313   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGI0OTkzMDNkOGU1NjE5ZmZjNjJmNTYyMzFlZWQ3ZDIwZWY4OTc1ZWM1MjdkMGZiZDNhNjkxMTg5YmJiNWQ2ONlaKSg=:
00:28:09.313   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:09.313   19:12:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:28:09.572   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWY0ZWMwMWIyM2E0OGFhMzQ5YTM4Njg2N2Y0MDEwZDXoDg5D:
00:28:09.572   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGI0OTkzMDNkOGU1NjE5ZmZjNjJmNTYyMzFlZWQ3ZDIwZWY4OTc1ZWM1MjdkMGZiZDNhNjkxMTg5YmJiNWQ2ONlaKSg=: ]]
00:28:09.572   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGI0OTkzMDNkOGU1NjE5ZmZjNjJmNTYyMzFlZWQ3ZDIwZWY4OTc1ZWM1MjdkMGZiZDNhNjkxMTg5YmJiNWQ2ONlaKSg=:
00:28:09.572   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0
00:28:09.572   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:09.572   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:09.572   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:28:09.572   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:28:09.572   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:09.572   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:28:09.572   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:09.572   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:09.572   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:09.572    19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:09.572    19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:09.572    19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:09.572    19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:09.572    19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:09.572    19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:09.572    19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:09.572    19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:09.572    19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:09.572    19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:09.572    19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:09.572   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:28:09.572   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:09.572   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:09.831  nvme0n1
00:28:09.831   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:09.831    19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:09.831    19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:09.831    19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:09.831    19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:09.831    19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:09.831   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:09.831   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:09.831   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:09.831   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:09.831   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:09.831   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:09.831   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1
00:28:09.831   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:09.831   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:09.831   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:28:09.831   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:28:09.831   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWU1ZmJlMGYwN2Y3MGU3NTFhN2NjZjY4M2ZkNGRkMGZkOWRmNmQ5YmRjNDJhMTk3lr1K/g==:
00:28:09.831   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjViMDBhMGNhZmUwOTk5NTFmZjNlYzEzYzhmODYxODc3YTAyN2ZhODdhODE1OTg0SHWihg==:
00:28:09.831   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:09.831   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:28:09.831   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWU1ZmJlMGYwN2Y3MGU3NTFhN2NjZjY4M2ZkNGRkMGZkOWRmNmQ5YmRjNDJhMTk3lr1K/g==:
00:28:09.831   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjViMDBhMGNhZmUwOTk5NTFmZjNlYzEzYzhmODYxODc3YTAyN2ZhODdhODE1OTg0SHWihg==: ]]
00:28:09.831   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjViMDBhMGNhZmUwOTk5NTFmZjNlYzEzYzhmODYxODc3YTAyN2ZhODdhODE1OTg0SHWihg==:
00:28:09.831   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1
00:28:09.831   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:09.831   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:09.831   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:28:09.831   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:28:09.831   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:09.831   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:28:09.831   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:09.831   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:09.831   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:09.831    19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:09.831    19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:09.831    19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:09.831    19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:09.831    19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:09.831    19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:09.831    19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:09.831    19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:09.831    19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:09.831    19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:09.831    19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:09.831   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:28:09.831   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:09.831   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:09.831  nvme0n1
00:28:09.831   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:09.831    19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:09.831    19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:09.831    19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:09.831    19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:09.831    19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:10.090   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:10.090   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:10.090   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:10.090   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:10.090   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:10.090   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:10.090   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2
00:28:10.090   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:10.090   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:10.090   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:28:10.090   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:28:10.090   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWQ2YWNiMmZiNmY4NGFjNzZiMTMwOWRiZjZhNGJiZDOyK4DQ:
00:28:10.090   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTY0Y2VjMzFlYjUyODQyZjc5ZjEzMjU2MDZkZmYxZjh8OXnd:
00:28:10.090   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:10.090   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:28:10.090   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWQ2YWNiMmZiNmY4NGFjNzZiMTMwOWRiZjZhNGJiZDOyK4DQ:
00:28:10.090   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTY0Y2VjMzFlYjUyODQyZjc5ZjEzMjU2MDZkZmYxZjh8OXnd: ]]
00:28:10.090   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTY0Y2VjMzFlYjUyODQyZjc5ZjEzMjU2MDZkZmYxZjh8OXnd:
00:28:10.090   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2
00:28:10.090   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:10.090   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:10.090   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:28:10.090   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:28:10.090   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:10.090   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:28:10.090   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:10.090   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:10.090   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:10.091    19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:10.091    19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:10.091    19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:10.091    19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:10.091    19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:10.091    19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:10.091    19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:10.091    19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:10.091    19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:10.091    19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:10.091    19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:10.091   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:28:10.091   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:10.091   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:10.091  nvme0n1
00:28:10.091   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:10.091    19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:10.091    19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:10.091    19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:10.091    19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:10.091    19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:10.091   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:10.091   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:10.091   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:10.091   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:10.091   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:10.091   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:10.091   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3
00:28:10.091   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:10.091   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:10.091   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:28:10.091   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:28:10.091   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmEwN2EwNWJiYTQxMGJiYWY2ZGFlNGFiMmIyZDllYTJjMDhhYzBmOTY2MjY1MWM1Xl2EhQ==:
00:28:10.091   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2VhNGNkNDM1NWU4ZjkyMWZlYmM0NDIzNGM0ZDVlOTWK7Yms:
00:28:10.091   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:10.091   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:28:10.091   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmEwN2EwNWJiYTQxMGJiYWY2ZGFlNGFiMmIyZDllYTJjMDhhYzBmOTY2MjY1MWM1Xl2EhQ==:
00:28:10.091   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2VhNGNkNDM1NWU4ZjkyMWZlYmM0NDIzNGM0ZDVlOTWK7Yms: ]]
00:28:10.091   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2VhNGNkNDM1NWU4ZjkyMWZlYmM0NDIzNGM0ZDVlOTWK7Yms:
00:28:10.091   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3
00:28:10.091   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:10.091   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:10.091   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:28:10.091   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:28:10.091   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:10.091   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:28:10.091   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:10.091   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:10.091   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:10.091    19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:10.091    19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:10.091    19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:10.091    19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:10.091    19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:10.091    19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:10.091    19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:10.091    19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:10.091    19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:10.091    19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:10.091    19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:10.091   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:28:10.091   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:10.091   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:10.350  nvme0n1
00:28:10.350   19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:10.350    19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:10.350    19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:10.350    19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:10.350    19:12:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:10.350    19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:10.350   19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:10.350   19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:10.350   19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:10.350   19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:10.350   19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:10.350   19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:10.350   19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4
00:28:10.350   19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:10.350   19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:10.350   19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:28:10.350   19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:28:10.350   19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjg4Y2YyZTQzZjZkZDVmZGE5YTBkNWFkYzA0YTg0NTk5ZmVmNjMyZDhmMGM2MTc1OWE0MDk3NGE5MDFmYjM3YcCJh5o=:
00:28:10.350   19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:28:10.350   19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:10.350   19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:28:10.350   19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjg4Y2YyZTQzZjZkZDVmZGE5YTBkNWFkYzA0YTg0NTk5ZmVmNjMyZDhmMGM2MTc1OWE0MDk3NGE5MDFmYjM3YcCJh5o=:
00:28:10.350   19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:28:10.350   19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4
00:28:10.350   19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:10.350   19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:10.350   19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:28:10.350   19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:28:10.350   19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:10.350   19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:28:10.350   19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:10.350   19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:10.350   19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:10.350    19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:10.350    19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:10.350    19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:10.350    19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:10.350    19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:10.350    19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:10.350    19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:10.350    19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:10.350    19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:10.350    19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:10.350    19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:10.350   19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:28:10.350   19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:10.350   19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:10.609  nvme0n1
00:28:10.609   19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:10.609    19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:10.609    19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:10.609    19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:10.609    19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:10.609    19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:10.609   19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:10.609   19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:10.609   19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:10.609   19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:10.609   19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
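The pass above (sha256 with ffdhe3072, key index 4) ends here with the controller detached, and the @101 marker on the next line shows the script advancing to the next DH group. Read together, the @101-@104 frames trace a nested loop over DH groups and key indices in which each iteration first programs the key on the target (nvmet_auth_set_key) and then authenticates from the SPDK host side (connect_authenticate). A minimal sketch of that driver, reconstructed from the trace (the array names and the exact dhgroup list in host/auth.sh are assumptions):

# reconstructed shape of the host/auth.sh driving loop (@101-@104); illustrative only
digest=sha256
dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)     # groups seen in this part of the trace
for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do                      # keys[0..4] are generated earlier in the run
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # program the kernel nvmet target
        connect_authenticate "$digest" "$dhgroup" "$keyid"   # attach, verify, detach on the host
    done
done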
00:28:10.609   19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:28:10.609   19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:10.610   19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0
00:28:10.610   19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:10.610   19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:10.610   19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:28:10.610   19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:28:10.610   19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWY0ZWMwMWIyM2E0OGFhMzQ5YTM4Njg2N2Y0MDEwZDXoDg5D:
00:28:10.610   19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGI0OTkzMDNkOGU1NjE5ZmZjNjJmNTYyMzFlZWQ3ZDIwZWY4OTc1ZWM1MjdkMGZiZDNhNjkxMTg5YmJiNWQ2ONlaKSg=:
00:28:10.610   19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:10.610   19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:28:11.176   19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWY0ZWMwMWIyM2E0OGFhMzQ5YTM4Njg2N2Y0MDEwZDXoDg5D:
00:28:11.176   19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGI0OTkzMDNkOGU1NjE5ZmZjNjJmNTYyMzFlZWQ3ZDIwZWY4OTc1ZWM1MjdkMGZiZDNhNjkxMTg5YmJiNWQ2ONlaKSg=: ]]
00:28:11.176   19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGI0OTkzMDNkOGU1NjE5ZmZjNjJmNTYyMzFlZWQ3ZDIwZWY4OTc1ZWM1MjdkMGZiZDNhNjkxMTg5YmJiNWQ2ONlaKSg=:
00:28:11.176   19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0
00:28:11.176   19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:11.176   19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:11.176   19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:28:11.176   19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:28:11.176   19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:11.176   19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:28:11.176   19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:11.176   19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:11.176   19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:11.176    19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:11.176    19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:11.177    19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:11.177    19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:11.177    19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:11.177    19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:11.177    19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:11.177    19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:11.177    19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:11.177    19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:11.177    19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:11.177   19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:28:11.177   19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:11.177   19:12:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:11.435  nvme0n1
00:28:11.435   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:11.435    19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:11.435    19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:11.435    19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:11.435    19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:11.435    19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:11.435   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:11.435   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:11.435   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:11.435   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:11.435   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
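In each nvmet_auth_set_key frame (@42-@51) the three echo lines at @48-@50 emit the digest name, the DH group, and the host secret, with @51 emitting the controller secret when one is defined. These values are most plausibly written into the kernel nvmet target's per-host DH-CHAP attributes; the configfs paths below are an assumption, since the trace only shows the values being echoed:

# assumed destination of the @48-@51 values (kernel nvmet configfs; paths not shown in the trace)
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)' > "$host/dhchap_hash"       # digest used for the DH-HMAC-CHAP challenge
echo ffdhe4096      > "$host/dhchap_dhgroup"    # DH group for the exchange
echo "$key"         > "$host/dhchap_key"        # host secret, the @45 value
[[ -n "$ckey" ]] && echo "$ckey" > "$host/dhchap_ctrl_key"   # controller secret (@46), if set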
00:28:11.435   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:11.435   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1
00:28:11.435   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:11.435   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:11.435   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:28:11.435   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:28:11.435   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWU1ZmJlMGYwN2Y3MGU3NTFhN2NjZjY4M2ZkNGRkMGZkOWRmNmQ5YmRjNDJhMTk3lr1K/g==:
00:28:11.436   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjViMDBhMGNhZmUwOTk5NTFmZjNlYzEzYzhmODYxODc3YTAyN2ZhODdhODE1OTg0SHWihg==:
00:28:11.436   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:11.436   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:28:11.436   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWU1ZmJlMGYwN2Y3MGU3NTFhN2NjZjY4M2ZkNGRkMGZkOWRmNmQ5YmRjNDJhMTk3lr1K/g==:
00:28:11.436   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjViMDBhMGNhZmUwOTk5NTFmZjNlYzEzYzhmODYxODc3YTAyN2ZhODdhODE1OTg0SHWihg==: ]]
00:28:11.436   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjViMDBhMGNhZmUwOTk5NTFmZjNlYzEzYzhmODYxODc3YTAyN2ZhODdhODE1OTg0SHWihg==:
00:28:11.436   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1
00:28:11.436   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:11.436   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:11.436   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:28:11.436   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:28:11.436   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:11.436   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:28:11.436   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:11.436   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:11.436   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:11.436    19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:11.436    19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:11.436    19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:11.436    19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:11.436    19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:11.436    19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:11.436    19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:11.436    19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:11.436    19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:11.436    19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:11.436    19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:11.436   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:28:11.436   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:11.436   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:11.695  nvme0n1
00:28:11.695   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:11.695    19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:11.695    19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:11.695    19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:11.695    19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:11.695    19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:11.695   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:11.695   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:11.695   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:11.695   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:11.695   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
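The host-side half of each iteration is the pair of RPCs at @60 and @61: restrict the allowed digests and DH groups, then attach with the matching DH-CHAP key names. The bare nvme0n1 lines interleaved in the trace appear to be the stdout of the attach call, i.e. the bdev it creates. Assuming rpc_cmd wraps scripts/rpc.py against the running SPDK application, and that key1/ckey1 are key names registered earlier in the run, the equivalent standalone commands for the ffdhe4096/keyid=1 iteration above are:

# standalone form of the @60/@61 RPCs
scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1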
00:28:11.695   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:11.695   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2
00:28:11.695   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:11.695   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:11.695   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:28:11.695   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:28:11.695   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWQ2YWNiMmZiNmY4NGFjNzZiMTMwOWRiZjZhNGJiZDOyK4DQ:
00:28:11.695   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTY0Y2VjMzFlYjUyODQyZjc5ZjEzMjU2MDZkZmYxZjh8OXnd:
00:28:11.695   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:11.695   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:28:11.695   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWQ2YWNiMmZiNmY4NGFjNzZiMTMwOWRiZjZhNGJiZDOyK4DQ:
00:28:11.695   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTY0Y2VjMzFlYjUyODQyZjc5ZjEzMjU2MDZkZmYxZjh8OXnd: ]]
00:28:11.695   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTY0Y2VjMzFlYjUyODQyZjc5ZjEzMjU2MDZkZmYxZjh8OXnd:
00:28:11.695   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2
00:28:11.695   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:11.695   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:11.695   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:28:11.695   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:28:11.695   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:11.695   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:28:11.695   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:11.695   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:11.695   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:11.695    19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:11.695    19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:11.695    19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:11.695    19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:11.695    19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:11.695    19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:11.695    19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:11.695    19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:11.695    19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:11.695    19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:11.695    19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:11.695   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:28:11.695   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:11.695   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:11.954  nvme0n1
00:28:11.954   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:11.954    19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:11.954    19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:11.954    19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:11.954    19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:11.954    19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:11.954   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:11.954   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:11.954   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:11.954   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:11.954   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
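Before every attach, get_main_ns_ip (@769-@783) resolves which address to dial: it maps the transport in use to the name of the environment variable holding the initiator-facing IP and then dereferences it, yielding 10.0.0.1 for TCP in this run. A compact sketch of that logic (the transport variable name is an assumption; the trace only shows the literal 'tcp'):

# sketch of get_main_ns_ip as traced at @769-@783
declare -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
var=${ip_candidates[$TEST_TRANSPORT]}   # tcp -> NVMF_INITIATOR_IP
echo "${!var}"                          # indirect expansion -> 10.0.0.1 here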
00:28:11.954   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:11.954   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3
00:28:11.954   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:11.954   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:11.954   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:28:11.954   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:28:11.954   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmEwN2EwNWJiYTQxMGJiYWY2ZGFlNGFiMmIyZDllYTJjMDhhYzBmOTY2MjY1MWM1Xl2EhQ==:
00:28:11.954   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2VhNGNkNDM1NWU4ZjkyMWZlYmM0NDIzNGM0ZDVlOTWK7Yms:
00:28:11.954   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:11.954   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:28:11.954   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmEwN2EwNWJiYTQxMGJiYWY2ZGFlNGFiMmIyZDllYTJjMDhhYzBmOTY2MjY1MWM1Xl2EhQ==:
00:28:11.954   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2VhNGNkNDM1NWU4ZjkyMWZlYmM0NDIzNGM0ZDVlOTWK7Yms: ]]
00:28:11.954   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2VhNGNkNDM1NWU4ZjkyMWZlYmM0NDIzNGM0ZDVlOTWK7Yms:
00:28:11.954   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3
00:28:11.954   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:11.954   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:11.954   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:28:11.954   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:28:11.954   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:11.954   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:28:11.954   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:11.954   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:11.954   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:11.954    19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:11.954    19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:11.954    19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:11.954    19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:11.954    19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:11.954    19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:11.954    19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:11.954    19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:11.954    19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:11.954    19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:11.954    19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:11.954   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:28:11.954   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:11.954   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:12.213  nvme0n1
00:28:12.213   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:12.213    19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:12.213    19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:12.213    19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:12.213    19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:12.213    19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:12.213   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:12.213   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:12.213   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:12.213   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:12.213   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
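Each iteration is judged by the @64/@65 frames: list the attached controllers, check that the expected name comes back (which only happens if authentication succeeded), then detach so the next key can be exercised. In standalone form, still assuming rpc_cmd wraps scripts/rpc.py:

# the @64/@65 success check and cleanup
name=$(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')   # "nvme0" when auth succeeded
[[ "$name" == "nvme0" ]]
scripts/rpc.py bdev_nvme_detach_controller nvme0                      # tear down before the next keyid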
00:28:12.213   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:12.213   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4
00:28:12.213   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:12.213   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:12.213   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:28:12.213   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:28:12.213   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjg4Y2YyZTQzZjZkZDVmZGE5YTBkNWFkYzA0YTg0NTk5ZmVmNjMyZDhmMGM2MTc1OWE0MDk3NGE5MDFmYjM3YcCJh5o=:
00:28:12.213   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:28:12.213   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:12.213   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:28:12.213   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjg4Y2YyZTQzZjZkZDVmZGE5YTBkNWFkYzA0YTg0NTk5ZmVmNjMyZDhmMGM2MTc1OWE0MDk3NGE5MDFmYjM3YcCJh5o=:
00:28:12.213   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:28:12.213   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4
00:28:12.213   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:12.213   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:12.213   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:28:12.213   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:28:12.213   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:12.213   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:28:12.213   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:12.213   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:12.213   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:12.213    19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:12.213    19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:12.213    19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:12.213    19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:12.213    19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:12.213    19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:12.213    19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:12.213    19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:12.213    19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:12.214    19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:12.214    19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:12.214   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:28:12.214   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:12.214   19:12:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:12.472  nvme0n1
00:28:12.472   19:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:12.472    19:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:12.472    19:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:12.472    19:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:12.472    19:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:12.472    19:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:12.472   19:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:12.472   19:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:12.472   19:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:12.472   19:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:12.472   19:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
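Key index 4 is the one-directional case: @46 leaves ckey empty, the @51 [[ -z '' ]] test skips the controller-secret step on the target, and the @58 array expansion drops --dhchap-ctrlr-key from the attach, so only the host authenticates to the controller. The idiom at @58 is worth spelling out (illustrative; the variable names follow the trace):

# @58: add --dhchap-ctrlr-key only when a controller key exists for this key index
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})   # empty array for keyid=4
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" "${ckey[@]}"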
00:28:12.472   19:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:28:12.472   19:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:12.472   19:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0
00:28:12.472   19:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:12.472   19:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:12.472   19:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:28:12.472   19:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:28:12.472   19:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWY0ZWMwMWIyM2E0OGFhMzQ5YTM4Njg2N2Y0MDEwZDXoDg5D:
00:28:12.472   19:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGI0OTkzMDNkOGU1NjE5ZmZjNjJmNTYyMzFlZWQ3ZDIwZWY4OTc1ZWM1MjdkMGZiZDNhNjkxMTg5YmJiNWQ2ONlaKSg=:
00:28:12.472   19:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:12.472   19:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:28:13.848   19:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWY0ZWMwMWIyM2E0OGFhMzQ5YTM4Njg2N2Y0MDEwZDXoDg5D:
00:28:13.848   19:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGI0OTkzMDNkOGU1NjE5ZmZjNjJmNTYyMzFlZWQ3ZDIwZWY4OTc1ZWM1MjdkMGZiZDNhNjkxMTg5YmJiNWQ2ONlaKSg=: ]]
00:28:13.848   19:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGI0OTkzMDNkOGU1NjE5ZmZjNjJmNTYyMzFlZWQ3ZDIwZWY4OTc1ZWM1MjdkMGZiZDNhNjkxMTg5YmJiNWQ2ONlaKSg=:
00:28:13.848   19:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0
00:28:13.848   19:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:13.848   19:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:13.848   19:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:28:13.848   19:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:28:13.848   19:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:13.848   19:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:28:13.848   19:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:13.848   19:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:13.848   19:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:13.848    19:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:13.848    19:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:13.848    19:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:13.848    19:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:13.848    19:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:13.848    19:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:13.848    19:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:13.848    19:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:13.848    19:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:13.848    19:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:13.848    19:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:13.848   19:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:28:13.848   19:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:13.848   19:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:14.415  nvme0n1
00:28:14.415   19:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:14.415    19:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:14.415    19:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:14.415    19:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:14.415    19:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:14.415    19:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:14.415   19:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:14.415   19:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:14.415   19:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:14.415   19:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:14.415   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:14.415   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:14.415   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1
00:28:14.415   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:14.415   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:14.415   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:28:14.415   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:28:14.415   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWU1ZmJlMGYwN2Y3MGU3NTFhN2NjZjY4M2ZkNGRkMGZkOWRmNmQ5YmRjNDJhMTk3lr1K/g==:
00:28:14.415   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjViMDBhMGNhZmUwOTk5NTFmZjNlYzEzYzhmODYxODc3YTAyN2ZhODdhODE1OTg0SHWihg==:
00:28:14.415   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:14.415   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:28:14.415   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWU1ZmJlMGYwN2Y3MGU3NTFhN2NjZjY4M2ZkNGRkMGZkOWRmNmQ5YmRjNDJhMTk3lr1K/g==:
00:28:14.415   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjViMDBhMGNhZmUwOTk5NTFmZjNlYzEzYzhmODYxODc3YTAyN2ZhODdhODE1OTg0SHWihg==: ]]
00:28:14.415   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjViMDBhMGNhZmUwOTk5NTFmZjNlYzEzYzhmODYxODc3YTAyN2ZhODdhODE1OTg0SHWihg==:
00:28:14.415   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1
00:28:14.415   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:14.415   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:14.415   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:28:14.415   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:28:14.415   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:14.415   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:28:14.415   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:14.415   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:14.415   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:14.415    19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:14.415    19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:14.415    19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:14.415    19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:14.415    19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:14.415    19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:14.415    19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:14.415    19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:14.415    19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:14.415    19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:14.415    19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:14.415   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:28:14.415   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:14.415   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:14.674  nvme0n1
00:28:14.674   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:14.674    19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:14.674    19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:14.674    19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:14.674    19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:14.674    19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:14.674   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:14.674   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:14.674   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:14.674   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:14.674   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:14.674   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:14.674   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2
00:28:14.674   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:14.674   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:14.674   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:28:14.674   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:28:14.674   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWQ2YWNiMmZiNmY4NGFjNzZiMTMwOWRiZjZhNGJiZDOyK4DQ:
00:28:14.674   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTY0Y2VjMzFlYjUyODQyZjc5ZjEzMjU2MDZkZmYxZjh8OXnd:
00:28:14.674   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:14.674   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:28:14.674   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWQ2YWNiMmZiNmY4NGFjNzZiMTMwOWRiZjZhNGJiZDOyK4DQ:
00:28:14.674   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTY0Y2VjMzFlYjUyODQyZjc5ZjEzMjU2MDZkZmYxZjh8OXnd: ]]
00:28:14.674   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTY0Y2VjMzFlYjUyODQyZjc5ZjEzMjU2MDZkZmYxZjh8OXnd:
00:28:14.674   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2
00:28:14.674   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:14.674   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:14.674   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:28:14.674   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:28:14.674   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:14.674   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:28:14.674   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:14.674   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:14.674   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:14.674    19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:14.674    19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:14.674    19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:14.674    19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:14.674    19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:14.674    19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:14.674    19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:14.674    19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:14.674    19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:14.675    19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:14.675    19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:14.675   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:28:14.675   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:14.675   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:14.933  nvme0n1
00:28:14.933   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:14.933    19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:14.933    19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:14.933    19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:14.933    19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:14.933    19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:15.192   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:15.192   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:15.192   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:15.192   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:15.192   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:15.192   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:15.192   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3
00:28:15.192   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:15.192   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:15.192   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:28:15.192   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:28:15.192   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmEwN2EwNWJiYTQxMGJiYWY2ZGFlNGFiMmIyZDllYTJjMDhhYzBmOTY2MjY1MWM1Xl2EhQ==:
00:28:15.192   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2VhNGNkNDM1NWU4ZjkyMWZlYmM0NDIzNGM0ZDVlOTWK7Yms:
00:28:15.192   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:15.192   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:28:15.192   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmEwN2EwNWJiYTQxMGJiYWY2ZGFlNGFiMmIyZDllYTJjMDhhYzBmOTY2MjY1MWM1Xl2EhQ==:
00:28:15.192   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2VhNGNkNDM1NWU4ZjkyMWZlYmM0NDIzNGM0ZDVlOTWK7Yms: ]]
00:28:15.192   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2VhNGNkNDM1NWU4ZjkyMWZlYmM0NDIzNGM0ZDVlOTWK7Yms:
00:28:15.192   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3
00:28:15.192   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:15.192   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:15.192   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:28:15.192   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:28:15.192   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:15.192   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:28:15.192   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:15.192   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:15.192   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:15.192    19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:15.192    19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:15.192    19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:15.192    19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:15.192    19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:15.192    19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:15.192    19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:15.192    19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:15.192    19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:15.192    19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:15.192    19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:15.192   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:28:15.192   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:15.192   19:12:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:15.451  nvme0n1
00:28:15.451   19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:15.451    19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:15.451    19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:15.451    19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:15.451    19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:15.451    19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:15.451   19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:15.451   19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:15.451   19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:15.451   19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:15.451   19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:15.451   19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:15.451   19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4
00:28:15.451   19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:15.451   19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:15.451   19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:28:15.451   19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:28:15.451   19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjg4Y2YyZTQzZjZkZDVmZGE5YTBkNWFkYzA0YTg0NTk5ZmVmNjMyZDhmMGM2MTc1OWE0MDk3NGE5MDFmYjM3YcCJh5o=:
00:28:15.451   19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:28:15.451   19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:15.451   19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:28:15.451   19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjg4Y2YyZTQzZjZkZDVmZGE5YTBkNWFkYzA0YTg0NTk5ZmVmNjMyZDhmMGM2MTc1OWE0MDk3NGE5MDFmYjM3YcCJh5o=:
00:28:15.451   19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:28:15.451   19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4
00:28:15.451   19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:15.451   19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:15.451   19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:28:15.451   19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:28:15.451   19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:15.451   19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:28:15.451   19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:15.451   19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:15.451   19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:15.451    19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:15.451    19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:15.451    19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:15.451    19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:15.451    19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:15.451    19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:15.451    19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:15.451    19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:15.451    19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:15.451    19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:15.451    19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:15.451   19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:28:15.451   19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:15.451   19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:16.019  nvme0n1
00:28:16.019   19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:16.019    19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:16.019    19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:16.019    19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:16.019    19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:16.019    19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:16.019   19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:16.019   19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:16.019   19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:16.019   19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:16.019   19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:16.019   19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:28:16.019   19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:16.019   19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0
00:28:16.019   19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:16.019   19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:16.019   19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:28:16.019   19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:28:16.019   19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWY0ZWMwMWIyM2E0OGFhMzQ5YTM4Njg2N2Y0MDEwZDXoDg5D:
00:28:16.019   19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGI0OTkzMDNkOGU1NjE5ZmZjNjJmNTYyMzFlZWQ3ZDIwZWY4OTc1ZWM1MjdkMGZiZDNhNjkxMTg5YmJiNWQ2ONlaKSg=:
00:28:16.019   19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:16.019   19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:28:16.019   19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWY0ZWMwMWIyM2E0OGFhMzQ5YTM4Njg2N2Y0MDEwZDXoDg5D:
00:28:16.019   19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGI0OTkzMDNkOGU1NjE5ZmZjNjJmNTYyMzFlZWQ3ZDIwZWY4OTc1ZWM1MjdkMGZiZDNhNjkxMTg5YmJiNWQ2ONlaKSg=: ]]
00:28:16.019   19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGI0OTkzMDNkOGU1NjE5ZmZjNjJmNTYyMzFlZWQ3ZDIwZWY4OTc1ZWM1MjdkMGZiZDNhNjkxMTg5YmJiNWQ2ONlaKSg=:
00:28:16.019   19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0
00:28:16.019   19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:16.019   19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:16.019   19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:28:16.019   19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:28:16.019   19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:16.019   19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:28:16.019   19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:16.019   19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:16.019   19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:16.019    19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:16.019    19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:16.019    19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:16.019    19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:16.019    19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:16.019    19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:16.019    19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:16.019    19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:16.019    19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:16.019    19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:16.019    19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:16.019   19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:28:16.019   19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:16.019   19:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:16.587  nvme0n1
00:28:16.587   19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:16.587    19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:16.587    19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:16.587    19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:16.587    19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:16.587    19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:16.587   19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:16.587   19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:16.587   19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:16.587   19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:16.587   19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
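[editor's note] The indented get_main_ns_ip lines in each iteration above pick the address the host dials: rdma would use NVMF_FIRST_TARGET_IP, tcp uses NVMF_INITIATOR_IP, which resolves to 10.0.0.1 in this run. A simplified sketch of that selection, assuming the transport is carried in a variable here called TEST_TRANSPORT (the trace only shows the literal value tcp, so the variable name and the omitted empty-value fallbacks are assumptions):

  get_main_ns_ip() {
      local ip
      local -A ip_candidates=(
          [rdma]=NVMF_FIRST_TARGET_IP
          [tcp]=NVMF_INITIATOR_IP
      )
      # pick the env var name that matches the transport, then dereference it
      ip=${ip_candidates[$TEST_TRANSPORT]}   # -> NVMF_INITIATOR_IP for tcp
      echo "${!ip}"                          # -> 10.0.0.1 in this run
  }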
00:28:16.587   19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:16.587   19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1
00:28:16.587   19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:16.587   19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:16.587   19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:28:16.587   19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:28:16.587   19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWU1ZmJlMGYwN2Y3MGU3NTFhN2NjZjY4M2ZkNGRkMGZkOWRmNmQ5YmRjNDJhMTk3lr1K/g==:
00:28:16.587   19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjViMDBhMGNhZmUwOTk5NTFmZjNlYzEzYzhmODYxODc3YTAyN2ZhODdhODE1OTg0SHWihg==:
00:28:16.587   19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:16.587   19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:28:16.587   19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWU1ZmJlMGYwN2Y3MGU3NTFhN2NjZjY4M2ZkNGRkMGZkOWRmNmQ5YmRjNDJhMTk3lr1K/g==:
00:28:16.587   19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjViMDBhMGNhZmUwOTk5NTFmZjNlYzEzYzhmODYxODc3YTAyN2ZhODdhODE1OTg0SHWihg==: ]]
00:28:16.587   19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjViMDBhMGNhZmUwOTk5NTFmZjNlYzEzYzhmODYxODc3YTAyN2ZhODdhODE1OTg0SHWihg==:
00:28:16.587   19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1
00:28:16.587   19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:16.587   19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:16.587   19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:28:16.587   19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:28:16.587   19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:16.587   19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:28:16.587   19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:16.587   19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:16.587   19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:16.587    19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:16.587    19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:16.587    19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:16.587    19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:16.587    19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:16.587    19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:16.587    19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:16.587    19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:16.587    19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:16.587    19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:16.587    19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:16.587   19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:28:16.587   19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:16.587   19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:17.154  nvme0n1
00:28:17.154   19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:17.154    19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:17.154    19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:17.154    19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:17.154    19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:17.154    19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:17.154   19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:17.154   19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:17.154   19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:17.154   19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:17.154   19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:17.154   19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:17.154   19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2
00:28:17.154   19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:17.154   19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:17.154   19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:28:17.154   19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:28:17.154   19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWQ2YWNiMmZiNmY4NGFjNzZiMTMwOWRiZjZhNGJiZDOyK4DQ:
00:28:17.154   19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTY0Y2VjMzFlYjUyODQyZjc5ZjEzMjU2MDZkZmYxZjh8OXnd:
00:28:17.154   19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:17.154   19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:28:17.155   19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWQ2YWNiMmZiNmY4NGFjNzZiMTMwOWRiZjZhNGJiZDOyK4DQ:
00:28:17.155   19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTY0Y2VjMzFlYjUyODQyZjc5ZjEzMjU2MDZkZmYxZjh8OXnd: ]]
00:28:17.155   19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTY0Y2VjMzFlYjUyODQyZjc5ZjEzMjU2MDZkZmYxZjh8OXnd:
00:28:17.155   19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2
00:28:17.155   19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:17.155   19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:17.155   19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:28:17.155   19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:28:17.155   19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:17.155   19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:28:17.155   19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:17.155   19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:17.155   19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:17.155    19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:17.155    19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:17.155    19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:17.155    19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:17.155    19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:17.155    19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:17.155    19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:17.155    19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:17.155    19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:17.155    19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:17.155    19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:17.155   19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:28:17.155   19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:17.155   19:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:17.722  nvme0n1
00:28:17.722   19:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:17.722    19:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:17.722    19:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:17.722    19:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:17.722    19:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:17.722    19:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:17.722   19:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:17.722   19:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:17.722   19:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:17.722   19:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:17.722   19:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:17.722   19:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:17.722   19:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3
00:28:17.722   19:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:17.722   19:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:17.722   19:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:28:17.722   19:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:28:17.722   19:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmEwN2EwNWJiYTQxMGJiYWY2ZGFlNGFiMmIyZDllYTJjMDhhYzBmOTY2MjY1MWM1Xl2EhQ==:
00:28:17.722   19:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2VhNGNkNDM1NWU4ZjkyMWZlYmM0NDIzNGM0ZDVlOTWK7Yms:
00:28:17.722   19:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:17.722   19:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:28:17.722   19:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmEwN2EwNWJiYTQxMGJiYWY2ZGFlNGFiMmIyZDllYTJjMDhhYzBmOTY2MjY1MWM1Xl2EhQ==:
00:28:17.722   19:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2VhNGNkNDM1NWU4ZjkyMWZlYmM0NDIzNGM0ZDVlOTWK7Yms: ]]
00:28:17.722   19:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2VhNGNkNDM1NWU4ZjkyMWZlYmM0NDIzNGM0ZDVlOTWK7Yms:
00:28:17.722   19:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3
00:28:17.722   19:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:17.722   19:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:17.722   19:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:28:17.722   19:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:28:17.722   19:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:17.722   19:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:28:17.722   19:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:17.722   19:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:17.722   19:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:17.722    19:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:17.722    19:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:17.722    19:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:17.722    19:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:17.722    19:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:17.722    19:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:17.722    19:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:17.722    19:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:17.722    19:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:17.722    19:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:17.722    19:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:17.722   19:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:28:17.722   19:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:17.722   19:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:18.290  nvme0n1
00:28:18.290   19:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:18.290    19:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:18.290    19:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:18.290    19:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:18.290    19:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:18.290    19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:18.290   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:18.290   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:18.290   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:18.290   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:18.290   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:18.290   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:18.290   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4
00:28:18.290   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:18.290   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:18.290   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:28:18.290   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:28:18.290   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjg4Y2YyZTQzZjZkZDVmZGE5YTBkNWFkYzA0YTg0NTk5ZmVmNjMyZDhmMGM2MTc1OWE0MDk3NGE5MDFmYjM3YcCJh5o=:
00:28:18.290   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:28:18.290   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:18.291   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:28:18.291   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjg4Y2YyZTQzZjZkZDVmZGE5YTBkNWFkYzA0YTg0NTk5ZmVmNjMyZDhmMGM2MTc1OWE0MDk3NGE5MDFmYjM3YcCJh5o=:
00:28:18.291   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:28:18.291   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4
00:28:18.291   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:18.291   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:18.291   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:28:18.291   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:28:18.291   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:18.291   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:28:18.291   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:18.291   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:18.291   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:18.291    19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:18.291    19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:18.291    19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:18.291    19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:18.291    19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:18.291    19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:18.291    19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:18.291    19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:18.291    19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:18.291    19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:18.291    19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:18.291   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:28:18.291   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:18.291   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:18.857  nvme0n1
00:28:18.857   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:18.857    19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:18.857    19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:18.857    19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:18.857    19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:18.857    19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:18.857   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:18.857   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:18.857   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:18.857   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:18.857   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
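[editor's note] At this point the trace moves from sha256 to sha384. The surrounding structure is three nested loops (host/auth.sh lines @100-@104 in the markers): every digest is combined with every DH group and every key index, the kernel target is reprogrammed, and then the host connects. A skeleton of that driver loop, reconstructed from the trace; the array contents are illustrative, since only sha256/sha384 and the ffdhe2048/3072/6144/8192 groups appear in this excerpt:

  # illustrative contents; the excerpt does not show the full arrays
  digests=(sha256 sha384)
  dhgroups=(ffdhe2048 ffdhe3072 ffdhe6144 ffdhe8192)
  for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
          for keyid in "${!keys[@]}"; do
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # program the kernel target side
              connect_authenticate "$digest" "$dhgroup" "$keyid"  # connect, verify controller, detach
          done
      done
  done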
00:28:18.857   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:28:18.857   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:28:18.857   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:18.857   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0
00:28:18.857   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:18.857   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:18.857   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:28:18.857   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:28:18.857   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWY0ZWMwMWIyM2E0OGFhMzQ5YTM4Njg2N2Y0MDEwZDXoDg5D:
00:28:18.857   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGI0OTkzMDNkOGU1NjE5ZmZjNjJmNTYyMzFlZWQ3ZDIwZWY4OTc1ZWM1MjdkMGZiZDNhNjkxMTg5YmJiNWQ2ONlaKSg=:
00:28:18.857   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:18.857   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:28:18.857   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWY0ZWMwMWIyM2E0OGFhMzQ5YTM4Njg2N2Y0MDEwZDXoDg5D:
00:28:18.857   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGI0OTkzMDNkOGU1NjE5ZmZjNjJmNTYyMzFlZWQ3ZDIwZWY4OTc1ZWM1MjdkMGZiZDNhNjkxMTg5YmJiNWQ2ONlaKSg=: ]]
00:28:18.857   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGI0OTkzMDNkOGU1NjE5ZmZjNjJmNTYyMzFlZWQ3ZDIwZWY4OTc1ZWM1MjdkMGZiZDNhNjkxMTg5YmJiNWQ2ONlaKSg=:
00:28:18.857   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0
00:28:18.857   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:18.857   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:18.857   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:28:18.857   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:28:18.857   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:18.857   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:28:18.857   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:18.857   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:18.857   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:19.115    19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:19.115    19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:19.115    19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:19.115    19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:19.115    19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:19.115    19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:19.115    19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:19.115    19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:19.115    19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:19.115    19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:19.115    19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:19.115   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:28:19.115   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:19.115   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:19.115  nvme0n1
00:28:19.115   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:19.115    19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:19.115    19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:19.115    19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:19.115    19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:19.115    19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:19.116   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:19.116   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:19.116   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:19.116   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:19.116   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:19.116   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:19.116   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1
00:28:19.116   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:19.116   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:19.116   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:28:19.116   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:28:19.116   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWU1ZmJlMGYwN2Y3MGU3NTFhN2NjZjY4M2ZkNGRkMGZkOWRmNmQ5YmRjNDJhMTk3lr1K/g==:
00:28:19.116   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjViMDBhMGNhZmUwOTk5NTFmZjNlYzEzYzhmODYxODc3YTAyN2ZhODdhODE1OTg0SHWihg==:
00:28:19.116   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:19.116   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:28:19.116   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWU1ZmJlMGYwN2Y3MGU3NTFhN2NjZjY4M2ZkNGRkMGZkOWRmNmQ5YmRjNDJhMTk3lr1K/g==:
00:28:19.116   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjViMDBhMGNhZmUwOTk5NTFmZjNlYzEzYzhmODYxODc3YTAyN2ZhODdhODE1OTg0SHWihg==: ]]
00:28:19.116   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjViMDBhMGNhZmUwOTk5NTFmZjNlYzEzYzhmODYxODc3YTAyN2ZhODdhODE1OTg0SHWihg==:
00:28:19.116   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1
00:28:19.116   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:19.116   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:19.116   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:28:19.116   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:28:19.116   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:19.116   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:28:19.116   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:19.116   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:19.116   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:19.116    19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:19.116    19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:19.116    19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:19.116    19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:19.116    19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:19.116    19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:19.116    19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:19.116    19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:19.116    19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:19.116    19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:19.116    19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:19.116   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:28:19.116   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:19.116   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:19.374  nvme0n1
00:28:19.374   19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:19.374    19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:19.374    19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:19.374    19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:19.374    19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:19.374    19:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:19.374   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:19.374   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:19.374   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:19.374   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:19.374   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:19.374   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:19.374   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2
00:28:19.374   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:19.374   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:19.374   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:28:19.374   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:28:19.374   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWQ2YWNiMmZiNmY4NGFjNzZiMTMwOWRiZjZhNGJiZDOyK4DQ:
00:28:19.374   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTY0Y2VjMzFlYjUyODQyZjc5ZjEzMjU2MDZkZmYxZjh8OXnd:
00:28:19.374   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:19.374   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:28:19.374   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWQ2YWNiMmZiNmY4NGFjNzZiMTMwOWRiZjZhNGJiZDOyK4DQ:
00:28:19.374   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTY0Y2VjMzFlYjUyODQyZjc5ZjEzMjU2MDZkZmYxZjh8OXnd: ]]
00:28:19.374   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTY0Y2VjMzFlYjUyODQyZjc5ZjEzMjU2MDZkZmYxZjh8OXnd:
00:28:19.374   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2
00:28:19.375   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:19.375   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:19.375   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:28:19.375   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:28:19.375   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:19.375   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:28:19.375   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:19.375   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:19.375   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:19.375    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:19.375    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:19.375    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:19.375    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:19.375    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:19.375    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:19.375    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:19.375    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:19.375    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:19.375    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:19.375    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:19.375   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:28:19.375   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:19.375   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:19.375  nvme0n1
00:28:19.375   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:19.375    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:19.375    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:19.375    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:19.375    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:19.375    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:19.634   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:19.634   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:19.634   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:19.634   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:19.634   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:19.634   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:19.634   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3
00:28:19.634   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:19.634   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:19.634   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:28:19.634   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:28:19.634   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmEwN2EwNWJiYTQxMGJiYWY2ZGFlNGFiMmIyZDllYTJjMDhhYzBmOTY2MjY1MWM1Xl2EhQ==:
00:28:19.634   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2VhNGNkNDM1NWU4ZjkyMWZlYmM0NDIzNGM0ZDVlOTWK7Yms:
00:28:19.634   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:19.634   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:28:19.634   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmEwN2EwNWJiYTQxMGJiYWY2ZGFlNGFiMmIyZDllYTJjMDhhYzBmOTY2MjY1MWM1Xl2EhQ==:
00:28:19.634   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2VhNGNkNDM1NWU4ZjkyMWZlYmM0NDIzNGM0ZDVlOTWK7Yms: ]]
00:28:19.634   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2VhNGNkNDM1NWU4ZjkyMWZlYmM0NDIzNGM0ZDVlOTWK7Yms:
00:28:19.634   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3
00:28:19.634   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:19.634   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:19.634   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:28:19.634   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:28:19.634   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:19.634   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:28:19.634   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:19.634   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:19.634   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:19.634    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:19.634    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:19.634    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:19.634    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:19.634    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:19.634    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:19.634    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:19.635    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:19.635    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:19.635    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:19.635    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:19.635   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:28:19.635   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:19.635   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:19.635  nvme0n1
00:28:19.635   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:19.635    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:19.635    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:19.635    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:19.635    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:19.635    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:19.635   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:19.635   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:19.635   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:19.635   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:19.635   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:19.635   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:19.635   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4
00:28:19.635   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:19.635   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:19.635   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:28:19.635   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:28:19.635   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjg4Y2YyZTQzZjZkZDVmZGE5YTBkNWFkYzA0YTg0NTk5ZmVmNjMyZDhmMGM2MTc1OWE0MDk3NGE5MDFmYjM3YcCJh5o=:
00:28:19.635   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:28:19.635   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:19.635   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:28:19.635   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjg4Y2YyZTQzZjZkZDVmZGE5YTBkNWFkYzA0YTg0NTk5ZmVmNjMyZDhmMGM2MTc1OWE0MDk3NGE5MDFmYjM3YcCJh5o=:
00:28:19.635   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:28:19.635   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4
00:28:19.635   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:19.635   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:19.635   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:28:19.635   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:28:19.635   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:19.635   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:28:19.635   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:19.635   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:19.635   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:19.635    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:19.635    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:19.635    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:19.635    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:19.635    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:19.635    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:19.635    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:19.635    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:19.635    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:19.635    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:19.635    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:19.635   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:28:19.635   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:19.635   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:19.894  nvme0n1
00:28:19.894   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:19.894    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:19.894    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:19.894    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:19.894    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:19.894    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:19.894   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:19.894   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:19.894   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:19.894   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:19.894   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
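[editor's note] The trace above completes one full connect/verify/detach pass, and the for-loops at host/auth.sh@101 and @102 that follow restart it for the next DH group and key id. A minimal sketch of that loop structure, reconstructed from the trace rather than copied verbatim from host/auth.sh:
  for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
          nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # program the kernel nvmet target side
          connect_authenticate "$digest" "$dhgroup" "$keyid"  # attach from the host, verify, detach
      done
  done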
00:28:19.894   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:28:19.894   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:19.894   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0
00:28:19.894   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:19.894   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:19.894   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:28:19.894   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:28:19.894   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWY0ZWMwMWIyM2E0OGFhMzQ5YTM4Njg2N2Y0MDEwZDXoDg5D:
00:28:19.894   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGI0OTkzMDNkOGU1NjE5ZmZjNjJmNTYyMzFlZWQ3ZDIwZWY4OTc1ZWM1MjdkMGZiZDNhNjkxMTg5YmJiNWQ2ONlaKSg=:
00:28:19.894   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:19.894   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:28:19.894   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWY0ZWMwMWIyM2E0OGFhMzQ5YTM4Njg2N2Y0MDEwZDXoDg5D:
00:28:19.894   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGI0OTkzMDNkOGU1NjE5ZmZjNjJmNTYyMzFlZWQ3ZDIwZWY4OTc1ZWM1MjdkMGZiZDNhNjkxMTg5YmJiNWQ2ONlaKSg=: ]]
00:28:19.894   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGI0OTkzMDNkOGU1NjE5ZmZjNjJmNTYyMzFlZWQ3ZDIwZWY4OTc1ZWM1MjdkMGZiZDNhNjkxMTg5YmJiNWQ2ONlaKSg=:
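[editor's note] The echo calls at host/auth.sh@48-@51 above push the digest, DH group and DHHC-1 keys to the kernel nvmet target. A hedged sketch of where those writes most likely land, assuming the standard Linux nvmet configfs host attributes (the paths are an assumption, they are not shown in the log, and the key values are elided):
  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed host NQN directory
  echo 'hmac(sha384)'  > "$host/dhchap_hash"      # digest used for DH-HMAC-CHAP
  echo 'ffdhe3072'     > "$host/dhchap_dhgroup"   # FFDHE group
  echo 'DHHC-1:00:...' > "$host/dhchap_key"       # host key for this keyid
  echo 'DHHC-1:03:...' > "$host/dhchap_ctrl_key"  # controller key, written only when a ckey exists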
00:28:19.894   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0
00:28:19.894   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:19.894   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:19.894   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:28:19.894   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:28:19.894   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:19.894   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:28:19.894   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:19.894   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:19.894   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:19.894    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:19.894    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:19.894    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:19.894    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:19.894    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:19.894    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:19.894    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:19.894    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:19.894    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:19.894    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:19.894    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:19.894   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:28:19.894   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:19.894   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:19.894  nvme0n1
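[editor's note] The bdev_nvme_attach_controller call at host/auth.sh@61 above is what actually exercises DH-HMAC-CHAP from the host side; key0/ckey0 are names of keys registered earlier in the test run (not shown in this excerpt). A stand-alone equivalent, assuming an SPDK checkout with scripts/rpc.py talking to the running target application:
  ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0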
00:28:19.894   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:19.894    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:19.894    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:19.894    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:19.894    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:20.153    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:20.153   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:20.153   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:20.153   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:20.153   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:20.153   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:20.153   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:20.153   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1
00:28:20.153   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:20.153   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:20.153   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:28:20.153   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:28:20.153   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWU1ZmJlMGYwN2Y3MGU3NTFhN2NjZjY4M2ZkNGRkMGZkOWRmNmQ5YmRjNDJhMTk3lr1K/g==:
00:28:20.153   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjViMDBhMGNhZmUwOTk5NTFmZjNlYzEzYzhmODYxODc3YTAyN2ZhODdhODE1OTg0SHWihg==:
00:28:20.153   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:20.153   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:28:20.153   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWU1ZmJlMGYwN2Y3MGU3NTFhN2NjZjY4M2ZkNGRkMGZkOWRmNmQ5YmRjNDJhMTk3lr1K/g==:
00:28:20.153   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjViMDBhMGNhZmUwOTk5NTFmZjNlYzEzYzhmODYxODc3YTAyN2ZhODdhODE1OTg0SHWihg==: ]]
00:28:20.153   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjViMDBhMGNhZmUwOTk5NTFmZjNlYzEzYzhmODYxODc3YTAyN2ZhODdhODE1OTg0SHWihg==:
00:28:20.153   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1
00:28:20.153   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:20.153   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:20.153   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:28:20.153   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:28:20.153   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:20.153   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:28:20.153   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:20.153   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:20.153   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:20.153    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:20.153    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:20.153    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:20.153    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:20.153    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:20.153    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:20.153    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:20.153    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:20.153    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:20.153    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:20.153    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
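[editor's note] get_main_ns_ip, traced at nvmf/common.sh@769-@783 just above, only decides which address variable to dereference for the chosen transport. A condensed sketch of the logic the trace shows (simplified, not the verbatim helper):
  get_main_ns_ip() {
      local ip
      local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
      ip=${ip_candidates[$TEST_TRANSPORT]}   # e.g. NVMF_INITIATOR_IP for tcp
      ip=${!ip}                              # indirect expansion -> 10.0.0.1 in this run
      [[ -n $ip ]] && echo "$ip"
  }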
00:28:20.153   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:28:20.153   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:20.153   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:20.153  nvme0n1
00:28:20.153   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:20.153    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:20.153    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:20.153    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:20.153    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:20.153    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:20.153   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:20.153   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:20.153   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:20.153   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:20.153   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
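[editor's note] The verification step traced at host/auth.sh@64-@65 is deliberately simple: if the DH-HMAC-CHAP handshake had failed, the attach above would not have produced a controller, so listing controllers and matching the expected name is the pass/fail check. Equivalent sketch:
  name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == "nvme0" ]]                      # controller exists => authentication succeeded
  rpc_cmd bdev_nvme_detach_controller nvme0   # tear down before the next dhgroup/keyid combination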
00:28:20.153   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:20.153   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2
00:28:20.153   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:20.153   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:20.153   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:28:20.153   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:28:20.153   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWQ2YWNiMmZiNmY4NGFjNzZiMTMwOWRiZjZhNGJiZDOyK4DQ:
00:28:20.153   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTY0Y2VjMzFlYjUyODQyZjc5ZjEzMjU2MDZkZmYxZjh8OXnd:
00:28:20.153   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:20.153   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:28:20.154   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWQ2YWNiMmZiNmY4NGFjNzZiMTMwOWRiZjZhNGJiZDOyK4DQ:
00:28:20.413   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTY0Y2VjMzFlYjUyODQyZjc5ZjEzMjU2MDZkZmYxZjh8OXnd: ]]
00:28:20.413   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTY0Y2VjMzFlYjUyODQyZjc5ZjEzMjU2MDZkZmYxZjh8OXnd:
00:28:20.413   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2
00:28:20.413   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:20.413   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:20.413   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:28:20.413   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:28:20.413   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:20.413   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:28:20.413   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:20.413   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:20.413   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:20.413    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:20.413    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:20.413    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:20.413    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:20.413    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:20.413    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:20.413    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:20.413    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:20.413    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:20.413    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:20.413    19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:20.413   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:28:20.413   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:20.413   19:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:20.413  nvme0n1
00:28:20.413   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:20.413    19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:20.413    19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:20.413    19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:20.413    19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:20.413    19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:20.413   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:20.413   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:20.413   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:20.413   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:20.413   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:20.413   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:20.413   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3
00:28:20.413   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:20.413   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:20.413   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:28:20.413   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:28:20.413   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmEwN2EwNWJiYTQxMGJiYWY2ZGFlNGFiMmIyZDllYTJjMDhhYzBmOTY2MjY1MWM1Xl2EhQ==:
00:28:20.413   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2VhNGNkNDM1NWU4ZjkyMWZlYmM0NDIzNGM0ZDVlOTWK7Yms:
00:28:20.413   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:20.413   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:28:20.413   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmEwN2EwNWJiYTQxMGJiYWY2ZGFlNGFiMmIyZDllYTJjMDhhYzBmOTY2MjY1MWM1Xl2EhQ==:
00:28:20.413   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2VhNGNkNDM1NWU4ZjkyMWZlYmM0NDIzNGM0ZDVlOTWK7Yms: ]]
00:28:20.413   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2VhNGNkNDM1NWU4ZjkyMWZlYmM0NDIzNGM0ZDVlOTWK7Yms:
00:28:20.413   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3
00:28:20.413   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:20.413   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:20.413   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:28:20.413   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:28:20.413   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:20.413   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:28:20.413   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:20.413   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:20.413   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:20.413    19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:20.413    19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:20.413    19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:20.413    19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:20.413    19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:20.413    19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:20.413    19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:20.413    19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:20.413    19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:20.413    19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:20.413    19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:20.413   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:28:20.413   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:20.413   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:20.672  nvme0n1
00:28:20.672   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:20.672    19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:20.672    19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:20.672    19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:20.672    19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:20.672    19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:20.672   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:20.672   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:20.672   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:20.672   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:20.672   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:20.672   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:20.672   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4
00:28:20.672   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:20.672   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:20.672   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:28:20.672   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:28:20.672   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjg4Y2YyZTQzZjZkZDVmZGE5YTBkNWFkYzA0YTg0NTk5ZmVmNjMyZDhmMGM2MTc1OWE0MDk3NGE5MDFmYjM3YcCJh5o=:
00:28:20.672   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:28:20.672   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:20.672   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:28:20.672   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjg4Y2YyZTQzZjZkZDVmZGE5YTBkNWFkYzA0YTg0NTk5ZmVmNjMyZDhmMGM2MTc1OWE0MDk3NGE5MDFmYjM3YcCJh5o=:
00:28:20.672   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
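[editor's note] keyid 4 is the one entry without a paired controller key: ckey is empty at host/auth.sh@46, the [[ -z '' ]] test at @51 skips the controller-key write, and the attach below therefore runs with --dhchap-key key4 only, i.e. unidirectional authentication. The conditional array expansion at @58 is what drops the flag; sketch:
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})   # empty array when ckeys[keyid] is unset or empty
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key${keyid}" "${ckey[@]}"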
00:28:20.672   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4
00:28:20.672   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:20.672   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:20.672   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:28:20.672   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:28:20.672   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:20.672   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:28:20.672   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:20.672   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:20.672   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:20.672    19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:20.673    19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:20.673    19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:20.673    19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:20.673    19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:20.673    19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:20.673    19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:20.673    19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:20.673    19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:20.673    19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:20.673    19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:20.673   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:28:20.673   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:20.673   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:20.931  nvme0n1
00:28:20.931   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:20.931    19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:20.931    19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:20.931    19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:20.931    19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:20.931    19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:20.931   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:20.931   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:20.931   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:20.931   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:20.931   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:20.931   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:28:20.932   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:20.932   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0
00:28:20.932   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:20.932   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:20.932   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:28:20.932   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:28:20.932   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWY0ZWMwMWIyM2E0OGFhMzQ5YTM4Njg2N2Y0MDEwZDXoDg5D:
00:28:20.932   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGI0OTkzMDNkOGU1NjE5ZmZjNjJmNTYyMzFlZWQ3ZDIwZWY4OTc1ZWM1MjdkMGZiZDNhNjkxMTg5YmJiNWQ2ONlaKSg=:
00:28:20.932   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:20.932   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:28:20.932   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWY0ZWMwMWIyM2E0OGFhMzQ5YTM4Njg2N2Y0MDEwZDXoDg5D:
00:28:20.932   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGI0OTkzMDNkOGU1NjE5ZmZjNjJmNTYyMzFlZWQ3ZDIwZWY4OTc1ZWM1MjdkMGZiZDNhNjkxMTg5YmJiNWQ2ONlaKSg=: ]]
00:28:20.932   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGI0OTkzMDNkOGU1NjE5ZmZjNjJmNTYyMzFlZWQ3ZDIwZWY4OTc1ZWM1MjdkMGZiZDNhNjkxMTg5YmJiNWQ2ONlaKSg=:
00:28:20.932   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0
00:28:20.932   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:20.932   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:20.932   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:28:20.932   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:28:20.932   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:20.932   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:28:20.932   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:20.932   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:20.932   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:20.932    19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:20.932    19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:20.932    19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:20.932    19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:20.932    19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:20.932    19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:20.932    19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:20.932    19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:20.932    19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:20.932    19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:20.932    19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:20.932   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:28:20.932   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:20.932   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:21.190  nvme0n1
00:28:21.190   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:21.190    19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:21.190    19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:21.190    19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:21.190    19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:21.190    19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:21.190   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:21.190   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:21.190   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:21.190   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:21.190   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:21.190   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:21.190   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1
00:28:21.190   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:21.190   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:21.190   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:28:21.190   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:28:21.190   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWU1ZmJlMGYwN2Y3MGU3NTFhN2NjZjY4M2ZkNGRkMGZkOWRmNmQ5YmRjNDJhMTk3lr1K/g==:
00:28:21.190   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjViMDBhMGNhZmUwOTk5NTFmZjNlYzEzYzhmODYxODc3YTAyN2ZhODdhODE1OTg0SHWihg==:
00:28:21.190   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:21.190   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:28:21.190   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWU1ZmJlMGYwN2Y3MGU3NTFhN2NjZjY4M2ZkNGRkMGZkOWRmNmQ5YmRjNDJhMTk3lr1K/g==:
00:28:21.190   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjViMDBhMGNhZmUwOTk5NTFmZjNlYzEzYzhmODYxODc3YTAyN2ZhODdhODE1OTg0SHWihg==: ]]
00:28:21.190   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjViMDBhMGNhZmUwOTk5NTFmZjNlYzEzYzhmODYxODc3YTAyN2ZhODdhODE1OTg0SHWihg==:
00:28:21.190   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1
00:28:21.190   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:21.190   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:21.190   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:28:21.190   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:28:21.190   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:21.190   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:28:21.190   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:21.190   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:21.190   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:21.190    19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:21.190    19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:21.190    19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:21.190    19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:21.190    19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:21.190    19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:21.190    19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:21.190    19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:21.190    19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:21.190    19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:21.190    19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:21.190   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:28:21.190   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:21.190   19:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:21.449  nvme0n1
00:28:21.449   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:21.449    19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:21.449    19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:21.449    19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:21.449    19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:21.449    19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:21.449   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:21.449   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:21.449   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:21.449   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:21.449   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:21.449   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:21.449   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2
00:28:21.449   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:21.449   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:21.449   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:28:21.449   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:28:21.449   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWQ2YWNiMmZiNmY4NGFjNzZiMTMwOWRiZjZhNGJiZDOyK4DQ:
00:28:21.449   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTY0Y2VjMzFlYjUyODQyZjc5ZjEzMjU2MDZkZmYxZjh8OXnd:
00:28:21.449   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:21.449   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:28:21.449   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWQ2YWNiMmZiNmY4NGFjNzZiMTMwOWRiZjZhNGJiZDOyK4DQ:
00:28:21.449   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTY0Y2VjMzFlYjUyODQyZjc5ZjEzMjU2MDZkZmYxZjh8OXnd: ]]
00:28:21.449   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTY0Y2VjMzFlYjUyODQyZjc5ZjEzMjU2MDZkZmYxZjh8OXnd:
00:28:21.449   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2
00:28:21.450   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:21.450   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:21.450   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:28:21.450   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:28:21.450   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:21.450   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:28:21.450   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:21.450   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:21.450   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:21.450    19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:21.450    19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:21.450    19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:21.450    19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:21.450    19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:21.450    19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:21.450    19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:21.450    19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:21.450    19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:21.450    19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:21.450    19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:21.450   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:28:21.450   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:21.450   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:21.450  nvme0n1
00:28:21.708   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:21.708    19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:21.708    19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:21.708    19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:21.708    19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:21.708    19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:21.708   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:21.708   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:21.708   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:21.708   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:21.708   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:21.708   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:21.708   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3
00:28:21.708   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:21.708   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:21.708   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:28:21.708   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:28:21.708   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmEwN2EwNWJiYTQxMGJiYWY2ZGFlNGFiMmIyZDllYTJjMDhhYzBmOTY2MjY1MWM1Xl2EhQ==:
00:28:21.708   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2VhNGNkNDM1NWU4ZjkyMWZlYmM0NDIzNGM0ZDVlOTWK7Yms:
00:28:21.708   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:21.708   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:28:21.708   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmEwN2EwNWJiYTQxMGJiYWY2ZGFlNGFiMmIyZDllYTJjMDhhYzBmOTY2MjY1MWM1Xl2EhQ==:
00:28:21.709   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2VhNGNkNDM1NWU4ZjkyMWZlYmM0NDIzNGM0ZDVlOTWK7Yms: ]]
00:28:21.709   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2VhNGNkNDM1NWU4ZjkyMWZlYmM0NDIzNGM0ZDVlOTWK7Yms:
00:28:21.709   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3
00:28:21.709   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:21.709   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:21.709   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:28:21.709   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:28:21.709   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:21.709   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:28:21.709   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:21.709   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:21.709   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:21.709    19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:21.709    19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:21.709    19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:21.709    19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:21.709    19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:21.709    19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:21.709    19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:21.709    19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:21.709    19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:21.709    19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:21.709    19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:21.709   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:28:21.709   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:21.709   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:21.709  nvme0n1
00:28:21.709   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:21.709    19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:21.709    19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:21.709    19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:21.709    19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:21.709    19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:21.990   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:21.990   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:21.990   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:21.990   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:21.990   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:21.990   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:21.990   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4
00:28:21.990   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:21.990   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:21.990   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:28:21.990   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:28:21.990   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjg4Y2YyZTQzZjZkZDVmZGE5YTBkNWFkYzA0YTg0NTk5ZmVmNjMyZDhmMGM2MTc1OWE0MDk3NGE5MDFmYjM3YcCJh5o=:
00:28:21.990   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:28:21.990   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:21.990   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:28:21.990   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjg4Y2YyZTQzZjZkZDVmZGE5YTBkNWFkYzA0YTg0NTk5ZmVmNjMyZDhmMGM2MTc1OWE0MDk3NGE5MDFmYjM3YcCJh5o=:
00:28:21.990   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:28:21.990   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4
00:28:21.990   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:21.990   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:21.990   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:28:21.990   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:28:21.990   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:21.991   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:28:21.991   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:21.991   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:21.991   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:21.991    19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:21.991    19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:21.991    19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:21.991    19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:21.991    19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:21.991    19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:21.991    19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:21.991    19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:21.991    19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:21.991    19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:21.991    19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:21.991   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:28:21.991   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:21.991   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:21.991  nvme0n1
00:28:21.991   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:21.991    19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:21.991    19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:21.991    19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:21.991    19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:21.991    19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:22.256   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:22.256   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:22.256   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:22.256   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:22.256   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
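[editor's note] From here the outer loop at host/auth.sh@101 moves on to the next DH group. Within this excerpt the same five key ids are replayed for ffdhe2048, ffdhe3072, ffdhe4096 and, below, ffdhe6144; a plausible shape of the arrays driving the loops (a reconstruction, the full script may list additional groups):
  dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144)   # groups observed in this excerpt
  keys=("$key0" "$key1" "$key2" "$key3" "$key4")        # keyids 0..4; only 0..3 have paired ckeys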
00:28:22.256   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:28:22.256   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:22.256   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0
00:28:22.256   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:22.256   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:22.256   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:28:22.256   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:28:22.256   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWY0ZWMwMWIyM2E0OGFhMzQ5YTM4Njg2N2Y0MDEwZDXoDg5D:
00:28:22.256   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGI0OTkzMDNkOGU1NjE5ZmZjNjJmNTYyMzFlZWQ3ZDIwZWY4OTc1ZWM1MjdkMGZiZDNhNjkxMTg5YmJiNWQ2ONlaKSg=:
00:28:22.256   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:22.256   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:28:22.256   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWY0ZWMwMWIyM2E0OGFhMzQ5YTM4Njg2N2Y0MDEwZDXoDg5D:
00:28:22.256   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGI0OTkzMDNkOGU1NjE5ZmZjNjJmNTYyMzFlZWQ3ZDIwZWY4OTc1ZWM1MjdkMGZiZDNhNjkxMTg5YmJiNWQ2ONlaKSg=: ]]
00:28:22.256   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGI0OTkzMDNkOGU1NjE5ZmZjNjJmNTYyMzFlZWQ3ZDIwZWY4OTc1ZWM1MjdkMGZiZDNhNjkxMTg5YmJiNWQ2ONlaKSg=:
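Right before each connect, the host/auth.sh@42-51 block (nvmet_auth_set_key) programs the kernel target side with the same digest, DH group and key. The trace only shows the echoed values, not where they are redirected; the sketch below assumes the usual nvmet configfs attributes under the test host NQN, which is an assumption and not something visible in this log.

```bash
# Sketch of nvmet_auth_set_key based on the host/auth.sh@42-51 trace lines.
# The configfs destinations are assumed (CONFIG_NVME_TARGET_AUTH host attrs);
# only the echoed values appear in the log.
nvmet_auth_set_key() {
	local digest=$1 dhgroup=$2 keyid=$3
	local key=${keys[keyid]} ckey=${ckeys[keyid]}
	local host_cfg=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

	echo "hmac(${digest})" > "${host_cfg}/dhchap_hash"
	echo "$dhgroup"        > "${host_cfg}/dhchap_dhgroup"
	echo "$key"            > "${host_cfg}/dhchap_key"
	# Controller (bidirectional) key is optional; key id 4 has none in this run.
	if [[ -n "$ckey" ]]; then
		echo "$ckey" > "${host_cfg}/dhchap_ctrl_key"
	fi
}
```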
00:28:22.256   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0
00:28:22.256   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:22.256   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:22.256   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:28:22.256   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:28:22.256   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:22.256   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:28:22.256   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:22.256   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:22.256   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:22.256    19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:22.256    19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:22.256    19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:22.256    19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:22.256    19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:22.256    19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:22.256    19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:22.256    19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:22.256    19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:22.256    19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:22.256    19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:22.256   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:28:22.256   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:22.256   19:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:22.515  nvme0n1
00:28:22.515   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:22.515    19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:22.515    19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:22.515    19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:22.515    19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:22.515    19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:22.515   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:22.515   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:22.515   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:22.515   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:22.515   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:22.515   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:22.515   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1
00:28:22.515   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:22.515   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:22.515   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:28:22.515   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:28:22.515   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWU1ZmJlMGYwN2Y3MGU3NTFhN2NjZjY4M2ZkNGRkMGZkOWRmNmQ5YmRjNDJhMTk3lr1K/g==:
00:28:22.515   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjViMDBhMGNhZmUwOTk5NTFmZjNlYzEzYzhmODYxODc3YTAyN2ZhODdhODE1OTg0SHWihg==:
00:28:22.515   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:22.515   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:28:22.515   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWU1ZmJlMGYwN2Y3MGU3NTFhN2NjZjY4M2ZkNGRkMGZkOWRmNmQ5YmRjNDJhMTk3lr1K/g==:
00:28:22.515   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjViMDBhMGNhZmUwOTk5NTFmZjNlYzEzYzhmODYxODc3YTAyN2ZhODdhODE1OTg0SHWihg==: ]]
00:28:22.515   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjViMDBhMGNhZmUwOTk5NTFmZjNlYzEzYzhmODYxODc3YTAyN2ZhODdhODE1OTg0SHWihg==:
00:28:22.515   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1
00:28:22.515   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:22.515   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:22.515   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:28:22.515   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:28:22.515   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:22.515   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:28:22.515   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:22.515   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:22.515   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:22.515    19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:22.515    19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:22.515    19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:22.515    19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:22.515    19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:22.515    19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:22.515    19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:22.515    19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:22.515    19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:22.515    19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:22.515    19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:22.515   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:28:22.515   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:22.515   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:22.773  nvme0n1
00:28:22.773   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:22.773    19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:22.773    19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:22.773    19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:22.773    19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:22.773    19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:22.773   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:22.773   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:22.773   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:22.773   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:22.773   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
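The indented nvmf/common.sh@769-783 lines that precede every attach are the get_main_ns_ip helper resolving which address to dial; in this run the tcp transport maps to NVMF_INITIATOR_IP, which expands to 10.0.0.1. A reconstruction from the trace follows; the error branches are never taken in this log, so their bodies are assumptions.

```bash
# Sketch of get_main_ns_ip as implied by the nvmf/common.sh@769-783 trace.
# TEST_TRANSPORT, NVMF_INITIATOR_IP and NVMF_FIRST_TARGET_IP are set elsewhere
# by the test environment (tcp / 10.0.0.1 in this log).
get_main_ns_ip() {
	local ip
	local -A ip_candidates=()
	ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # variable *name* to use per transport
	ip_candidates["tcp"]=NVMF_INITIATOR_IP

	if [[ -z "$TEST_TRANSPORT" || -z "${ip_candidates[$TEST_TRANSPORT]}" ]]; then
		echo "Unsupported transport: $TEST_TRANSPORT" >&2
		return 1
	fi
	ip=${ip_candidates[$TEST_TRANSPORT]}
	ip=${!ip}                                    # indirect expansion: name -> address
	if [[ -z "$ip" ]]; then
		echo "No address configured for $TEST_TRANSPORT" >&2
		return 1
	fi
	echo "$ip"                                   # 10.0.0.1 here
}
```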
00:28:22.773   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:22.773   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2
00:28:22.773   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:22.773   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:22.773   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:28:22.773   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:28:22.773   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWQ2YWNiMmZiNmY4NGFjNzZiMTMwOWRiZjZhNGJiZDOyK4DQ:
00:28:22.773   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTY0Y2VjMzFlYjUyODQyZjc5ZjEzMjU2MDZkZmYxZjh8OXnd:
00:28:22.773   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:22.773   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:28:22.773   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWQ2YWNiMmZiNmY4NGFjNzZiMTMwOWRiZjZhNGJiZDOyK4DQ:
00:28:22.773   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTY0Y2VjMzFlYjUyODQyZjc5ZjEzMjU2MDZkZmYxZjh8OXnd: ]]
00:28:22.773   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTY0Y2VjMzFlYjUyODQyZjc5ZjEzMjU2MDZkZmYxZjh8OXnd:
00:28:22.773   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2
00:28:22.773   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:22.773   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:22.773   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:28:22.773   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:28:22.773   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:22.773   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:28:22.773   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:22.773   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:23.031   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:23.031    19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:23.031    19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:23.031    19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:23.031    19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:23.031    19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:23.031    19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:23.031    19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:23.031    19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:23.031    19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:23.031    19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:23.031    19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:23.031   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:28:23.031   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:23.031   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:23.289  nvme0n1
00:28:23.289   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:23.289    19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:23.289    19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:23.289    19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:23.289    19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:23.289    19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:23.289   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:23.289   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:23.289   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:23.289   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:23.289   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:23.289   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:23.289   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3
00:28:23.289   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:23.289   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:23.289   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:28:23.289   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:28:23.289   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmEwN2EwNWJiYTQxMGJiYWY2ZGFlNGFiMmIyZDllYTJjMDhhYzBmOTY2MjY1MWM1Xl2EhQ==:
00:28:23.289   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2VhNGNkNDM1NWU4ZjkyMWZlYmM0NDIzNGM0ZDVlOTWK7Yms:
00:28:23.289   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:23.289   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:28:23.289   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmEwN2EwNWJiYTQxMGJiYWY2ZGFlNGFiMmIyZDllYTJjMDhhYzBmOTY2MjY1MWM1Xl2EhQ==:
00:28:23.289   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2VhNGNkNDM1NWU4ZjkyMWZlYmM0NDIzNGM0ZDVlOTWK7Yms: ]]
00:28:23.289   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2VhNGNkNDM1NWU4ZjkyMWZlYmM0NDIzNGM0ZDVlOTWK7Yms:
00:28:23.289   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3
00:28:23.289   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:23.289   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:23.289   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:28:23.289   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:28:23.289   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:23.289   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:28:23.289   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:23.289   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:23.289   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:23.289    19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:23.289    19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:23.289    19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:23.289    19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:23.289    19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:23.289    19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:23.289    19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:23.289    19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:23.289    19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:23.289    19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:23.289    19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:23.289   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:28:23.289   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:23.289   19:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:23.547  nvme0n1
00:28:23.547   19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:23.547    19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:23.547    19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:23.547    19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:23.547    19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:23.547    19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:23.547   19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:23.547   19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:23.547   19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:23.547   19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:23.547   19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:23.547   19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:23.547   19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4
00:28:23.547   19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:23.547   19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:23.547   19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:28:23.547   19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:28:23.547   19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjg4Y2YyZTQzZjZkZDVmZGE5YTBkNWFkYzA0YTg0NTk5ZmVmNjMyZDhmMGM2MTc1OWE0MDk3NGE5MDFmYjM3YcCJh5o=:
00:28:23.547   19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:28:23.547   19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:23.547   19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:28:23.547   19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjg4Y2YyZTQzZjZkZDVmZGE5YTBkNWFkYzA0YTg0NTk5ZmVmNjMyZDhmMGM2MTc1OWE0MDk3NGE5MDFmYjM3YcCJh5o=:
00:28:23.547   19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:28:23.547   19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4
00:28:23.547   19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:23.547   19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:23.547   19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:28:23.547   19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:28:23.547   19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:23.547   19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:28:23.547   19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:23.547   19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:23.547   19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:23.547    19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:23.547    19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:23.547    19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:23.547    19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:23.547    19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:23.547    19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:23.547    19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:23.547    19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:23.547    19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:23.547    19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:23.547    19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:23.547   19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:28:23.547   19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:23.547   19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:24.113  nvme0n1
00:28:24.113   19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:24.113    19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:24.113    19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:24.113    19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:24.113    19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:24.113    19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:24.113   19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:24.113   19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:24.113   19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:24.113   19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:24.113   19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
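Two details worth calling out from the sha384/ffdhe6144 sweep that just finished: key id 4 has no controller key (ckey is empty at auth.sh@46), so the ckey array expands to nothing and the attach at auth.sh@61 runs without --dhchap-ctrlr-key, i.e. host-only (unidirectional) authentication; and the DHHC-1 strings follow the NVMe DH-HMAC-CHAP secret representation, "DHHC-1:<transform>:<base64 secret + 4-byte CRC-32>:". A quick check against one key from this log, assuming that representation:

```bash
# Decode the key id 0 secret from this log: 48 base64 chars -> 36 bytes,
# of which the trailing 4 bytes are the CRC-32, leaving a 32-byte secret.
key='DHHC-1:00:ZWY0ZWMwMWIyM2E0OGFhMzQ5YTM4Njg2N2Y0MDEwZDXoDg5D:'
payload=$(cut -d: -f3 <<< "$key")
printf 'secret length: %d bytes\n' $(( $(base64 -d <<< "$payload" | wc -c) - 4 ))
# -> secret length: 32 bytes
```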
00:28:24.113   19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:28:24.113   19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:24.113   19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0
00:28:24.113   19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:24.113   19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:24.113   19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:28:24.113   19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:28:24.113   19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWY0ZWMwMWIyM2E0OGFhMzQ5YTM4Njg2N2Y0MDEwZDXoDg5D:
00:28:24.113   19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGI0OTkzMDNkOGU1NjE5ZmZjNjJmNTYyMzFlZWQ3ZDIwZWY4OTc1ZWM1MjdkMGZiZDNhNjkxMTg5YmJiNWQ2ONlaKSg=:
00:28:24.113   19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:24.113   19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:28:24.113   19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWY0ZWMwMWIyM2E0OGFhMzQ5YTM4Njg2N2Y0MDEwZDXoDg5D:
00:28:24.114   19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGI0OTkzMDNkOGU1NjE5ZmZjNjJmNTYyMzFlZWQ3ZDIwZWY4OTc1ZWM1MjdkMGZiZDNhNjkxMTg5YmJiNWQ2ONlaKSg=: ]]
00:28:24.114   19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGI0OTkzMDNkOGU1NjE5ZmZjNjJmNTYyMzFlZWQ3ZDIwZWY4OTc1ZWM1MjdkMGZiZDNhNjkxMTg5YmJiNWQ2ONlaKSg=:
00:28:24.114   19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0
00:28:24.114   19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:24.114   19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:24.114   19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:28:24.114   19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:28:24.114   19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:24.114   19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:28:24.114   19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:24.114   19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:24.114   19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:24.114    19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:24.114    19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:24.114    19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:24.114    19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:24.114    19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:24.114    19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:24.114    19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:24.114    19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:24.114    19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:24.114    19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:24.114    19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:24.114   19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:28:24.114   19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:24.114   19:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:24.681  nvme0n1
00:28:24.681   19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:24.681    19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:24.681    19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:24.681    19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:24.681    19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:24.681    19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:24.681   19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:24.681   19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:24.681   19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:24.681   19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:24.681   19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:24.681   19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:24.681   19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1
00:28:24.681   19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:24.681   19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:24.681   19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:28:24.681   19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:28:24.681   19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWU1ZmJlMGYwN2Y3MGU3NTFhN2NjZjY4M2ZkNGRkMGZkOWRmNmQ5YmRjNDJhMTk3lr1K/g==:
00:28:24.681   19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjViMDBhMGNhZmUwOTk5NTFmZjNlYzEzYzhmODYxODc3YTAyN2ZhODdhODE1OTg0SHWihg==:
00:28:24.681   19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:24.681   19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:28:24.681   19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWU1ZmJlMGYwN2Y3MGU3NTFhN2NjZjY4M2ZkNGRkMGZkOWRmNmQ5YmRjNDJhMTk3lr1K/g==:
00:28:24.681   19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjViMDBhMGNhZmUwOTk5NTFmZjNlYzEzYzhmODYxODc3YTAyN2ZhODdhODE1OTg0SHWihg==: ]]
00:28:24.681   19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjViMDBhMGNhZmUwOTk5NTFmZjNlYzEzYzhmODYxODc3YTAyN2ZhODdhODE1OTg0SHWihg==:
00:28:24.681   19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1
00:28:24.681   19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:24.681   19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:24.681   19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:28:24.681   19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:28:24.681   19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:24.681   19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:28:24.681   19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:24.681   19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:24.681   19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:24.681    19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:24.681    19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:24.681    19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:24.681    19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:24.681    19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:24.681    19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:24.681    19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:24.681    19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:24.681    19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:24.681    19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:24.681    19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:24.681   19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:28:24.681   19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:24.681   19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:25.249  nvme0n1
00:28:25.249   19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:25.249    19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:25.249    19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:25.249    19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:25.249    19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:25.249    19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:25.249   19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:25.249   19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:25.249   19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:25.249   19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:25.249   19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:25.249   19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:25.249   19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2
00:28:25.249   19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:25.249   19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:25.249   19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:28:25.249   19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:28:25.249   19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWQ2YWNiMmZiNmY4NGFjNzZiMTMwOWRiZjZhNGJiZDOyK4DQ:
00:28:25.249   19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTY0Y2VjMzFlYjUyODQyZjc5ZjEzMjU2MDZkZmYxZjh8OXnd:
00:28:25.249   19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:25.249   19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:28:25.249   19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWQ2YWNiMmZiNmY4NGFjNzZiMTMwOWRiZjZhNGJiZDOyK4DQ:
00:28:25.249   19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTY0Y2VjMzFlYjUyODQyZjc5ZjEzMjU2MDZkZmYxZjh8OXnd: ]]
00:28:25.249   19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTY0Y2VjMzFlYjUyODQyZjc5ZjEzMjU2MDZkZmYxZjh8OXnd:
00:28:25.249   19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2
00:28:25.249   19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:25.249   19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:25.249   19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:28:25.249   19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:28:25.249   19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:25.249   19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:28:25.249   19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:25.249   19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:25.249   19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:25.249    19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:25.249    19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:25.249    19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:25.249    19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:25.249    19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:25.249    19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:25.249    19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:25.249    19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:25.249    19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:25.249    19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:25.249    19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:25.249   19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:28:25.249   19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:25.249   19:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:25.817  nvme0n1
00:28:25.817   19:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:25.817    19:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:25.817    19:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:25.817    19:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:25.817    19:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:25.817    19:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:25.817   19:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:25.817   19:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:25.817   19:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:25.817   19:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:25.817   19:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:25.817   19:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:25.817   19:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3
00:28:25.817   19:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:25.817   19:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:25.817   19:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:28:25.817   19:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:28:25.817   19:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmEwN2EwNWJiYTQxMGJiYWY2ZGFlNGFiMmIyZDllYTJjMDhhYzBmOTY2MjY1MWM1Xl2EhQ==:
00:28:25.817   19:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2VhNGNkNDM1NWU4ZjkyMWZlYmM0NDIzNGM0ZDVlOTWK7Yms:
00:28:25.817   19:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:25.817   19:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:28:25.817   19:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmEwN2EwNWJiYTQxMGJiYWY2ZGFlNGFiMmIyZDllYTJjMDhhYzBmOTY2MjY1MWM1Xl2EhQ==:
00:28:25.817   19:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2VhNGNkNDM1NWU4ZjkyMWZlYmM0NDIzNGM0ZDVlOTWK7Yms: ]]
00:28:25.817   19:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2VhNGNkNDM1NWU4ZjkyMWZlYmM0NDIzNGM0ZDVlOTWK7Yms:
00:28:25.817   19:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3
00:28:25.817   19:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:25.817   19:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:25.817   19:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:28:25.817   19:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:28:25.817   19:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:25.817   19:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:28:25.817   19:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:25.817   19:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:25.817   19:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:25.817    19:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:25.817    19:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:25.817    19:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:25.817    19:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:25.817    19:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:25.817    19:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:25.817    19:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:25.817    19:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:25.817    19:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:25.817    19:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:25.817    19:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:25.817   19:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:28:25.817   19:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:25.817   19:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:26.384  nvme0n1
00:28:26.384   19:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:26.384    19:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:26.384    19:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:26.384    19:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:26.384    19:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:26.384    19:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:26.384   19:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:26.384   19:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:26.384   19:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:26.384   19:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:26.384   19:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:26.384   19:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:26.385   19:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4
00:28:26.385   19:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:26.385   19:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:26.385   19:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:28:26.385   19:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:28:26.385   19:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjg4Y2YyZTQzZjZkZDVmZGE5YTBkNWFkYzA0YTg0NTk5ZmVmNjMyZDhmMGM2MTc1OWE0MDk3NGE5MDFmYjM3YcCJh5o=:
00:28:26.385   19:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:28:26.385   19:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:26.385   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:28:26.385   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjg4Y2YyZTQzZjZkZDVmZGE5YTBkNWFkYzA0YTg0NTk5ZmVmNjMyZDhmMGM2MTc1OWE0MDk3NGE5MDFmYjM3YcCJh5o=:
00:28:26.385   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:28:26.385   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4
00:28:26.385   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:26.385   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:26.385   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:28:26.385   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:28:26.385   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:26.385   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:28:26.385   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:26.385   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:26.385   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:26.385    19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:26.385    19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:26.385    19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:26.385    19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:26.385    19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:26.385    19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:26.385    19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:26.385    19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:26.385    19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:26.385    19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:26.385    19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:26.385   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:28:26.385   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:26.385   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:26.953  nvme0n1
00:28:26.953   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:26.953    19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:26.953    19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:26.953    19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:26.953    19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:26.953    19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:26.953   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:26.953   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:26.953   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:26.953   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:26.953   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
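The sha384 digest is now done; the auth.sh@100-102 lines that follow advance the outer loops to sha512 and restart the DH-group sweep at ffdhe2048. The nesting implied by the trace is a full cross-product of digests, DH groups and key ids, roughly as sketched below (only values visible in this excerpt are listed; earlier digests and groups ran before this point):

```bash
# Loop structure implied by host/auth.sh@100-104. Array contents here are only
# what this excerpt shows; the real script likely covers more digests/groups.
digests=(sha384 sha512)
dhgroups=(ffdhe2048 ffdhe4096 ffdhe6144 ffdhe8192)
for digest in "${digests[@]}"; do
	for dhgroup in "${dhgroups[@]}"; do
		for keyid in "${!keys[@]}"; do          # key ids 0..4 in this run
			nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
			connect_authenticate "$digest" "$dhgroup" "$keyid"
		done
	done
done
```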
00:28:26.953   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:28:26.953   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:28:26.953   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:26.953   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0
00:28:26.953   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:26.953   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:28:26.953   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:28:26.953   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:28:26.953   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWY0ZWMwMWIyM2E0OGFhMzQ5YTM4Njg2N2Y0MDEwZDXoDg5D:
00:28:26.953   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGI0OTkzMDNkOGU1NjE5ZmZjNjJmNTYyMzFlZWQ3ZDIwZWY4OTc1ZWM1MjdkMGZiZDNhNjkxMTg5YmJiNWQ2ONlaKSg=:
00:28:26.953   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:28:26.953   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:28:26.953   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWY0ZWMwMWIyM2E0OGFhMzQ5YTM4Njg2N2Y0MDEwZDXoDg5D:
00:28:26.953   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGI0OTkzMDNkOGU1NjE5ZmZjNjJmNTYyMzFlZWQ3ZDIwZWY4OTc1ZWM1MjdkMGZiZDNhNjkxMTg5YmJiNWQ2ONlaKSg=: ]]
00:28:26.953   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGI0OTkzMDNkOGU1NjE5ZmZjNjJmNTYyMzFlZWQ3ZDIwZWY4OTc1ZWM1MjdkMGZiZDNhNjkxMTg5YmJiNWQ2ONlaKSg=:
00:28:26.953   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0
00:28:26.953   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:26.953   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:28:26.953   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:28:26.953   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:28:26.953   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:26.953   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:28:26.953   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:26.953   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:26.953   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:26.953    19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:26.953    19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:26.953    19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:26.953    19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:26.953    19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:26.953    19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:26.953    19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:26.953    19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:26.953    19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:26.953    19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:26.953    19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:26.953   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:28:26.953   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:26.953   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:26.953  nvme0n1
00:28:26.953   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:26.953    19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:26.953    19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:26.953    19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:26.953    19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:26.953    19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:26.953   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:26.953   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:26.953   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:26.953   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:26.953   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:26.953   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:26.953   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1
00:28:26.953   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:26.953   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:28:26.953   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:28:26.953   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:28:26.953   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWU1ZmJlMGYwN2Y3MGU3NTFhN2NjZjY4M2ZkNGRkMGZkOWRmNmQ5YmRjNDJhMTk3lr1K/g==:
00:28:26.953   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjViMDBhMGNhZmUwOTk5NTFmZjNlYzEzYzhmODYxODc3YTAyN2ZhODdhODE1OTg0SHWihg==:
00:28:26.953   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:28:26.953   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:28:26.953   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWU1ZmJlMGYwN2Y3MGU3NTFhN2NjZjY4M2ZkNGRkMGZkOWRmNmQ5YmRjNDJhMTk3lr1K/g==:
00:28:26.953   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjViMDBhMGNhZmUwOTk5NTFmZjNlYzEzYzhmODYxODc3YTAyN2ZhODdhODE1OTg0SHWihg==: ]]
00:28:26.953   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjViMDBhMGNhZmUwOTk5NTFmZjNlYzEzYzhmODYxODc3YTAyN2ZhODdhODE1OTg0SHWihg==:
00:28:26.953   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1
00:28:26.953   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:26.953   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:28:26.953   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:28:26.953   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:28:26.953   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:26.953   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:28:26.953   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:26.953   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:26.953   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:26.953    19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:26.953    19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:26.953    19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:26.953    19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:26.953    19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:26.953    19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:26.953    19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:26.953    19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:26.953    19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:26.953    19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:26.953    19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:26.953   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:28:26.953   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:26.953   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:27.212  nvme0n1
00:28:27.212   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:27.213    19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:27.213    19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:27.213    19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:27.213    19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:27.213    19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:27.213   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:27.213   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:27.213   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:27.213   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:27.213   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:27.213   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:27.213   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2
00:28:27.213   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:27.213   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:28:27.213   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:28:27.213   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:28:27.213   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWQ2YWNiMmZiNmY4NGFjNzZiMTMwOWRiZjZhNGJiZDOyK4DQ:
00:28:27.213   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTY0Y2VjMzFlYjUyODQyZjc5ZjEzMjU2MDZkZmYxZjh8OXnd:
00:28:27.213   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:28:27.213   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:28:27.213   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWQ2YWNiMmZiNmY4NGFjNzZiMTMwOWRiZjZhNGJiZDOyK4DQ:
00:28:27.213   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTY0Y2VjMzFlYjUyODQyZjc5ZjEzMjU2MDZkZmYxZjh8OXnd: ]]
00:28:27.213   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTY0Y2VjMzFlYjUyODQyZjc5ZjEzMjU2MDZkZmYxZjh8OXnd:
00:28:27.213   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2
00:28:27.213   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:27.213   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:28:27.213   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:28:27.213   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:28:27.213   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:27.213   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:28:27.213   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:27.213   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:27.213   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:27.213    19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:27.213    19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:27.213    19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:27.213    19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:27.213    19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:27.213    19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:27.213    19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:27.213    19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:27.213    19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:27.213    19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:27.213    19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:27.213   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:28:27.213   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:27.213   19:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:27.213  nvme0n1
00:28:27.213   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:27.472    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:27.472    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:27.472    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:27.472    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:27.472    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:27.472   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:27.472   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:27.472   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:27.472   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:27.472   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:27.472   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:27.472   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3
00:28:27.472   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:27.472   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:28:27.472   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:28:27.472   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:28:27.472   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmEwN2EwNWJiYTQxMGJiYWY2ZGFlNGFiMmIyZDllYTJjMDhhYzBmOTY2MjY1MWM1Xl2EhQ==:
00:28:27.472   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2VhNGNkNDM1NWU4ZjkyMWZlYmM0NDIzNGM0ZDVlOTWK7Yms:
00:28:27.472   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:28:27.472   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:28:27.472   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmEwN2EwNWJiYTQxMGJiYWY2ZGFlNGFiMmIyZDllYTJjMDhhYzBmOTY2MjY1MWM1Xl2EhQ==:
00:28:27.472   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2VhNGNkNDM1NWU4ZjkyMWZlYmM0NDIzNGM0ZDVlOTWK7Yms: ]]
00:28:27.472   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2VhNGNkNDM1NWU4ZjkyMWZlYmM0NDIzNGM0ZDVlOTWK7Yms:
00:28:27.472   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3
00:28:27.472   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:27.472   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:28:27.472   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:28:27.472   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:28:27.472   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:27.472   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:28:27.472   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:27.472   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:27.472   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:27.472    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:27.473    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:27.473    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:27.473    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:27.473    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:27.473    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:27.473    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:27.473    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:27.473    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:27.473    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:27.473    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:27.473   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:28:27.473   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:27.473   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:27.473  nvme0n1
00:28:27.473   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:27.473    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:27.473    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:27.473    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:27.473    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:27.473    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:27.473   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:27.473   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:27.473   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:27.473   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:27.473   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:27.473   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:27.473   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4
00:28:27.473   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:27.473   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:28:27.473   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:28:27.473   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:28:27.473   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjg4Y2YyZTQzZjZkZDVmZGE5YTBkNWFkYzA0YTg0NTk5ZmVmNjMyZDhmMGM2MTc1OWE0MDk3NGE5MDFmYjM3YcCJh5o=:
00:28:27.473   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:28:27.473   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:28:27.473   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:28:27.473   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjg4Y2YyZTQzZjZkZDVmZGE5YTBkNWFkYzA0YTg0NTk5ZmVmNjMyZDhmMGM2MTc1OWE0MDk3NGE5MDFmYjM3YcCJh5o=:
00:28:27.473   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:28:27.473   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4
00:28:27.473   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:27.473   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:28:27.473   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:28:27.473   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:28:27.473   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:27.473   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:28:27.473   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:27.473   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:27.732   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:27.732    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:27.732    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:27.732    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:27.732    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:27.732    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:27.732    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:27.732    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:27.732    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:27.732    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:27.732    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:27.732    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:27.732   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:28:27.732   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:27.732   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:27.732  nvme0n1
00:28:27.732   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:27.732    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:27.732    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:27.732    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:27.732    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:27.732    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:27.732   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:27.732   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:27.732   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:27.732   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:27.732   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:27.732   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:28:27.732   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:27.732   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0
00:28:27.732   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:27.732   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:28:27.732   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:28:27.732   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:28:27.732   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWY0ZWMwMWIyM2E0OGFhMzQ5YTM4Njg2N2Y0MDEwZDXoDg5D:
00:28:27.732   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGI0OTkzMDNkOGU1NjE5ZmZjNjJmNTYyMzFlZWQ3ZDIwZWY4OTc1ZWM1MjdkMGZiZDNhNjkxMTg5YmJiNWQ2ONlaKSg=:
00:28:27.732   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:28:27.732   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:28:27.732   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWY0ZWMwMWIyM2E0OGFhMzQ5YTM4Njg2N2Y0MDEwZDXoDg5D:
00:28:27.732   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGI0OTkzMDNkOGU1NjE5ZmZjNjJmNTYyMzFlZWQ3ZDIwZWY4OTc1ZWM1MjdkMGZiZDNhNjkxMTg5YmJiNWQ2ONlaKSg=: ]]
00:28:27.732   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGI0OTkzMDNkOGU1NjE5ZmZjNjJmNTYyMzFlZWQ3ZDIwZWY4OTc1ZWM1MjdkMGZiZDNhNjkxMTg5YmJiNWQ2ONlaKSg=:
00:28:27.732   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0
00:28:27.732   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:27.732   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:28:27.732   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:28:27.732   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:28:27.732   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:27.732   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:28:27.732   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:27.732   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:27.732   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:27.732    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:27.732    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:27.732    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:27.732    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:27.732    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:27.732    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:27.732    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:27.732    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:27.732    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:27.732    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:27.732    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:27.732   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:28:27.732   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:27.732   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:27.992  nvme0n1
00:28:27.992   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:27.992    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:27.992    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:27.992    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:27.992    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:27.992    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:27.992   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:27.992   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:27.992   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:27.992   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:27.992   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:27.992   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:27.992   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1
00:28:27.992   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:27.992   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:28:27.992   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:28:27.992   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:28:27.992   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWU1ZmJlMGYwN2Y3MGU3NTFhN2NjZjY4M2ZkNGRkMGZkOWRmNmQ5YmRjNDJhMTk3lr1K/g==:
00:28:27.992   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjViMDBhMGNhZmUwOTk5NTFmZjNlYzEzYzhmODYxODc3YTAyN2ZhODdhODE1OTg0SHWihg==:
00:28:27.992   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:28:27.992   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:28:27.992   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWU1ZmJlMGYwN2Y3MGU3NTFhN2NjZjY4M2ZkNGRkMGZkOWRmNmQ5YmRjNDJhMTk3lr1K/g==:
00:28:27.992   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjViMDBhMGNhZmUwOTk5NTFmZjNlYzEzYzhmODYxODc3YTAyN2ZhODdhODE1OTg0SHWihg==: ]]
00:28:27.992   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjViMDBhMGNhZmUwOTk5NTFmZjNlYzEzYzhmODYxODc3YTAyN2ZhODdhODE1OTg0SHWihg==:
00:28:27.992   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1
00:28:27.992   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:27.992   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:28:27.992   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:28:27.992   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:28:27.992   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:27.992   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:28:27.992   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:27.992   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:27.992   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:27.992    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:27.992    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:27.992    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:27.992    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:27.992    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:27.992    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:27.992    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:27.992    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:27.992    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:27.992    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:27.992    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:27.992   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:28:27.992   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:27.992   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:27.992  nvme0n1
00:28:27.992   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:27.992    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:27.992    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:27.992    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:27.992    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:28.251    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:28.251   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:28.251   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:28.251   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:28.251   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:28.251   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:28.251   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:28.251   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2
00:28:28.251   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:28.251   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:28:28.251   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:28:28.251   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:28:28.251   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWQ2YWNiMmZiNmY4NGFjNzZiMTMwOWRiZjZhNGJiZDOyK4DQ:
00:28:28.251   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTY0Y2VjMzFlYjUyODQyZjc5ZjEzMjU2MDZkZmYxZjh8OXnd:
00:28:28.251   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:28:28.251   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:28:28.251   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWQ2YWNiMmZiNmY4NGFjNzZiMTMwOWRiZjZhNGJiZDOyK4DQ:
00:28:28.251   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTY0Y2VjMzFlYjUyODQyZjc5ZjEzMjU2MDZkZmYxZjh8OXnd: ]]
00:28:28.251   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTY0Y2VjMzFlYjUyODQyZjc5ZjEzMjU2MDZkZmYxZjh8OXnd:
00:28:28.251   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2
00:28:28.251   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:28.251   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:28:28.251   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:28:28.251   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:28:28.251   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:28.251   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:28:28.251   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:28.251   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:28.251   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:28.251    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:28.251    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:28.251    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:28.251    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:28.251    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:28.251    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:28.251    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:28.251    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:28.251    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:28.251    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:28.251    19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:28.252   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:28:28.252   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:28.252   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:28.252  nvme0n1
00:28:28.252   19:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:28.252    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:28.252    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:28.252    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:28.252    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:28.252    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:28.252   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:28.252   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:28.252   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:28.252   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:28.252   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:28.252   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:28.252   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3
00:28:28.252   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:28.252   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:28:28.252   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:28:28.252   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:28:28.252   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmEwN2EwNWJiYTQxMGJiYWY2ZGFlNGFiMmIyZDllYTJjMDhhYzBmOTY2MjY1MWM1Xl2EhQ==:
00:28:28.252   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2VhNGNkNDM1NWU4ZjkyMWZlYmM0NDIzNGM0ZDVlOTWK7Yms:
00:28:28.252   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:28:28.252   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:28:28.252   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmEwN2EwNWJiYTQxMGJiYWY2ZGFlNGFiMmIyZDllYTJjMDhhYzBmOTY2MjY1MWM1Xl2EhQ==:
00:28:28.252   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2VhNGNkNDM1NWU4ZjkyMWZlYmM0NDIzNGM0ZDVlOTWK7Yms: ]]
00:28:28.252   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2VhNGNkNDM1NWU4ZjkyMWZlYmM0NDIzNGM0ZDVlOTWK7Yms:
00:28:28.511   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3
00:28:28.511   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:28.511   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:28:28.511   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:28:28.511   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:28:28.511   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:28.511   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:28:28.511   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:28.511   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:28.511   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:28.511    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:28.511    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:28.511    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:28.511    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:28.511    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:28.511    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:28.511    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:28.511    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:28.511    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:28.511    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:28.511    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:28.511   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:28:28.511   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:28.511   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:28.511  nvme0n1
00:28:28.511   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:28.511    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:28.511    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:28.511    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:28.511    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:28.511    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:28.511   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:28.511   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:28.511   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:28.511   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:28.511   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:28.511   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:28.511   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4
00:28:28.511   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:28.511   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:28:28.511   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:28:28.511   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:28:28.511   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjg4Y2YyZTQzZjZkZDVmZGE5YTBkNWFkYzA0YTg0NTk5ZmVmNjMyZDhmMGM2MTc1OWE0MDk3NGE5MDFmYjM3YcCJh5o=:
00:28:28.511   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:28:28.511   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:28:28.511   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:28:28.511   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjg4Y2YyZTQzZjZkZDVmZGE5YTBkNWFkYzA0YTg0NTk5ZmVmNjMyZDhmMGM2MTc1OWE0MDk3NGE5MDFmYjM3YcCJh5o=:
00:28:28.511   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:28:28.511   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4
00:28:28.511   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:28.511   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:28:28.511   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:28:28.511   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:28:28.512   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:28.512   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:28:28.512   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:28.512   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:28.512   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:28.512    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:28.512    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:28.512    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:28.512    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:28.512    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:28.512    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:28.512    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:28.512    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:28.512    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:28.512    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:28.512    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:28.512   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:28:28.512   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:28.512   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:28.771  nvme0n1
00:28:28.771   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:28.771    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:28.771    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:28.771    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:28.771    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:28.771    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:28.771   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:28.771   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:28.771   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:28.771   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:28.771   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:28.771   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:28:28.771   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:28.771   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0
00:28:28.771   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:28.771   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:28:28.771   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:28:28.771   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:28:28.771   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWY0ZWMwMWIyM2E0OGFhMzQ5YTM4Njg2N2Y0MDEwZDXoDg5D:
00:28:28.771   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGI0OTkzMDNkOGU1NjE5ZmZjNjJmNTYyMzFlZWQ3ZDIwZWY4OTc1ZWM1MjdkMGZiZDNhNjkxMTg5YmJiNWQ2ONlaKSg=:
00:28:28.771   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:28:28.771   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:28:28.771   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWY0ZWMwMWIyM2E0OGFhMzQ5YTM4Njg2N2Y0MDEwZDXoDg5D:
00:28:28.771   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGI0OTkzMDNkOGU1NjE5ZmZjNjJmNTYyMzFlZWQ3ZDIwZWY4OTc1ZWM1MjdkMGZiZDNhNjkxMTg5YmJiNWQ2ONlaKSg=: ]]
00:28:28.771   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGI0OTkzMDNkOGU1NjE5ZmZjNjJmNTYyMzFlZWQ3ZDIwZWY4OTc1ZWM1MjdkMGZiZDNhNjkxMTg5YmJiNWQ2ONlaKSg=:
00:28:28.771   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0
00:28:28.771   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:28.771   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:28:28.771   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:28:28.771   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:28:28.771   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:28.771   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:28:28.771   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:28.771   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:28.771   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:28.771    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:28.771    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:28.771    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:28.771    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:28.771    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:28.771    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:28.771    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:28.771    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:28.771    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:28.771    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:28.771    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:28.771   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:28:28.771   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:28.771   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:29.030  nvme0n1
00:28:29.030   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:29.030    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:29.030    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:29.030    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:29.030    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:29.030    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:29.030   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:29.030   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:29.030   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:29.030   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:29.030   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:29.030   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:29.030   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1
00:28:29.030   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:29.030   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:28:29.030   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:28:29.030   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:28:29.030   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWU1ZmJlMGYwN2Y3MGU3NTFhN2NjZjY4M2ZkNGRkMGZkOWRmNmQ5YmRjNDJhMTk3lr1K/g==:
00:28:29.030   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjViMDBhMGNhZmUwOTk5NTFmZjNlYzEzYzhmODYxODc3YTAyN2ZhODdhODE1OTg0SHWihg==:
00:28:29.030   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:28:29.030   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:28:29.030   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWU1ZmJlMGYwN2Y3MGU3NTFhN2NjZjY4M2ZkNGRkMGZkOWRmNmQ5YmRjNDJhMTk3lr1K/g==:
00:28:29.030   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjViMDBhMGNhZmUwOTk5NTFmZjNlYzEzYzhmODYxODc3YTAyN2ZhODdhODE1OTg0SHWihg==: ]]
00:28:29.030   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjViMDBhMGNhZmUwOTk5NTFmZjNlYzEzYzhmODYxODc3YTAyN2ZhODdhODE1OTg0SHWihg==:
00:28:29.030   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1
00:28:29.030   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:29.030   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:28:29.030   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:28:29.030   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:28:29.030   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:29.030   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:28:29.030   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:29.030   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:29.030   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:29.030    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:29.030    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:29.030    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:29.030    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:29.030    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:29.030    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:29.030    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:29.030    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:29.030    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:29.030    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:29.030    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:29.030   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:28:29.030   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:29.030   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:29.291  nvme0n1
00:28:29.291   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:29.291    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:29.291    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:29.291    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:29.291    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:29.291    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:29.291   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:29.291   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:29.291   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:29.291   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:29.291   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
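The get_main_ns_ip traces interleaved above resolve the address passed to -a: an associative array maps the transport to the name of the environment variable that holds the address, and that name is then expanded to its value. A condensed reconstruction, assuming the function uses bash indirect expansion (the xtrace only shows the already-expanded values):

  # hypothetical condensation of get_main_ns_ip for the tcp transport
  declare -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
  NVMF_INITIATOR_IP=10.0.0.1
  ip=${ip_candidates[tcp]}   # -> NVMF_INITIATOR_IP (the variable name)
  echo "${!ip}"              # indirect expansion -> 10.0.0.1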
00:28:29.291   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:29.292   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2
00:28:29.292   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:29.292   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:28:29.292   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:28:29.292   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:28:29.292   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWQ2YWNiMmZiNmY4NGFjNzZiMTMwOWRiZjZhNGJiZDOyK4DQ:
00:28:29.292   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTY0Y2VjMzFlYjUyODQyZjc5ZjEzMjU2MDZkZmYxZjh8OXnd:
00:28:29.292   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:28:29.292   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:28:29.292   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWQ2YWNiMmZiNmY4NGFjNzZiMTMwOWRiZjZhNGJiZDOyK4DQ:
00:28:29.292   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTY0Y2VjMzFlYjUyODQyZjc5ZjEzMjU2MDZkZmYxZjh8OXnd: ]]
00:28:29.292   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTY0Y2VjMzFlYjUyODQyZjc5ZjEzMjU2MDZkZmYxZjh8OXnd:
00:28:29.292   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2
00:28:29.292   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:29.292   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:28:29.292   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:28:29.292   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:28:29.292   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:29.292   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:28:29.292   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:29.292   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:29.292   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:29.292    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:29.292    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:29.292    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:29.292    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:29.292    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:29.292    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:29.292    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:29.292    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:29.292    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:29.292    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:29.292    19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:29.292   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:28:29.292   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:29.292   19:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:29.550  nvme0n1
00:28:29.550   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:29.550    19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:29.550    19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:29.550    19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:29.550    19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:29.550    19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:29.550   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:29.550   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:29.550   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:29.550   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:29.550   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:29.550   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:29.550   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3
00:28:29.550   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:29.550   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:28:29.550   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:28:29.550   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:28:29.550   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmEwN2EwNWJiYTQxMGJiYWY2ZGFlNGFiMmIyZDllYTJjMDhhYzBmOTY2MjY1MWM1Xl2EhQ==:
00:28:29.550   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2VhNGNkNDM1NWU4ZjkyMWZlYmM0NDIzNGM0ZDVlOTWK7Yms:
00:28:29.550   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:28:29.550   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:28:29.550   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmEwN2EwNWJiYTQxMGJiYWY2ZGFlNGFiMmIyZDllYTJjMDhhYzBmOTY2MjY1MWM1Xl2EhQ==:
00:28:29.550   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2VhNGNkNDM1NWU4ZjkyMWZlYmM0NDIzNGM0ZDVlOTWK7Yms: ]]
00:28:29.550   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2VhNGNkNDM1NWU4ZjkyMWZlYmM0NDIzNGM0ZDVlOTWK7Yms:
00:28:29.550   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3
00:28:29.550   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:29.550   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:28:29.550   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:28:29.550   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:28:29.550   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:29.550   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:28:29.550   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:29.550   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:29.550   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:29.550    19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:29.550    19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:29.550    19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:29.550    19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:29.550    19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:29.550    19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:29.550    19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:29.550    19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:29.550    19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:29.550    19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:29.550    19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:29.550   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:28:29.550   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:29.550   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:29.809  nvme0n1
00:28:29.809   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:29.809    19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:29.809    19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:29.809    19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:29.809    19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:29.809    19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:29.809   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:29.809   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:29.809   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:29.809   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:29.809   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:29.809   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:29.809   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4
00:28:29.809   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:29.809   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:28:29.809   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:28:29.809   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:28:29.809   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjg4Y2YyZTQzZjZkZDVmZGE5YTBkNWFkYzA0YTg0NTk5ZmVmNjMyZDhmMGM2MTc1OWE0MDk3NGE5MDFmYjM3YcCJh5o=:
00:28:29.809   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:28:29.809   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:28:29.809   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:28:29.809   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjg4Y2YyZTQzZjZkZDVmZGE5YTBkNWFkYzA0YTg0NTk5ZmVmNjMyZDhmMGM2MTc1OWE0MDk3NGE5MDFmYjM3YcCJh5o=:
00:28:29.809   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:28:29.809   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4
00:28:29.809   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:29.809   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:28:29.809   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:28:29.809   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:28:29.809   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:29.809   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:28:29.809   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:29.809   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:29.809   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:29.809    19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:29.809    19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:29.809    19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:29.809    19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:29.809    19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:29.809    19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:29.809    19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:29.809    19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:29.809    19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:29.809    19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:29.809    19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:29.809   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:28:29.809   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:29.809   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:30.068  nvme0n1
00:28:30.068   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:30.068    19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:30.068    19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:30.068    19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:30.068    19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:30.068    19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:30.068   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:30.068   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:30.068   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:30.068   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:30.068   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
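The attach for keyid 4 just above carries only --dhchap-key key4 because host/auth.sh@46 sets ckey to the empty string for that index, so the array assignment at host/auth.sh@58 omits the controller-key flag entirely. A small sketch of that ":+" idiom with hypothetical values:

  # ${var:+alt} expands to alt only when var is set and non-empty
  ckeys=("c0" "c1" "")            # hypothetical: index 2 has no controller key
  keyid=1
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  echo "${ckey[@]}"               # prints: --dhchap-ctrlr-key ckey1
  keyid=2
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  echo "${ckey[@]}"               # prints an empty line: the flag is omitted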
00:28:30.068   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:28:30.068   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:30.068   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0
00:28:30.068   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:30.068   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:28:30.068   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:28:30.068   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:28:30.068   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWY0ZWMwMWIyM2E0OGFhMzQ5YTM4Njg2N2Y0MDEwZDXoDg5D:
00:28:30.068   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGI0OTkzMDNkOGU1NjE5ZmZjNjJmNTYyMzFlZWQ3ZDIwZWY4OTc1ZWM1MjdkMGZiZDNhNjkxMTg5YmJiNWQ2ONlaKSg=:
00:28:30.068   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:28:30.068   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:28:30.068   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWY0ZWMwMWIyM2E0OGFhMzQ5YTM4Njg2N2Y0MDEwZDXoDg5D:
00:28:30.068   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGI0OTkzMDNkOGU1NjE5ZmZjNjJmNTYyMzFlZWQ3ZDIwZWY4OTc1ZWM1MjdkMGZiZDNhNjkxMTg5YmJiNWQ2ONlaKSg=: ]]
00:28:30.068   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGI0OTkzMDNkOGU1NjE5ZmZjNjJmNTYyMzFlZWQ3ZDIwZWY4OTc1ZWM1MjdkMGZiZDNhNjkxMTg5YmJiNWQ2ONlaKSg=:
00:28:30.068   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0
00:28:30.068   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:30.068   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:28:30.068   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:28:30.068   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:28:30.068   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:30.068   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:28:30.068   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:30.068   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:30.068   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:30.068    19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:30.068    19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:30.068    19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:30.068    19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:30.068    19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:30.068    19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:30.069    19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:30.069    19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:30.069    19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:30.069    19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:30.069    19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:30.069   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:28:30.069   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:30.069   19:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:30.328  nvme0n1
00:28:30.328   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:30.328    19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:30.328    19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:30.328    19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:30.328    19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:30.328    19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:30.328   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:30.328   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:30.328   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:30.328   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:30.328   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:30.328   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:30.328   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1
00:28:30.328   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:30.328   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:28:30.328   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:28:30.328   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:28:30.328   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWU1ZmJlMGYwN2Y3MGU3NTFhN2NjZjY4M2ZkNGRkMGZkOWRmNmQ5YmRjNDJhMTk3lr1K/g==:
00:28:30.328   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjViMDBhMGNhZmUwOTk5NTFmZjNlYzEzYzhmODYxODc3YTAyN2ZhODdhODE1OTg0SHWihg==:
00:28:30.328   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:28:30.328   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:28:30.328   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWU1ZmJlMGYwN2Y3MGU3NTFhN2NjZjY4M2ZkNGRkMGZkOWRmNmQ5YmRjNDJhMTk3lr1K/g==:
00:28:30.328   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjViMDBhMGNhZmUwOTk5NTFmZjNlYzEzYzhmODYxODc3YTAyN2ZhODdhODE1OTg0SHWihg==: ]]
00:28:30.328   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjViMDBhMGNhZmUwOTk5NTFmZjNlYzEzYzhmODYxODc3YTAyN2ZhODdhODE1OTg0SHWihg==:
00:28:30.328   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1
00:28:30.328   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:30.328   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:28:30.328   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:28:30.328   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:28:30.328   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:30.328   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:28:30.328   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:30.328   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:30.328   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:30.328    19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:30.328    19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:30.328    19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:30.328    19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:30.328    19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:30.328    19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:30.328    19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:30.328    19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:30.328    19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:30.328    19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:30.328    19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:30.328   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:28:30.328   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:30.328   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:30.896  nvme0n1
00:28:30.896   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:30.896    19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:30.896    19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:30.896    19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:30.896    19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:30.896    19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:30.896   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:30.896   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:30.896   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:30.896   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:30.896   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
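On the target side, the echoes at host/auth.sh@48-@51 in each iteration publish the digest, dhgroup, and DHHC-1 secrets for the host NQN to the kernel nvmet target. The trace does not show where those echoes land; assuming the standard nvmet configfs attributes (an assumption, not confirmed by this log), the hand-run equivalent would look roughly like:

  # hypothetical target-side counterpart of nvmet_auth_set_key (configfs paths assumed)
  hostnqn=nqn.2024-02.io.spdk:host0
  echo 'hmac(sha512)' > /sys/kernel/config/nvmet/hosts/$hostnqn/dhchap_hash
  echo ffdhe6144      > /sys/kernel/config/nvmet/hosts/$hostnqn/dhchap_dhgroup
  echo "$key"         > /sys/kernel/config/nvmet/hosts/$hostnqn/dhchap_key        # DHHC-1 secret echoed at @50
  echo "$ckey"        > /sys/kernel/config/nvmet/hosts/$hostnqn/dhchap_ctrl_key   # controller secret echoed at @51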
00:28:30.896   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:30.896   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2
00:28:30.896   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:30.896   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:28:30.896   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:28:30.896   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:28:30.896   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWQ2YWNiMmZiNmY4NGFjNzZiMTMwOWRiZjZhNGJiZDOyK4DQ:
00:28:30.896   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTY0Y2VjMzFlYjUyODQyZjc5ZjEzMjU2MDZkZmYxZjh8OXnd:
00:28:30.896   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:28:30.896   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:28:30.896   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWQ2YWNiMmZiNmY4NGFjNzZiMTMwOWRiZjZhNGJiZDOyK4DQ:
00:28:30.896   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTY0Y2VjMzFlYjUyODQyZjc5ZjEzMjU2MDZkZmYxZjh8OXnd: ]]
00:28:30.896   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTY0Y2VjMzFlYjUyODQyZjc5ZjEzMjU2MDZkZmYxZjh8OXnd:
00:28:30.896   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2
00:28:30.896   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:30.896   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:28:30.896   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:28:30.896   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:28:30.896   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:30.896   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:28:30.896   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:30.896   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:30.896   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:30.896    19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:30.896    19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:30.896    19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:30.896    19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:30.896    19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:30.896    19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:30.896    19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:30.896    19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:30.896    19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:30.896    19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:30.896    19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:30.896   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:28:30.896   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:30.896   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:31.156  nvme0n1
00:28:31.156   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:31.156    19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:31.156    19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:31.156    19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:31.156    19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:31.156    19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:31.156   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:31.156   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:31.156   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:31.156   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:31.156   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:31.156   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:31.156   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3
00:28:31.156   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:31.156   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:28:31.156   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:28:31.156   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:28:31.156   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmEwN2EwNWJiYTQxMGJiYWY2ZGFlNGFiMmIyZDllYTJjMDhhYzBmOTY2MjY1MWM1Xl2EhQ==:
00:28:31.156   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2VhNGNkNDM1NWU4ZjkyMWZlYmM0NDIzNGM0ZDVlOTWK7Yms:
00:28:31.156   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:28:31.156   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:28:31.156   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmEwN2EwNWJiYTQxMGJiYWY2ZGFlNGFiMmIyZDllYTJjMDhhYzBmOTY2MjY1MWM1Xl2EhQ==:
00:28:31.156   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2VhNGNkNDM1NWU4ZjkyMWZlYmM0NDIzNGM0ZDVlOTWK7Yms: ]]
00:28:31.156   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2VhNGNkNDM1NWU4ZjkyMWZlYmM0NDIzNGM0ZDVlOTWK7Yms:
00:28:31.156   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3
00:28:31.156   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:31.156   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:28:31.156   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:28:31.156   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:28:31.156   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:31.156   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:28:31.156   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:31.156   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:31.156   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:31.156    19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:31.156    19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:31.156    19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:31.156    19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:31.156    19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:31.156    19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:31.156    19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:31.156    19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:31.156    19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:31.156    19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:31.156    19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:31.156   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:28:31.156   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:31.156   19:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:31.415  nvme0n1
00:28:31.415   19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:31.415    19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:31.415    19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:31.415    19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:31.415    19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:31.415    19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:31.673   19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:31.673   19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:31.673   19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:31.673   19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:31.673   19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:31.673   19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:31.673   19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4
00:28:31.673   19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:31.673   19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:28:31.673   19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:28:31.673   19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:28:31.673   19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjg4Y2YyZTQzZjZkZDVmZGE5YTBkNWFkYzA0YTg0NTk5ZmVmNjMyZDhmMGM2MTc1OWE0MDk3NGE5MDFmYjM3YcCJh5o=:
00:28:31.673   19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:28:31.673   19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:28:31.673   19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:28:31.673   19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjg4Y2YyZTQzZjZkZDVmZGE5YTBkNWFkYzA0YTg0NTk5ZmVmNjMyZDhmMGM2MTc1OWE0MDk3NGE5MDFmYjM3YcCJh5o=:
00:28:31.673   19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:28:31.673   19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4
00:28:31.673   19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:31.673   19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:28:31.673   19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:28:31.673   19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:28:31.673   19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:31.673   19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:28:31.673   19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:31.673   19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:31.673   19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:31.673    19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:31.673    19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:31.673    19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:31.673    19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:31.673    19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:31.673    19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:31.673    19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:31.673    19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:31.673    19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:31.673    19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:31.673    19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:31.673   19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:28:31.673   19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:31.673   19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:31.932  nvme0n1
00:28:31.932   19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:31.932    19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:31.932    19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:31.932    19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:31.932    19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:31.932    19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:31.932   19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:31.932   19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:31.932   19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:31.932   19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:31.932   19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:31.932   19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:28:31.932   19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:31.932   19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0
00:28:31.932   19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:31.932   19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:28:31.932   19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:28:31.932   19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:28:31.932   19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWY0ZWMwMWIyM2E0OGFhMzQ5YTM4Njg2N2Y0MDEwZDXoDg5D:
00:28:31.932   19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGI0OTkzMDNkOGU1NjE5ZmZjNjJmNTYyMzFlZWQ3ZDIwZWY4OTc1ZWM1MjdkMGZiZDNhNjkxMTg5YmJiNWQ2ONlaKSg=:
00:28:31.932   19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:28:31.932   19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:28:31.932   19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWY0ZWMwMWIyM2E0OGFhMzQ5YTM4Njg2N2Y0MDEwZDXoDg5D:
00:28:31.932   19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGI0OTkzMDNkOGU1NjE5ZmZjNjJmNTYyMzFlZWQ3ZDIwZWY4OTc1ZWM1MjdkMGZiZDNhNjkxMTg5YmJiNWQ2ONlaKSg=: ]]
00:28:31.932   19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGI0OTkzMDNkOGU1NjE5ZmZjNjJmNTYyMzFlZWQ3ZDIwZWY4OTc1ZWM1MjdkMGZiZDNhNjkxMTg5YmJiNWQ2ONlaKSg=:
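[annotation] The nvmet_auth_set_key trace above echoes the digest ('hmac(sha512)'), the DH group (ffdhe8192) and the two DHHC-1 secrets for the host entry; the redirect targets are not visible in the xtrace. A minimal sketch of the equivalent target-side writes, assuming a kernel nvmet target and the usual configfs layout (the configfs path and attribute names are assumptions; digest, group, host NQN and key strings come from the trace):
  host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed configfs location
  echo 'hmac(sha512)' > "$host_dir/dhchap_hash"        # digest echoed by the trace
  echo 'ffdhe8192'    > "$host_dir/dhchap_dhgroup"     # DH group echoed by the trace
  echo "$key"         > "$host_dir/dhchap_key"         # DHHC-1:00:... host key from the trace
  echo "$ckey"        > "$host_dir/dhchap_ctrlr_key"   # DHHC-1:03:... controller key from the trace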
00:28:31.932   19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0
00:28:31.932   19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:31.932   19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:28:31.932   19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:28:31.932   19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:28:31.932   19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:31.932   19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:28:31.932   19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:31.932   19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:31.932   19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:31.932    19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:31.932    19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:31.932    19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:31.932    19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:31.932    19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:31.932    19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:31.932    19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:31.932    19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:31.932    19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:31.932    19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:31.932    19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:31.932   19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:28:31.932   19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:31.932   19:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:32.500  nvme0n1
00:28:32.500   19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:32.500    19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:32.500    19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:32.500    19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:32.500    19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:32.500    19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:32.500   19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:32.500   19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:32.500   19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:32.500   19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:32.500   19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
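[annotation] Each connect_authenticate iteration above follows the same host-side cycle: restrict the allowed digest/DH group, attach with the DH-CHAP key (plus controller key when one is defined), check that the controller appears, then detach. A minimal sketch under the assumption that rpc_cmd forwards to SPDK's scripts/rpc.py (the path is an assumption; flags and key names are taken from the trace, and key0/ckey0 are key names registered earlier in the run):
  rpc=./scripts/rpc.py   # assumed path to the RPC client wrapped by rpc_cmd
  "$rpc" bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
  "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  [[ $("$rpc" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]   # controller came up
  "$rpc" bdev_nvme_detach_controller nvme0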
00:28:32.500   19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:32.500   19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1
00:28:32.500   19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:32.500   19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:28:32.500   19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:28:32.500   19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:28:32.500   19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWU1ZmJlMGYwN2Y3MGU3NTFhN2NjZjY4M2ZkNGRkMGZkOWRmNmQ5YmRjNDJhMTk3lr1K/g==:
00:28:32.500   19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjViMDBhMGNhZmUwOTk5NTFmZjNlYzEzYzhmODYxODc3YTAyN2ZhODdhODE1OTg0SHWihg==:
00:28:32.500   19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:28:32.500   19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:28:32.500   19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWU1ZmJlMGYwN2Y3MGU3NTFhN2NjZjY4M2ZkNGRkMGZkOWRmNmQ5YmRjNDJhMTk3lr1K/g==:
00:28:32.500   19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjViMDBhMGNhZmUwOTk5NTFmZjNlYzEzYzhmODYxODc3YTAyN2ZhODdhODE1OTg0SHWihg==: ]]
00:28:32.500   19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjViMDBhMGNhZmUwOTk5NTFmZjNlYzEzYzhmODYxODc3YTAyN2ZhODdhODE1OTg0SHWihg==:
00:28:32.500   19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1
00:28:32.500   19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:32.500   19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:28:32.500   19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:28:32.500   19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:28:32.500   19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:32.500   19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:28:32.500   19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:32.500   19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:32.500   19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:32.500    19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:32.500    19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:32.500    19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:32.500    19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:32.500    19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:32.500    19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:32.500    19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:32.500    19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:32.500    19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:32.500    19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:32.500    19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:32.500   19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:28:32.500   19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:32.500   19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:33.068  nvme0n1
00:28:33.068   19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:33.068    19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:33.068    19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:33.068    19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:33.068    19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:33.068    19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:33.068   19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:33.068   19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:33.068   19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:33.068   19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:33.068   19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:33.068   19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:33.068   19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2
00:28:33.068   19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:33.068   19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:28:33.068   19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:28:33.068   19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:28:33.068   19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWQ2YWNiMmZiNmY4NGFjNzZiMTMwOWRiZjZhNGJiZDOyK4DQ:
00:28:33.068   19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTY0Y2VjMzFlYjUyODQyZjc5ZjEzMjU2MDZkZmYxZjh8OXnd:
00:28:33.068   19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:28:33.068   19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:28:33.068   19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWQ2YWNiMmZiNmY4NGFjNzZiMTMwOWRiZjZhNGJiZDOyK4DQ:
00:28:33.068   19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTY0Y2VjMzFlYjUyODQyZjc5ZjEzMjU2MDZkZmYxZjh8OXnd: ]]
00:28:33.068   19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTY0Y2VjMzFlYjUyODQyZjc5ZjEzMjU2MDZkZmYxZjh8OXnd:
00:28:33.068   19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2
00:28:33.068   19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:33.068   19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:28:33.068   19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:28:33.068   19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:28:33.068   19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:33.068   19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:28:33.068   19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:33.068   19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:33.068   19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:33.068    19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:33.068    19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:33.069    19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:33.069    19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:33.069    19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:33.069    19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:33.069    19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:33.069    19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:33.069    19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:33.069    19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:33.069    19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:33.069   19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:28:33.069   19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:33.069   19:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:33.636  nvme0n1
00:28:33.636   19:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:33.636    19:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:33.636    19:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:33.636    19:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:33.636    19:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:33.636    19:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:33.636   19:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:33.636   19:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:33.636   19:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:33.636   19:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:33.636   19:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:33.636   19:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:33.636   19:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3
00:28:33.636   19:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:33.636   19:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:28:33.636   19:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:28:33.636   19:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:28:33.636   19:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmEwN2EwNWJiYTQxMGJiYWY2ZGFlNGFiMmIyZDllYTJjMDhhYzBmOTY2MjY1MWM1Xl2EhQ==:
00:28:33.636   19:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2VhNGNkNDM1NWU4ZjkyMWZlYmM0NDIzNGM0ZDVlOTWK7Yms:
00:28:33.636   19:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:28:33.636   19:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:28:33.636   19:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmEwN2EwNWJiYTQxMGJiYWY2ZGFlNGFiMmIyZDllYTJjMDhhYzBmOTY2MjY1MWM1Xl2EhQ==:
00:28:33.636   19:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2VhNGNkNDM1NWU4ZjkyMWZlYmM0NDIzNGM0ZDVlOTWK7Yms: ]]
00:28:33.636   19:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2VhNGNkNDM1NWU4ZjkyMWZlYmM0NDIzNGM0ZDVlOTWK7Yms:
00:28:33.895   19:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3
00:28:33.895   19:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:33.895   19:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:28:33.895   19:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:28:33.895   19:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:28:33.895   19:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:33.895   19:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:28:33.895   19:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:33.895   19:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:33.895   19:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:33.895    19:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:33.896    19:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:33.896    19:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:33.896    19:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:33.896    19:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:33.896    19:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:33.896    19:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:33.896    19:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:33.896    19:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:33.896    19:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:33.896    19:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:33.896   19:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:28:33.896   19:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:33.896   19:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:34.154  nvme0n1
00:28:34.154   19:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:34.154    19:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:34.154    19:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:34.154    19:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:34.154    19:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:34.154    19:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:34.413   19:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:34.413   19:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:34.413   19:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:34.413   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:34.413   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:34.413   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:34.413   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4
00:28:34.413   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:34.413   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:28:34.413   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:28:34.413   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:28:34.413   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjg4Y2YyZTQzZjZkZDVmZGE5YTBkNWFkYzA0YTg0NTk5ZmVmNjMyZDhmMGM2MTc1OWE0MDk3NGE5MDFmYjM3YcCJh5o=:
00:28:34.413   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:28:34.413   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:28:34.413   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:28:34.413   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjg4Y2YyZTQzZjZkZDVmZGE5YTBkNWFkYzA0YTg0NTk5ZmVmNjMyZDhmMGM2MTc1OWE0MDk3NGE5MDFmYjM3YcCJh5o=:
00:28:34.413   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:28:34.413   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4
00:28:34.413   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:34.413   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:28:34.414   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:28:34.414   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:28:34.414   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:34.414   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:28:34.414   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:34.414   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:34.414   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:34.414    19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:34.414    19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:34.414    19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:34.414    19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:34.414    19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:34.414    19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:34.414    19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:34.414    19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:34.414    19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:34.414    19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:34.414    19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:34.414   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:28:34.414   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:34.414   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:34.982  nvme0n1
00:28:34.982   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:34.982    19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:34.982    19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:34.982    19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:34.982    19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:34.982    19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:34.982   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:34.982   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:34.982   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:34.982   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:34.982   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:34.982   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:28:34.982   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:34.982   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:34.982   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:28:34.982   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:28:34.982   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWU1ZmJlMGYwN2Y3MGU3NTFhN2NjZjY4M2ZkNGRkMGZkOWRmNmQ5YmRjNDJhMTk3lr1K/g==:
00:28:34.982   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjViMDBhMGNhZmUwOTk5NTFmZjNlYzEzYzhmODYxODc3YTAyN2ZhODdhODE1OTg0SHWihg==:
00:28:34.982   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:34.982   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:28:34.982   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWU1ZmJlMGYwN2Y3MGU3NTFhN2NjZjY4M2ZkNGRkMGZkOWRmNmQ5YmRjNDJhMTk3lr1K/g==:
00:28:34.982   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjViMDBhMGNhZmUwOTk5NTFmZjNlYzEzYzhmODYxODc3YTAyN2ZhODdhODE1OTg0SHWihg==: ]]
00:28:34.982   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjViMDBhMGNhZmUwOTk5NTFmZjNlYzEzYzhmODYxODc3YTAyN2ZhODdhODE1OTg0SHWihg==:
00:28:34.982   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:28:34.982   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:34.982   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:34.982   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:34.982    19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip
00:28:34.982    19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:34.982    19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:34.982    19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:34.982    19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:34.982    19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:34.982    19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:34.982    19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:34.982    19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:34.982    19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:34.982    19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:34.982   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:28:34.982   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0
00:28:34.982   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:28:34.982   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:28:34.982   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:28:34.982    19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:28:34.982   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:28:34.982   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:28:34.982   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:34.982   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:34.982  2024/12/13 19:13:06 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error
00:28:34.982  request:
00:28:34.982  {
00:28:34.982  "method": "bdev_nvme_attach_controller",
00:28:34.982  "params": {
00:28:34.982  "name": "nvme0",
00:28:34.982  "trtype": "tcp",
00:28:34.982  "traddr": "10.0.0.1",
00:28:34.982  "adrfam": "ipv4",
00:28:34.982  "trsvcid": "4420",
00:28:34.982  "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:28:34.982  "hostnqn": "nqn.2024-02.io.spdk:host0",
00:28:34.982  "prchk_reftag": false,
00:28:34.982  "prchk_guard": false,
00:28:34.982  "hdgst": false,
00:28:34.982  "ddgst": false,
00:28:34.982  "allow_unrecognized_csi": false
00:28:34.982  }
00:28:34.982  }
00:28:34.982  Got JSON-RPC error response
00:28:34.982  GoRPCClient: error on JSON-RPC call
00:28:34.982   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:28:34.982   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1
00:28:34.982   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:28:34.982   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:28:34.982   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 ))
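[annotation] The NOT wrapper above asserts the opposite outcome: with authentication still required by the target, an attach without any --dhchap-key must be rejected, which the log records as a Code=-5 Input/output error on bdev_nvme_attach_controller. A hedged sketch of that negative check (same assumed rpc.py path as the earlier sketch):
  rpc=./scripts/rpc.py   # assumed path
  if "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0; then
      echo "unexpected: unauthenticated attach succeeded" >&2
      exit 1
  fi
  (( $("$rpc" bdev_nvme_get_controllers | jq length) == 0 ))   # no controller left behind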
00:28:34.982    19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers
00:28:34.982    19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length
00:28:34.982    19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:34.982    19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:34.982    19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:34.982   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 ))
00:28:34.982    19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip
00:28:34.982    19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:34.982    19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:34.982    19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:34.982    19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:34.982    19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:34.982    19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:34.982    19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:34.982    19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:34.982    19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:34.982    19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:34.982   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:28:34.982   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0
00:28:34.982   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:28:34.982   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:28:34.982   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:28:34.982    19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:28:34.982   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:28:34.983   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:28:34.983   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:34.983   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:34.983  request:
00:28:34.983  {
00:28:34.983  "method": "bdev_nvme_attach_controller",
00:28:34.983  "params": {
00:28:34.983  "name": "nvme0",
00:28:34.983  "trtype": "tcp",
00:28:34.983  "traddr": "10.0.0.1",
00:28:34.983  "adrfam": "ipv4",
00:28:34.983  "trsvcid": "4420",
00:28:34.983  "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:28:34.983  "hostnqn": "nqn.2024-02.io.spdk:host0",
00:28:34.983  "prchk_reftag": false,
00:28:34.983  "prchk_guard": false,
00:28:34.983  "hdgst": false,
00:28:34.983  "ddgst": false,
00:28:34.983  "dhchap_key": "key2",
00:28:34.983  "allow_unrecognized_csi": false
00:28:34.983  }
00:28:34.983  }
00:28:34.983  2024/12/13 19:13:06 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error
00:28:34.983  Got JSON-RPC error response
00:28:34.983  GoRPCClient: error on JSON-RPC call
00:28:34.983   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:28:34.983   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1
00:28:34.983   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:28:34.983   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:28:34.983   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:28:34.983    19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers
00:28:34.983    19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length
00:28:34.983    19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:34.983    19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:34.983    19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:35.242   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 ))
00:28:35.242    19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip
00:28:35.242    19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:35.242    19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:35.242    19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:35.242    19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:35.242    19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:35.242    19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:35.242    19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:35.242    19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:35.242    19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:35.242    19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:35.242   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:28:35.242   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0
00:28:35.242   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:28:35.242   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:28:35.242   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:28:35.242    19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:28:35.242   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:28:35.242   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:28:35.242   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:35.242   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:35.242  request:
00:28:35.242  {
00:28:35.242  "method": "bdev_nvme_attach_controller",
00:28:35.242  "params": {
00:28:35.242  "name": "nvme0",
00:28:35.242  "trtype": "tcp",
00:28:35.242  "traddr": "10.0.0.1",
00:28:35.242  "adrfam": "ipv4",
00:28:35.242  "trsvcid": "4420",
00:28:35.242  "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:28:35.242  "hostnqn": "nqn.2024-02.io.spdk:host0",
00:28:35.242  "prchk_reftag": false,
00:28:35.242  "prchk_guard": false,
00:28:35.242  "hdgst": false,
00:28:35.242  "ddgst": false,
00:28:35.242  "dhchap_key": "key1",
00:28:35.242  "dhchap_ctrlr_key": "ckey2",
00:28:35.242  "allow_unrecognized_csi": false
00:28:35.242  }
00:28:35.242  }
00:28:35.242  Got JSON-RPC error response
00:28:35.242  2024/12/13 19:13:06 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error
00:28:35.242  GoRPCClient: error on JSON-RPC call
00:28:35.242   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:28:35.242   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1
00:28:35.242   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:28:35.242   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:28:35.242   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:28:35.242    19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip
00:28:35.242    19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:35.242    19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:35.242    19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:35.242    19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:35.242    19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:35.242    19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:35.242    19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:35.242    19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:35.242    19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:35.242    19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:35.242   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:28:35.242   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:35.242   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:35.242  nvme0n1
00:28:35.242   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:35.242   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2
00:28:35.242   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:35.242   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:35.242   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:28:35.242   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:28:35.242   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWQ2YWNiMmZiNmY4NGFjNzZiMTMwOWRiZjZhNGJiZDOyK4DQ:
00:28:35.242   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTY0Y2VjMzFlYjUyODQyZjc5ZjEzMjU2MDZkZmYxZjh8OXnd:
00:28:35.242   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:35.242   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:28:35.242   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWQ2YWNiMmZiNmY4NGFjNzZiMTMwOWRiZjZhNGJiZDOyK4DQ:
00:28:35.242   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTY0Y2VjMzFlYjUyODQyZjc5ZjEzMjU2MDZkZmYxZjh8OXnd: ]]
00:28:35.242   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTY0Y2VjMzFlYjUyODQyZjc5ZjEzMjU2MDZkZmYxZjh8OXnd:
00:28:35.242   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:28:35.242   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:35.242   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:35.242   19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:35.242    19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers
00:28:35.242    19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:35.242    19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:35.242    19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name'
00:28:35.242    19:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:35.242   19:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]]
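[annotation] After the target's keys are rotated to keyid 2, the trace re-keys the already-attached nvme0 controller with bdev_nvme_set_keys and confirms it is still connected; the following NOT call then shows that offering a key the target no longer accepts fails with Permission denied. A sketch of the successful rotation (assumed rpc.py path; key2/ckey2 are key names registered earlier in the run):
  rpc=./scripts/rpc.py   # assumed path
  "$rpc" bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
  [[ $("$rpc" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]   # still attached after re-key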
00:28:35.242   19:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:28:35.242   19:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0
00:28:35.242   19:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:28:35.242   19:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:28:35.242   19:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:28:35.242    19:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:28:35.242   19:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:28:35.242   19:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:28:35.242   19:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:35.242   19:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:35.242  2024/12/13 19:13:07 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:ckey2 dhchap_key:key1 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied
00:28:35.501  request:
00:28:35.501  {
00:28:35.501  "method": "bdev_nvme_set_keys",
00:28:35.501  "params": {
00:28:35.501  "name": "nvme0",
00:28:35.501  "dhchap_key": "key1",
00:28:35.501  "dhchap_ctrlr_key": "ckey2"
00:28:35.501  }
00:28:35.501  }
00:28:35.501  Got JSON-RPC error response
00:28:35.501  GoRPCClient: error on JSON-RPC call
00:28:35.501   19:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:28:35.501   19:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1
00:28:35.501   19:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:28:35.501   19:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:28:35.501   19:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:28:35.501    19:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers
00:28:35.501    19:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:35.501    19:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:35.501    19:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length
00:28:35.501    19:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:35.501   19:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 ))
00:28:35.501   19:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s
00:28:36.436    19:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers
00:28:36.436    19:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length
00:28:36.436    19:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:36.436    19:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:36.436    19:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:36.436   19:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 ))
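[annotation] Because this controller was attached with --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1, the rejected re-key is expected to drop the connection within about a second, so the trace polls bdev_nvme_get_controllers until the count reaches zero before starting the next cycle. A compact sketch of that wait (assumed rpc.py path; the bounded retry count is an addition, the 1 s sleep mirrors the trace):
  rpc=./scripts/rpc.py   # assumed path
  for _ in $(seq 1 10); do
      (( $("$rpc" bdev_nvme_get_controllers | jq length) == 0 )) && break
      sleep 1
  done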
00:28:36.436   19:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:28:36.436   19:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:36.436   19:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:36.436   19:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:28:36.436   19:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:28:36.436   19:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWU1ZmJlMGYwN2Y3MGU3NTFhN2NjZjY4M2ZkNGRkMGZkOWRmNmQ5YmRjNDJhMTk3lr1K/g==:
00:28:36.436   19:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjViMDBhMGNhZmUwOTk5NTFmZjNlYzEzYzhmODYxODc3YTAyN2ZhODdhODE1OTg0SHWihg==:
00:28:36.436   19:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:36.436   19:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:28:36.436   19:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWU1ZmJlMGYwN2Y3MGU3NTFhN2NjZjY4M2ZkNGRkMGZkOWRmNmQ5YmRjNDJhMTk3lr1K/g==:
00:28:36.436   19:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjViMDBhMGNhZmUwOTk5NTFmZjNlYzEzYzhmODYxODc3YTAyN2ZhODdhODE1OTg0SHWihg==: ]]
00:28:36.436   19:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjViMDBhMGNhZmUwOTk5NTFmZjNlYzEzYzhmODYxODc3YTAyN2ZhODdhODE1OTg0SHWihg==:
00:28:36.436    19:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip
00:28:36.436    19:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:36.436    19:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:36.436    19:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:36.436    19:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:36.436    19:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:36.436    19:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:36.436    19:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:36.436    19:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:36.436    19:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:36.436    19:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:36.436   19:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:28:36.436   19:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:36.436   19:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:36.696  nvme0n1
00:28:36.696   19:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:36.696   19:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2
00:28:36.696   19:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:36.696   19:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:36.696   19:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:28:36.696   19:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:28:36.696   19:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWQ2YWNiMmZiNmY4NGFjNzZiMTMwOWRiZjZhNGJiZDOyK4DQ:
00:28:36.696   19:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTY0Y2VjMzFlYjUyODQyZjc5ZjEzMjU2MDZkZmYxZjh8OXnd:
00:28:36.696   19:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:36.696   19:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:28:36.696   19:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWQ2YWNiMmZiNmY4NGFjNzZiMTMwOWRiZjZhNGJiZDOyK4DQ:
00:28:36.696   19:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTY0Y2VjMzFlYjUyODQyZjc5ZjEzMjU2MDZkZmYxZjh8OXnd: ]]
00:28:36.696   19:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTY0Y2VjMzFlYjUyODQyZjc5ZjEzMjU2MDZkZmYxZjh8OXnd:
00:28:36.696   19:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1
00:28:36.696   19:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0
00:28:36.696   19:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1
00:28:36.696   19:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:28:36.696   19:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:28:36.696    19:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:28:36.696   19:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:28:36.696   19:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1
00:28:36.696   19:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:36.696   19:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:36.696  2024/12/13 19:13:08 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:ckey1 dhchap_key:key2 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied
00:28:36.696  request:
00:28:36.696  {
00:28:36.696  "method": "bdev_nvme_set_keys",
00:28:36.696  "params": {
00:28:36.696  "name": "nvme0",
00:28:36.696  "dhchap_key": "key2",
00:28:36.696  "dhchap_ctrlr_key": "ckey1"
00:28:36.696  }
00:28:36.696  }
00:28:36.696  Got JSON-RPC error response
00:28:36.696  GoRPCClient: error on JSON-RPC call
00:28:36.696   19:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:28:36.696   19:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1
00:28:36.696   19:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:28:36.696   19:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:28:36.696   19:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:28:36.696    19:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers
00:28:36.696    19:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length
00:28:36.696    19:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:36.696    19:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:36.696    19:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:36.696   19:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 ))
00:28:36.696   19:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s
00:28:37.631    19:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers
00:28:37.631    19:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length
00:28:37.631    19:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:37.631    19:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:37.631    19:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:37.631   19:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 ))
00:28:37.631   19:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT
00:28:37.631   19:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup
00:28:37.631   19:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini
00:28:37.631   19:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup
00:28:37.631   19:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync
00:28:37.631   19:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:28:37.631   19:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e
00:28:37.631   19:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20}
00:28:37.631   19:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:28:37.631  rmmod nvme_tcp
00:28:37.631  rmmod nvme_fabrics
00:28:37.890   19:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:28:37.890   19:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e
00:28:37.890   19:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0
00:28:37.890   19:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 114210 ']'
00:28:37.890   19:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 114210
00:28:37.890   19:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 114210 ']'
00:28:37.890   19:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 114210
00:28:37.890    19:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname
00:28:37.890   19:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:37.890    19:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 114210
00:28:37.890  killing process with pid 114210
00:28:37.890   19:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:28:37.890   19:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:28:37.890   19:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 114210'
00:28:37.890   19:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 114210
00:28:37.890   19:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 114210
00:28:37.890   19:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:28:37.890   19:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:28:37.890   19:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:28:37.890   19:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr
00:28:37.890   19:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:28:37.890   19:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save
00:28:37.890   19:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore
00:28:37.891   19:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:28:37.891   19:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:28:37.891   19:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:28:37.891   19:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:28:37.891   19:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:28:37.891   19:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:28:38.150   19:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:28:38.150   19:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:28:38.150   19:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:28:38.150   19:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:28:38.150   19:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:28:38.150   19:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:28:38.150   19:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:28:38.150   19:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:28:38.150   19:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:28:38.150   19:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns
00:28:38.150   19:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:38.150   19:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:28:38.150    19:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:38.150   19:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0
00:28:38.150   19:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
00:28:38.150   19:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:28:38.150   19:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target
00:28:38.150   19:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]]
00:28:38.150   19:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0
00:28:38.150   19:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
00:28:38.150   19:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:28:38.150   19:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1
00:28:38.150   19:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:28:38.150   19:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*)
00:28:38.150   19:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet
00:28:38.150   19:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:28:39.086  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:28:39.086  0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:28:39.086  0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:28:39.086   19:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.CrY /tmp/spdk.key-null.7JM /tmp/spdk.key-sha256.FDj /tmp/spdk.key-sha384.H7A /tmp/spdk.key-sha512.C02 /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log
00:28:39.086   19:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:28:39.344  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:28:39.605  0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:28:39.605  0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:28:39.605  
00:28:39.605  real	0m34.933s
00:28:39.605  user	0m32.136s
00:28:39.605  sys	0m3.867s
00:28:39.605   19:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable
00:28:39.605   19:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:39.605  ************************************
00:28:39.605  END TEST nvmf_auth_host
00:28:39.605  ************************************
00:28:39.605   19:13:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]]
00:28:39.605   19:13:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp
00:28:39.605   19:13:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:28:39.605   19:13:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:28:39.605   19:13:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:28:39.605  ************************************
00:28:39.605  START TEST nvmf_digest
00:28:39.605  ************************************
00:28:39.605   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp
00:28:39.605  * Looking for test storage...
00:28:39.605  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
00:28:39.605    19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:28:39.605     19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:28:39.605     19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version
00:28:39.885    19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:28:39.885    19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:28:39.885    19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l
00:28:39.885    19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l
00:28:39.885    19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-:
00:28:39.885    19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1
00:28:39.885    19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-:
00:28:39.885    19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2
00:28:39.885    19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<'
00:28:39.885    19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2
00:28:39.885    19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1
00:28:39.885    19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:28:39.885    19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in
00:28:39.885    19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1
00:28:39.885    19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 ))
00:28:39.885    19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:28:39.885     19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1
00:28:39.885     19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1
00:28:39.885     19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:28:39.885     19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1
00:28:39.885    19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1
00:28:39.885     19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2
00:28:39.886     19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2
00:28:39.886     19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:28:39.886     19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2
00:28:39.886    19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2
00:28:39.886    19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:28:39.886    19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:28:39.886    19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0
00:28:39.886    19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:28:39.886    19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:28:39.886  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:39.886  		--rc genhtml_branch_coverage=1
00:28:39.886  		--rc genhtml_function_coverage=1
00:28:39.886  		--rc genhtml_legend=1
00:28:39.886  		--rc geninfo_all_blocks=1
00:28:39.886  		--rc geninfo_unexecuted_blocks=1
00:28:39.886  		
00:28:39.886  		'
00:28:39.886    19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:28:39.886  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:39.886  		--rc genhtml_branch_coverage=1
00:28:39.886  		--rc genhtml_function_coverage=1
00:28:39.886  		--rc genhtml_legend=1
00:28:39.886  		--rc geninfo_all_blocks=1
00:28:39.886  		--rc geninfo_unexecuted_blocks=1
00:28:39.886  		
00:28:39.886  		'
00:28:39.886    19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:28:39.886  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:39.886  		--rc genhtml_branch_coverage=1
00:28:39.886  		--rc genhtml_function_coverage=1
00:28:39.886  		--rc genhtml_legend=1
00:28:39.886  		--rc geninfo_all_blocks=1
00:28:39.886  		--rc geninfo_unexecuted_blocks=1
00:28:39.886  		
00:28:39.886  		'
00:28:39.886    19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:28:39.886  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:39.886  		--rc genhtml_branch_coverage=1
00:28:39.886  		--rc genhtml_function_coverage=1
00:28:39.886  		--rc genhtml_legend=1
00:28:39.886  		--rc geninfo_all_blocks=1
00:28:39.886  		--rc geninfo_unexecuted_blocks=1
00:28:39.886  		
00:28:39.886  		'
00:28:39.886   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:28:39.886     19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s
00:28:39.886    19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:28:39.886    19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:28:39.886    19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:28:39.886    19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:28:39.886    19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:28:39.886    19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:28:39.886    19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:28:39.886    19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:28:39.886    19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:28:39.886     19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:28:39.886    19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:28:39.886    19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:28:39.886    19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:28:39.886    19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:28:39.886    19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:28:39.886    19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:28:39.886    19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:28:39.886     19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob
00:28:39.886     19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:28:39.886     19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:28:39.886     19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:28:39.886      19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:39.886      19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:39.886      19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:39.886      19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH
00:28:39.886      19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:39.886    19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0
00:28:39.886    19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:28:39.886    19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:28:39.886    19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:28:39.886    19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:28:39.886    19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:28:39.886    19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:28:39.886  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:28:39.886    19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:28:39.886    19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:28:39.886    19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0
00:28:39.886   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1
00:28:39.886   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock
00:28:39.886   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2
00:28:39.886   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]]
00:28:39.886   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit
00:28:39.886   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:28:39.886   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:28:39.886   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs
00:28:39.886   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no
00:28:39.886   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns
00:28:39.886   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:39.886   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:28:39.886    19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:39.886   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:28:39.886   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:28:39.886   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:28:39.886   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:28:39.886   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:28:39.886   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@460 -- # nvmf_veth_init
00:28:39.886   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:28:39.886   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:28:39.886   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:28:39.886   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:28:39.886   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:28:39.886   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:28:39.886   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:28:39.886   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:28:39.886   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:28:39.886   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:28:39.886   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:28:39.886   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:28:39.886   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:28:39.886   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:28:39.886   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:28:39.886   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:28:39.886   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:28:39.886  Cannot find device "nvmf_init_br"
00:28:39.886   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true
00:28:39.887   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:28:39.887  Cannot find device "nvmf_init_br2"
00:28:39.887   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true
00:28:39.887   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:28:39.887  Cannot find device "nvmf_tgt_br"
00:28:39.887   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true
00:28:39.887   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:28:39.887  Cannot find device "nvmf_tgt_br2"
00:28:39.887   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true
00:28:39.887   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:28:39.887  Cannot find device "nvmf_init_br"
00:28:39.887   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true
00:28:39.887   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:28:39.887  Cannot find device "nvmf_init_br2"
00:28:39.887   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true
00:28:39.887   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:28:39.887  Cannot find device "nvmf_tgt_br"
00:28:39.887   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true
00:28:39.887   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:28:39.887  Cannot find device "nvmf_tgt_br2"
00:28:39.887   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true
00:28:39.887   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:28:39.887  Cannot find device "nvmf_br"
00:28:39.887   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true
00:28:39.887   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:28:39.887  Cannot find device "nvmf_init_if"
00:28:39.887   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true
00:28:39.887   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:28:39.887  Cannot find device "nvmf_init_if2"
00:28:39.887   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true
00:28:39.887   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:28:39.887  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:28:39.887   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true
00:28:39.887   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:28:39.887  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:28:39.887   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true
00:28:39.887   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:28:39.887   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:28:39.887   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:28:39.887   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:28:39.887   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:28:39.887   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:28:39.887   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:28:39.887   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:28:39.887   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:28:39.887   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:28:39.887   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:28:39.887   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:28:40.153   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:28:40.153   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:28:40.153   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:28:40.153   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:28:40.153   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:28:40.153   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:28:40.153   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:28:40.153   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:28:40.153   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:28:40.153   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:28:40.153   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:28:40.153   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:28:40.153   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:28:40.153   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:28:40.153   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:28:40.153   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:28:40.153   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:28:40.153   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:28:40.153   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:28:40.153   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:28:40.153   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:28:40.153  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:28:40.153  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms
00:28:40.153  
00:28:40.153  --- 10.0.0.3 ping statistics ---
00:28:40.153  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:40.153  rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms
00:28:40.153   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:28:40.153  PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:28:40.153  64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms
00:28:40.153  
00:28:40.153  --- 10.0.0.4 ping statistics ---
00:28:40.153  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:40.153  rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms
00:28:40.153   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:28:40.153  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:28:40.153  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms
00:28:40.153  
00:28:40.153  --- 10.0.0.1 ping statistics ---
00:28:40.153  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:40.153  rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms
00:28:40.153   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:28:40.153  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:28:40.153  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms
00:28:40.153  
00:28:40.153  --- 10.0.0.2 ping statistics ---
00:28:40.153  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:40.153  rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms
00:28:40.153   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:28:40.153   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@461 -- # return 0
00:28:40.153   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:28:40.153   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:28:40.153   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:28:40.153   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:28:40.153   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:28:40.153   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:28:40.153   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:28:40.153   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT
00:28:40.153   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]]
00:28:40.153   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest
00:28:40.153   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:28:40.153   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:28:40.153   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:28:40.153  ************************************
00:28:40.153  START TEST nvmf_digest_clean
00:28:40.153  ************************************
00:28:40.153   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest
00:28:40.153   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator
00:28:40.153   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]]
00:28:40.153   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false
00:28:40.153   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc")
00:28:40.153   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc
00:28:40.153   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:28:40.153   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable
00:28:40.153   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:28:40.153   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=115852
00:28:40.153   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
00:28:40.153   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 115852
00:28:40.153   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 115852 ']'
00:28:40.153   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:40.153   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:40.153   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:40.153  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:40.153   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:40.153   19:13:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:28:40.153  [2024-12-13 19:13:11.937663] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:28:40.153  [2024-12-13 19:13:11.938360] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:40.412  [2024-12-13 19:13:12.094399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:40.412  [2024-12-13 19:13:12.131331] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:28:40.412  [2024-12-13 19:13:12.131406] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:28:40.412  [2024-12-13 19:13:12.131421] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:40.412  [2024-12-13 19:13:12.131431] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:28:40.412  [2024-12-13 19:13:12.131440] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:28:40.412  [2024-12-13 19:13:12.131886] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:28:40.412   19:13:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:40.412   19:13:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0
00:28:40.412   19:13:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:28:40.412   19:13:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable
00:28:40.412   19:13:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:28:40.412   19:13:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:28:40.412   19:13:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]]
00:28:40.412   19:13:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config
00:28:40.412   19:13:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd
00:28:40.412   19:13:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:40.413   19:13:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:28:40.672  null0
00:28:40.672  [2024-12-13 19:13:12.352013] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:28:40.672  [2024-12-13 19:13:12.376171] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:28:40.672   19:13:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:40.672   19:13:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false
00:28:40.672   19:13:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:28:40.672   19:13:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:28:40.672   19:13:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread
00:28:40.672   19:13:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096
00:28:40.672   19:13:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128
00:28:40.672   19:13:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false
00:28:40.672   19:13:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=115884
00:28:40.672   19:13:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc
00:28:40.672   19:13:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 115884 /var/tmp/bperf.sock
00:28:40.672   19:13:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 115884 ']'
00:28:40.672   19:13:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:40.672   19:13:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:40.672  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:40.672   19:13:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:40.672   19:13:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:40.672   19:13:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:28:40.672  [2024-12-13 19:13:12.448542] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:28:40.672  [2024-12-13 19:13:12.448680] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115884 ]
00:28:40.930  [2024-12-13 19:13:12.604346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:40.930  [2024-12-13 19:13:12.640139] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:28:40.930   19:13:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:40.930   19:13:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0
00:28:40.930   19:13:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false
00:28:40.930   19:13:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init
00:28:40.930   19:13:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:28:41.498   19:13:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:41.498   19:13:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:41.757  nvme0n1
00:28:41.757   19:13:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests
00:28:41.757   19:13:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:41.757  Running I/O for 2 seconds...
00:28:44.064      21847.00 IOPS,    85.34 MiB/s
[2024-12-13T19:13:15.888Z]     22227.00 IOPS,    86.82 MiB/s
00:28:44.064                                                                                                  Latency(us)
00:28:44.064  
[2024-12-13T19:13:15.888Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:28:44.064  Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:28:44.064  	 nvme0n1             :       2.01   22232.86      86.85       0.00     0.00    5748.91    2859.75   18111.77
00:28:44.064  
[2024-12-13T19:13:15.888Z]  ===================================================================================================================
00:28:44.064  
[2024-12-13T19:13:15.888Z]  Total                       :              22232.86      86.85       0.00     0.00    5748.91    2859.75   18111.77
00:28:44.064  {
00:28:44.064    "results": [
00:28:44.064      {
00:28:44.064        "job": "nvme0n1",
00:28:44.064        "core_mask": "0x2",
00:28:44.064        "workload": "randread",
00:28:44.064        "status": "finished",
00:28:44.064        "queue_depth": 128,
00:28:44.064        "io_size": 4096,
00:28:44.064        "runtime": 2.007524,
00:28:44.064        "iops": 22232.859980752408,
00:28:44.064        "mibps": 86.8471092998141,
00:28:44.064        "io_failed": 0,
00:28:44.064        "io_timeout": 0,
00:28:44.064        "avg_latency_us": 5748.905938410838,
00:28:44.064        "min_latency_us": 2859.7527272727275,
00:28:44.064        "max_latency_us": 18111.767272727273
00:28:44.064      }
00:28:44.064    ],
00:28:44.064    "core_count": 1
00:28:44.064  }
00:28:44.064   19:13:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:28:44.064    19:13:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:28:44.064    19:13:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:28:44.064    19:13:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:28:44.064  			| select(.opcode=="crc32c")
00:28:44.064  			| "\(.module_name) \(.executed)"'
00:28:44.064    19:13:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:28:44.064   19:13:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:28:44.064   19:13:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:28:44.064   19:13:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:28:44.064   19:13:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:28:44.064   19:13:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 115884
00:28:44.064   19:13:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 115884 ']'
00:28:44.064   19:13:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 115884
00:28:44.064    19:13:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname
00:28:44.064   19:13:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:44.064    19:13:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 115884
00:28:44.322   19:13:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:28:44.322   19:13:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:28:44.322  killing process with pid 115884
00:28:44.322   19:13:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 115884'
00:28:44.322   19:13:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 115884
00:28:44.322  Received shutdown signal, test time was about 2.000000 seconds
00:28:44.322  
00:28:44.322                                                                                                  Latency(us)
00:28:44.322  
[2024-12-13T19:13:16.146Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:28:44.322  
[2024-12-13T19:13:16.146Z]  ===================================================================================================================
00:28:44.322  
[2024-12-13T19:13:16.146Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:28:44.322   19:13:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 115884
00:28:44.322   19:13:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false
00:28:44.322   19:13:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:28:44.322   19:13:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:28:44.322   19:13:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread
00:28:44.322   19:13:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072
00:28:44.322   19:13:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16
00:28:44.322   19:13:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false
00:28:44.322   19:13:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=115962
00:28:44.322   19:13:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 115962 /var/tmp/bperf.sock
00:28:44.322   19:13:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 115962 ']'
00:28:44.322   19:13:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc
00:28:44.322   19:13:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:44.322  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:44.322   19:13:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:44.322   19:13:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:44.322   19:13:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:44.322   19:13:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:28:44.322  [2024-12-13 19:13:16.140075] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:28:44.322  [2024-12-13 19:13:16.140201] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115962 ]
00:28:44.322  I/O size of 131072 is greater than zero copy threshold (65536).
00:28:44.322  Zero copy mechanism will not be used.
00:28:44.581  [2024-12-13 19:13:16.279539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:44.581  [2024-12-13 19:13:16.321625] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:28:45.517   19:13:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:45.517   19:13:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0
00:28:45.517   19:13:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false
00:28:45.517   19:13:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init
00:28:45.517   19:13:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:28:45.775   19:13:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:45.775   19:13:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:46.034  nvme0n1
00:28:46.034   19:13:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests
00:28:46.034   19:13:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:46.034  I/O size of 131072 is greater than zero copy threshold (65536).
00:28:46.034  Zero copy mechanism will not be used.
00:28:46.034  Running I/O for 2 seconds...
00:28:48.346       9296.00 IOPS,  1162.00 MiB/s
[2024-12-13T19:13:20.170Z]      9278.50 IOPS,  1159.81 MiB/s
00:28:48.346                                                                                                  Latency(us)
00:28:48.346  
[2024-12-13T19:13:20.170Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:28:48.346  Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:28:48.346  	 nvme0n1             :       2.00    9276.90    1159.61       0.00     0.00    1721.37     513.86    4706.68
00:28:48.346  
[2024-12-13T19:13:20.170Z]  ===================================================================================================================
00:28:48.346  
[2024-12-13T19:13:20.170Z]  Total                       :               9276.90    1159.61       0.00     0.00    1721.37     513.86    4706.68
00:28:48.346  {
00:28:48.346    "results": [
00:28:48.346      {
00:28:48.346        "job": "nvme0n1",
00:28:48.346        "core_mask": "0x2",
00:28:48.346        "workload": "randread",
00:28:48.346        "status": "finished",
00:28:48.346        "queue_depth": 16,
00:28:48.346        "io_size": 131072,
00:28:48.346        "runtime": 2.002609,
00:28:48.346        "iops": 9276.898286185671,
00:28:48.346        "mibps": 1159.612285773209,
00:28:48.346        "io_failed": 0,
00:28:48.346        "io_timeout": 0,
00:28:48.346        "avg_latency_us": 1721.368462795682,
00:28:48.346        "min_latency_us": 513.8618181818182,
00:28:48.346        "max_latency_us": 4706.676363636364
00:28:48.346      }
00:28:48.346    ],
00:28:48.346    "core_count": 1
00:28:48.346  }
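perform_tests prints the raw JSON block above in addition to the formatted table. If one wanted to post-process a captured result, the fields the table is built from can be pulled out with the same jq already used elsewhere in this run; a minimal illustrative extraction, assuming the block has been saved to results.json (a hypothetical file name, not something this harness does), would be:

    # results.json: hypothetical capture of the JSON block printed above
    jq -r '.results[]
           | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg \(.avg_latency_us) us (min \(.min_latency_us), max \(.max_latency_us) us)"' \
       results.json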
00:28:48.346   19:13:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:28:48.347    19:13:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:28:48.347    19:13:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:28:48.347    19:13:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:28:48.347    19:13:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:28:48.347  			| select(.opcode=="crc32c")
00:28:48.347  			| "\(.module_name) \(.executed)"'
00:28:48.347   19:13:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:28:48.347   19:13:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:28:48.347   19:13:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:28:48.347   19:13:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:28:48.347   19:13:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 115962
00:28:48.347   19:13:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 115962 ']'
00:28:48.347   19:13:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 115962
00:28:48.347    19:13:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname
00:28:48.347   19:13:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:48.347    19:13:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 115962
00:28:48.347   19:13:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:28:48.347   19:13:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:28:48.347   19:13:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 115962'
00:28:48.347  killing process with pid 115962
00:28:48.347  Received shutdown signal, test time was about 2.000000 seconds
00:28:48.347  
00:28:48.347                                                                                                  Latency(us)
00:28:48.347  
[2024-12-13T19:13:20.171Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:28:48.347  
[2024-12-13T19:13:20.171Z]  ===================================================================================================================
00:28:48.347  
[2024-12-13T19:13:20.171Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:28:48.347   19:13:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 115962
00:28:48.347   19:13:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 115962
00:28:48.605   19:13:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false
00:28:48.605   19:13:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:28:48.605   19:13:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:28:48.605   19:13:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite
00:28:48.605   19:13:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096
00:28:48.605   19:13:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128
00:28:48.605   19:13:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false
00:28:48.605   19:13:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=116047
00:28:48.605   19:13:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 116047 /var/tmp/bperf.sock
00:28:48.605   19:13:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc
00:28:48.606   19:13:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 116047 ']'
00:28:48.606   19:13:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:48.606   19:13:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:48.606  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:48.606   19:13:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:48.606   19:13:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:48.606   19:13:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:28:48.606  [2024-12-13 19:13:20.365435] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:28:48.606  [2024-12-13 19:13:20.365544] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116047 ]
00:28:48.864  [2024-12-13 19:13:20.514534] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:48.864  [2024-12-13 19:13:20.546787] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:28:48.864   19:13:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:48.864   19:13:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0
00:28:48.864   19:13:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false
00:28:48.864   19:13:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init
00:28:48.864   19:13:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:28:49.123   19:13:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:49.123   19:13:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:49.690  nvme0n1
00:28:49.690   19:13:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests
00:28:49.690   19:13:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:49.690  Running I/O for 2 seconds...
00:28:51.562      27168.00 IOPS,   106.12 MiB/s
[2024-12-13T19:13:23.386Z]     27377.50 IOPS,   106.94 MiB/s
00:28:51.562                                                                                                  Latency(us)
00:28:51.562  
[2024-12-13T19:13:23.386Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:28:51.562  Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:28:51.562  	 nvme0n1             :       2.01   27387.45     106.98       0.00     0.00    4667.60    2517.18   10843.23
00:28:51.562  
[2024-12-13T19:13:23.386Z]  ===================================================================================================================
00:28:51.562  
[2024-12-13T19:13:23.386Z]  Total                       :              27387.45     106.98       0.00     0.00    4667.60    2517.18   10843.23
00:28:51.562  {
00:28:51.562    "results": [
00:28:51.562      {
00:28:51.562        "job": "nvme0n1",
00:28:51.562        "core_mask": "0x2",
00:28:51.562        "workload": "randwrite",
00:28:51.562        "status": "finished",
00:28:51.562        "queue_depth": 128,
00:28:51.562        "io_size": 4096,
00:28:51.562        "runtime": 2.007343,
00:28:51.562        "iops": 27387.446988382155,
00:28:51.562        "mibps": 106.98221479836779,
00:28:51.562        "io_failed": 0,
00:28:51.562        "io_timeout": 0,
00:28:51.562        "avg_latency_us": 4667.601210710128,
00:28:51.562        "min_latency_us": 2517.1781818181817,
00:28:51.562        "max_latency_us": 10843.229090909092
00:28:51.562      }
00:28:51.562    ],
00:28:51.562    "core_count": 1
00:28:51.562  }
00:28:51.821   19:13:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:28:51.821    19:13:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:28:51.821    19:13:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:28:51.821    19:13:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:28:51.821    19:13:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:28:51.821  			| select(.opcode=="crc32c")
00:28:51.821  			| "\(.module_name) \(.executed)"'
00:28:52.080   19:13:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:28:52.080   19:13:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:28:52.080   19:13:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:28:52.080   19:13:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:28:52.080   19:13:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 116047
00:28:52.080   19:13:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 116047 ']'
00:28:52.080   19:13:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 116047
00:28:52.080    19:13:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname
00:28:52.080   19:13:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:52.080    19:13:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 116047
00:28:52.080   19:13:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:28:52.080   19:13:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:28:52.080  killing process with pid 116047
00:28:52.080   19:13:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 116047'
00:28:52.080  Received shutdown signal, test time was about 2.000000 seconds
00:28:52.080  
00:28:52.080                                                                                                  Latency(us)
00:28:52.080  
[2024-12-13T19:13:23.904Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:28:52.080  
[2024-12-13T19:13:23.904Z]  ===================================================================================================================
00:28:52.080  
[2024-12-13T19:13:23.904Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:28:52.080   19:13:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 116047
00:28:52.080   19:13:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 116047
00:28:52.080   19:13:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false
00:28:52.080   19:13:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:28:52.080   19:13:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:28:52.080   19:13:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite
00:28:52.080   19:13:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072
00:28:52.080   19:13:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16
00:28:52.080   19:13:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false
00:28:52.080   19:13:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=116125
00:28:52.080   19:13:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc
00:28:52.080   19:13:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 116125 /var/tmp/bperf.sock
00:28:52.080   19:13:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 116125 ']'
00:28:52.080   19:13:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:52.080   19:13:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:52.080  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:52.080   19:13:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:52.080   19:13:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:52.080   19:13:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:28:52.338  [2024-12-13 19:13:23.946615] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:28:52.338  [2024-12-13 19:13:23.946730] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116125 ]
00:28:52.338  I/O size of 131072 is greater than zero copy threshold (65536).
00:28:52.338  Zero copy mechanism will not be used.
00:28:52.338  [2024-12-13 19:13:24.090190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:52.338  [2024-12-13 19:13:24.132097] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:28:53.274   19:13:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:53.274   19:13:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0
00:28:53.274   19:13:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false
00:28:53.274   19:13:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init
00:28:53.274   19:13:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:28:53.531   19:13:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:53.531   19:13:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:53.789  nvme0n1
00:28:53.789   19:13:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests
00:28:53.789   19:13:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:54.048  I/O size of 131072 is greater than zero copy threshold (65536).
00:28:54.048  Zero copy mechanism will not be used.
00:28:54.048  Running I/O for 2 seconds...
00:28:55.919       7892.00 IOPS,   986.50 MiB/s
[2024-12-13T19:13:27.743Z]      7683.00 IOPS,   960.38 MiB/s
00:28:55.919                                                                                                  Latency(us)
00:28:55.919  
[2024-12-13T19:13:27.743Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:28:55.919  Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:28:55.919  	 nvme0n1             :       2.00    7680.97     960.12       0.00     0.00    2078.31    1385.19   11975.21
00:28:55.919  
[2024-12-13T19:13:27.743Z]  ===================================================================================================================
00:28:55.919  
[2024-12-13T19:13:27.743Z]  Total                       :               7680.97     960.12       0.00     0.00    2078.31    1385.19   11975.21
00:28:55.919  {
00:28:55.919    "results": [
00:28:55.919      {
00:28:55.919        "job": "nvme0n1",
00:28:55.919        "core_mask": "0x2",
00:28:55.919        "workload": "randwrite",
00:28:55.919        "status": "finished",
00:28:55.919        "queue_depth": 16,
00:28:55.919        "io_size": 131072,
00:28:55.919        "runtime": 2.003394,
00:28:55.919        "iops": 7680.9654017132925,
00:28:55.919        "mibps": 960.1206752141616,
00:28:55.919        "io_failed": 0,
00:28:55.919        "io_timeout": 0,
00:28:55.919        "avg_latency_us": 2078.3069348016165,
00:28:55.919        "min_latency_us": 1385.1927272727273,
00:28:55.919        "max_latency_us": 11975.214545454546
00:28:55.919      }
00:28:55.919    ],
00:28:55.919    "core_count": 1
00:28:55.919  }
00:28:55.919   19:13:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:28:55.919    19:13:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:28:55.919    19:13:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:28:55.919    19:13:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:28:55.919  			| select(.opcode=="crc32c")
00:28:55.919  			| "\(.module_name) \(.executed)"'
00:28:55.919    19:13:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:28:56.178   19:13:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:28:56.178   19:13:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:28:56.178   19:13:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:28:56.178   19:13:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:28:56.178   19:13:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 116125
00:28:56.178   19:13:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 116125 ']'
00:28:56.178   19:13:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 116125
00:28:56.178    19:13:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname
00:28:56.178   19:13:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:56.178    19:13:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 116125
00:28:56.178   19:13:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:28:56.178  killing process with pid 116125
00:28:56.178  Received shutdown signal, test time was about 2.000000 seconds
00:28:56.178  
00:28:56.178                                                                                                  Latency(us)
00:28:56.178  
[2024-12-13T19:13:28.002Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:28:56.178  
[2024-12-13T19:13:28.002Z]  ===================================================================================================================
00:28:56.178  
[2024-12-13T19:13:28.002Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:28:56.178   19:13:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:28:56.178   19:13:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 116125'
00:28:56.178   19:13:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 116125
00:28:56.178   19:13:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 116125
00:28:56.438   19:13:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 115852
00:28:56.438   19:13:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 115852 ']'
00:28:56.438   19:13:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 115852
00:28:56.438    19:13:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname
00:28:56.438   19:13:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:56.438    19:13:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 115852
00:28:56.438  killing process with pid 115852
00:28:56.438   19:13:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:28:56.438   19:13:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:28:56.438   19:13:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 115852'
00:28:56.438   19:13:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 115852
00:28:56.438   19:13:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 115852
00:28:56.697  ************************************
00:28:56.697  END TEST nvmf_digest_clean
00:28:56.697  ************************************
00:28:56.697  
00:28:56.697  real	0m16.583s
00:28:56.697  user	0m31.665s
00:28:56.697  sys	0m4.590s
00:28:56.697   19:13:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable
00:28:56.697   19:13:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:28:56.697   19:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error
00:28:56.697   19:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:28:56.697   19:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:28:56.697   19:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:28:56.697  ************************************
00:28:56.697  START TEST nvmf_digest_error
00:28:56.697  ************************************
00:28:56.697   19:13:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error
00:28:56.697   19:13:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc
00:28:56.697   19:13:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:28:56.697   19:13:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable
00:28:56.697   19:13:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:56.697   19:13:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=116237
00:28:56.697   19:13:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
00:28:56.697   19:13:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 116237
00:28:56.697   19:13:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 116237 ']'
00:28:56.697   19:13:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:56.697   19:13:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:56.697  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:56.697   19:13:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:56.697   19:13:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:56.697   19:13:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:56.956  [2024-12-13 19:13:28.563922] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:28:56.956  [2024-12-13 19:13:28.564006] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:56.956  [2024-12-13 19:13:28.703307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:56.956  [2024-12-13 19:13:28.739508] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:28:56.956  [2024-12-13 19:13:28.739569] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:28:56.956  [2024-12-13 19:13:28.739580] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:56.956  [2024-12-13 19:13:28.739587] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:28:56.956  [2024-12-13 19:13:28.739593] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:28:56.956  [2024-12-13 19:13:28.739947] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:28:57.923   19:13:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:57.923   19:13:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:28:57.923   19:13:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:28:57.923   19:13:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable
00:28:57.923   19:13:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:57.923   19:13:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:28:57.924   19:13:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error
00:28:57.924   19:13:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:57.924   19:13:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:57.924  [2024-12-13 19:13:29.472537] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error
00:28:57.924   19:13:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:57.924   19:13:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config
00:28:57.924   19:13:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd
00:28:57.924   19:13:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:57.924   19:13:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:57.924  null0
00:28:57.924  [2024-12-13 19:13:29.625020] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:28:57.924  [2024-12-13 19:13:29.649265] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
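The target-side setup for the error test is collapsed inside the rpc_cmd call at host/digest.sh@43, so only its effects are visible here: the null0 bdev above and the TCP listener on 10.0.0.3 port 4420. A plausible reconstruction of that hidden sequence, assuming standard SPDK RPC names and placeholder bdev sizes (none of these commands appear verbatim in this log, so treat them as an assumption), would look like:

    # assumed reconstruction of the collapsed target configuration; only the null0
    # bdev and the 10.0.0.3:4420 TCP listener are confirmed by the notices above,
    # and the null bdev size/block size here are placeholders
    rpc.py nvmf_create_transport -t tcp
    rpc.py bdev_null_create null0 1000 512
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420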
00:28:57.924   19:13:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:57.924   19:13:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128
00:28:57.924   19:13:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:57.924   19:13:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:28:57.924   19:13:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:28:57.924   19:13:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:28:57.924   19:13:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=116281
00:28:57.924  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:57.924   19:13:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z
00:28:57.924   19:13:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 116281 /var/tmp/bperf.sock
00:28:57.924   19:13:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 116281 ']'
00:28:57.924   19:13:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:57.924   19:13:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:57.924   19:13:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:57.924   19:13:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:57.924   19:13:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:57.924  [2024-12-13 19:13:29.705405] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:28:57.924  [2024-12-13 19:13:29.705489] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116281 ]
00:28:58.183  [2024-12-13 19:13:29.843340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:58.183  [2024-12-13 19:13:29.880995] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:28:58.183   19:13:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:58.183   19:13:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:28:58.183   19:13:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:58.183   19:13:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:58.750   19:13:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:58.750   19:13:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:58.750   19:13:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:58.750   19:13:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:58.750   19:13:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:58.750   19:13:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:58.750  nvme0n1
00:28:59.009   19:13:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:28:59.009   19:13:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:59.009   19:13:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:59.009   19:13:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
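The error variant differs from the clean runs above only in how crc32c is serviced. The target assigned the crc32c opcode to the error accel module earlier (accel_assign_opc), injection is kept disabled while the controller attaches with --ddgst, and it is then switched to corrupt with an interval of 256 so that data digest validation starts failing once the workload runs; since bdev_nvme_set_options was given --bdev-retry-count -1, the failed reads are retried indefinitely. The two injection RPCs issued through the harness's rpc_cmd wrapper are simply:

    # keep the error module quiet while the controller attaches...
    rpc_cmd accel_error_inject_error -o crc32c -t disable
    # ...then corrupt roughly every 256th crc32c so data digest checks fail during the run
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256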
00:28:59.009   19:13:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:28:59.009   19:13:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:59.009  Running I/O for 2 seconds...
00:28:59.009  [2024-12-13 19:13:30.699856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.009  [2024-12-13 19:13:30.699921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.009  [2024-12-13 19:13:30.699936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.009  [2024-12-13 19:13:30.712023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.009  [2024-12-13 19:13:30.712064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:6063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.009  [2024-12-13 19:13:30.712092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.009  [2024-12-13 19:13:30.723164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.009  [2024-12-13 19:13:30.723204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:10500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.009  [2024-12-13 19:13:30.723258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.009  [2024-12-13 19:13:30.734517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.009  [2024-12-13 19:13:30.734556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:2121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.009  [2024-12-13 19:13:30.734584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.009  [2024-12-13 19:13:30.746614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.009  [2024-12-13 19:13:30.746654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:18740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.009  [2024-12-13 19:13:30.746681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.009  [2024-12-13 19:13:30.756497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.009  [2024-12-13 19:13:30.756536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:19534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.009  [2024-12-13 19:13:30.756563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.009  [2024-12-13 19:13:30.767833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.009  [2024-12-13 19:13:30.767871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:18654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.009  [2024-12-13 19:13:30.767900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.009  [2024-12-13 19:13:30.779102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.009  [2024-12-13 19:13:30.779137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.009  [2024-12-13 19:13:30.779165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.009  [2024-12-13 19:13:30.788902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.009  [2024-12-13 19:13:30.788938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:10439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.009  [2024-12-13 19:13:30.788966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.009  [2024-12-13 19:13:30.801662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.009  [2024-12-13 19:13:30.801698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:11863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.009  [2024-12-13 19:13:30.801745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.009  [2024-12-13 19:13:30.812860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.009  [2024-12-13 19:13:30.812894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.009  [2024-12-13 19:13:30.812921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.009  [2024-12-13 19:13:30.822666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.009  [2024-12-13 19:13:30.822699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:5080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.009  [2024-12-13 19:13:30.822727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.268  [2024-12-13 19:13:30.836300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.268  [2024-12-13 19:13:30.836338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:6365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.268  [2024-12-13 19:13:30.836365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.268  [2024-12-13 19:13:30.846339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.268  [2024-12-13 19:13:30.846372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:7063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.268  [2024-12-13 19:13:30.846400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.268  [2024-12-13 19:13:30.857787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.269  [2024-12-13 19:13:30.857837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:9179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.269  [2024-12-13 19:13:30.857864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.269  [2024-12-13 19:13:30.869606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.269  [2024-12-13 19:13:30.869643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:24547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.269  [2024-12-13 19:13:30.869672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.269  [2024-12-13 19:13:30.879650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.269  [2024-12-13 19:13:30.879704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.269  [2024-12-13 19:13:30.879732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.269  [2024-12-13 19:13:30.891576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.269  [2024-12-13 19:13:30.891612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:22538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.269  [2024-12-13 19:13:30.891640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.269  [2024-12-13 19:13:30.902671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.269  [2024-12-13 19:13:30.902707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:22204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.269  [2024-12-13 19:13:30.902734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.269  [2024-12-13 19:13:30.916010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.269  [2024-12-13 19:13:30.916047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:23248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.269  [2024-12-13 19:13:30.916076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.269  [2024-12-13 19:13:30.931198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.269  [2024-12-13 19:13:30.931290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:6593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.269  [2024-12-13 19:13:30.931306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.269  [2024-12-13 19:13:30.944694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.269  [2024-12-13 19:13:30.944730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:20441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.269  [2024-12-13 19:13:30.944758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.269  [2024-12-13 19:13:30.958245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.269  [2024-12-13 19:13:30.958291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.269  [2024-12-13 19:13:30.958319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.269  [2024-12-13 19:13:30.971112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.269  [2024-12-13 19:13:30.971148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:5153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.269  [2024-12-13 19:13:30.971176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.269  [2024-12-13 19:13:30.982297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.269  [2024-12-13 19:13:30.982331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:2269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.269  [2024-12-13 19:13:30.982358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.269  [2024-12-13 19:13:30.993082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.269  [2024-12-13 19:13:30.993117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.269  [2024-12-13 19:13:30.993145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.269  [2024-12-13 19:13:31.002905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.269  [2024-12-13 19:13:31.002940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:6728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.269  [2024-12-13 19:13:31.002967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.269  [2024-12-13 19:13:31.014446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.269  [2024-12-13 19:13:31.014479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:4236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.269  [2024-12-13 19:13:31.014507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.269  [2024-12-13 19:13:31.025956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.269  [2024-12-13 19:13:31.025994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:18715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.269  [2024-12-13 19:13:31.026038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.269  [2024-12-13 19:13:31.036495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.269  [2024-12-13 19:13:31.036558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:20271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.269  [2024-12-13 19:13:31.036578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.269  [2024-12-13 19:13:31.048122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.269  [2024-12-13 19:13:31.048171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:21203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.269  [2024-12-13 19:13:31.048208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.269  [2024-12-13 19:13:31.059503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.269  [2024-12-13 19:13:31.059549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:25586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.269  [2024-12-13 19:13:31.059569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.269  [2024-12-13 19:13:31.069438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.269  [2024-12-13 19:13:31.069486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:8537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.269  [2024-12-13 19:13:31.069522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.269  [2024-12-13 19:13:31.081752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.269  [2024-12-13 19:13:31.081800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:19964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.269  [2024-12-13 19:13:31.081835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.528  [2024-12-13 19:13:31.093223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.528  [2024-12-13 19:13:31.093299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.528  [2024-12-13 19:13:31.093334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.528  [2024-12-13 19:13:31.105462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.528  [2024-12-13 19:13:31.105510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:12136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.528  [2024-12-13 19:13:31.105546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.528  [2024-12-13 19:13:31.117630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.528  [2024-12-13 19:13:31.117678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:19238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.528  [2024-12-13 19:13:31.117738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.528  [2024-12-13 19:13:31.128943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.528  [2024-12-13 19:13:31.128991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:10932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.528  [2024-12-13 19:13:31.129027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.528  [2024-12-13 19:13:31.139070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.528  [2024-12-13 19:13:31.139123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:19911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.529  [2024-12-13 19:13:31.139144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.529  [2024-12-13 19:13:31.151029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.529  [2024-12-13 19:13:31.151077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.529  [2024-12-13 19:13:31.151113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.529  [2024-12-13 19:13:31.162204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.529  [2024-12-13 19:13:31.162268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:1220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.529  [2024-12-13 19:13:31.162304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.529  [2024-12-13 19:13:31.175013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.529  [2024-12-13 19:13:31.175061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:19571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.529  [2024-12-13 19:13:31.175096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.529  [2024-12-13 19:13:31.187524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.529  [2024-12-13 19:13:31.187586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.529  [2024-12-13 19:13:31.187606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.529  [2024-12-13 19:13:31.197187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.529  [2024-12-13 19:13:31.197261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.529  [2024-12-13 19:13:31.197282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.529  [2024-12-13 19:13:31.208600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.529  [2024-12-13 19:13:31.208648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:24353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.529  [2024-12-13 19:13:31.208668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.529  [2024-12-13 19:13:31.219936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.529  [2024-12-13 19:13:31.219980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:12933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.529  [2024-12-13 19:13:31.220016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.529  [2024-12-13 19:13:31.230723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.529  [2024-12-13 19:13:31.230766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:4271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.529  [2024-12-13 19:13:31.230784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.529  [2024-12-13 19:13:31.241127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.529  [2024-12-13 19:13:31.241171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.529  [2024-12-13 19:13:31.241190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.529  [2024-12-13 19:13:31.253654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.529  [2024-12-13 19:13:31.253701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:7132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.529  [2024-12-13 19:13:31.253745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.529  [2024-12-13 19:13:31.265556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.529  [2024-12-13 19:13:31.265599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:12901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.529  [2024-12-13 19:13:31.265618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.529  [2024-12-13 19:13:31.277404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.529  [2024-12-13 19:13:31.277451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.529  [2024-12-13 19:13:31.277472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.529  [2024-12-13 19:13:31.287072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.529  [2024-12-13 19:13:31.287116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:21747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.529  [2024-12-13 19:13:31.287135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.529  [2024-12-13 19:13:31.299469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.529  [2024-12-13 19:13:31.299509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:12101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.529  [2024-12-13 19:13:31.299527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.529  [2024-12-13 19:13:31.309332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.529  [2024-12-13 19:13:31.309374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:23998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.529  [2024-12-13 19:13:31.309393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.529  [2024-12-13 19:13:31.319663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.529  [2024-12-13 19:13:31.319706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:11914 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.529  [2024-12-13 19:13:31.319725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.529  [2024-12-13 19:13:31.331497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.529  [2024-12-13 19:13:31.331544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:20076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.529  [2024-12-13 19:13:31.331563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.529  [2024-12-13 19:13:31.343619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.529  [2024-12-13 19:13:31.343661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:4699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.529  [2024-12-13 19:13:31.343679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.788  [2024-12-13 19:13:31.356036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.788  [2024-12-13 19:13:31.356078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:11371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.788  [2024-12-13 19:13:31.356097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.788  [2024-12-13 19:13:31.367728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.788  [2024-12-13 19:13:31.367772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:23906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.788  [2024-12-13 19:13:31.367792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.788  [2024-12-13 19:13:31.377569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.788  [2024-12-13 19:13:31.377616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:8026 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.788  [2024-12-13 19:13:31.377635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.788  [2024-12-13 19:13:31.389554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.788  [2024-12-13 19:13:31.389601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.788  [2024-12-13 19:13:31.389620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.788  [2024-12-13 19:13:31.403288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.788  [2024-12-13 19:13:31.403334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:7802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.788  [2024-12-13 19:13:31.403354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.788  [2024-12-13 19:13:31.413127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.788  [2024-12-13 19:13:31.413177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:7505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.789  [2024-12-13 19:13:31.413196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.789  [2024-12-13 19:13:31.425467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.789  [2024-12-13 19:13:31.425517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:9504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.789  [2024-12-13 19:13:31.425535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.789  [2024-12-13 19:13:31.437173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.789  [2024-12-13 19:13:31.437231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:1792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.789  [2024-12-13 19:13:31.437251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.789  [2024-12-13 19:13:31.449282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.789  [2024-12-13 19:13:31.449328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:17956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.789  [2024-12-13 19:13:31.449348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.789  [2024-12-13 19:13:31.459423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.789  [2024-12-13 19:13:31.459471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:14414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.789  [2024-12-13 19:13:31.459490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.789  [2024-12-13 19:13:31.469587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.789  [2024-12-13 19:13:31.469635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:4689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.789  [2024-12-13 19:13:31.469654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.789  [2024-12-13 19:13:31.481294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.789  [2024-12-13 19:13:31.481341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.789  [2024-12-13 19:13:31.481360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.789  [2024-12-13 19:13:31.493163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.789  [2024-12-13 19:13:31.493212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:8661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.789  [2024-12-13 19:13:31.493242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.789  [2024-12-13 19:13:31.503903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.789  [2024-12-13 19:13:31.503950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:12801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.789  [2024-12-13 19:13:31.503970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.789  [2024-12-13 19:13:31.515422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.789  [2024-12-13 19:13:31.515469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:5298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.789  [2024-12-13 19:13:31.515489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.789  [2024-12-13 19:13:31.526824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.789  [2024-12-13 19:13:31.526873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:23428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.789  [2024-12-13 19:13:31.526893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.789  [2024-12-13 19:13:31.539191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.789  [2024-12-13 19:13:31.539256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:5674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.789  [2024-12-13 19:13:31.539276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.789  [2024-12-13 19:13:31.549143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.789  [2024-12-13 19:13:31.549192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:24002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.789  [2024-12-13 19:13:31.549210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.789  [2024-12-13 19:13:31.562135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.789  [2024-12-13 19:13:31.562182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:11062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.789  [2024-12-13 19:13:31.562201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.789  [2024-12-13 19:13:31.574273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.789  [2024-12-13 19:13:31.574318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:6151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.789  [2024-12-13 19:13:31.574353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.789  [2024-12-13 19:13:31.585274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.789  [2024-12-13 19:13:31.585320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:19438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.789  [2024-12-13 19:13:31.585340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.789  [2024-12-13 19:13:31.594709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.789  [2024-12-13 19:13:31.594756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:13464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.789  [2024-12-13 19:13:31.594775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.789  [2024-12-13 19:13:31.606547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:28:59.789  [2024-12-13 19:13:31.606609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:7167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.789  [2024-12-13 19:13:31.606645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.048  [2024-12-13 19:13:31.619659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.048  [2024-12-13 19:13:31.619707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:24740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.048  [2024-12-13 19:13:31.619727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.048  [2024-12-13 19:13:31.632723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.048  [2024-12-13 19:13:31.632779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:22042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.048  [2024-12-13 19:13:31.632806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.048  [2024-12-13 19:13:31.643925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.048  [2024-12-13 19:13:31.643972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.048  [2024-12-13 19:13:31.643991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.048  [2024-12-13 19:13:31.655386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.048  [2024-12-13 19:13:31.655433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:10850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.048  [2024-12-13 19:13:31.655452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.048  [2024-12-13 19:13:31.665700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.048  [2024-12-13 19:13:31.665794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:5932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.049  [2024-12-13 19:13:31.665816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.049  [2024-12-13 19:13:31.677949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.049  [2024-12-13 19:13:31.678011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:25565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.049  [2024-12-13 19:13:31.678030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.049      22123.00 IOPS,    86.42 MiB/s
00:29:00.049  [2024-12-13 19:13:31.690425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.049  [2024-12-13 19:13:31.690473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:23566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.049  [2024-12-13 19:13:31.690509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.049  [2024-12-13 19:13:31.701105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.049  [2024-12-13 19:13:31.701152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.049  [2024-12-13 19:13:31.701171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.049  [2024-12-13 19:13:31.715738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.049  [2024-12-13 19:13:31.715792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:8733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.049  [2024-12-13 19:13:31.715811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.049  [2024-12-13 19:13:31.729350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.049  [2024-12-13 19:13:31.729398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:25232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.049  [2024-12-13 19:13:31.729419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.049  [2024-12-13 19:13:31.742288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.049  [2024-12-13 19:13:31.742335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:1327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.049  [2024-12-13 19:13:31.742372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.049  [2024-12-13 19:13:31.754142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.049  [2024-12-13 19:13:31.754189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:24372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.049  [2024-12-13 19:13:31.754213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.049  [2024-12-13 19:13:31.767152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.049  [2024-12-13 19:13:31.767201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:4775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.049  [2024-12-13 19:13:31.767233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.049  [2024-12-13 19:13:31.779329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.049  [2024-12-13 19:13:31.779374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.049  [2024-12-13 19:13:31.779394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.049  [2024-12-13 19:13:31.790366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.049  [2024-12-13 19:13:31.790411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:18019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.049  [2024-12-13 19:13:31.790446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.049  [2024-12-13 19:13:31.801912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.049  [2024-12-13 19:13:31.801959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:20536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.049  [2024-12-13 19:13:31.801995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.049  [2024-12-13 19:13:31.813798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.049  [2024-12-13 19:13:31.813862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:3754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.049  [2024-12-13 19:13:31.813881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.049  [2024-12-13 19:13:31.825856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.049  [2024-12-13 19:13:31.825917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:5928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.049  [2024-12-13 19:13:31.825937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.049  [2024-12-13 19:13:31.836931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.049  [2024-12-13 19:13:31.836983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.049  [2024-12-13 19:13:31.837002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.049  [2024-12-13 19:13:31.847106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.049  [2024-12-13 19:13:31.847153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:21509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.049  [2024-12-13 19:13:31.847172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.049  [2024-12-13 19:13:31.859406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.049  [2024-12-13 19:13:31.859467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:2740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.049  [2024-12-13 19:13:31.859489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.308  [2024-12-13 19:13:31.871894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.308  [2024-12-13 19:13:31.871940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:21682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.308  [2024-12-13 19:13:31.871958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.308  [2024-12-13 19:13:31.881661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.308  [2024-12-13 19:13:31.881707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:21648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.308  [2024-12-13 19:13:31.881752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.308  [2024-12-13 19:13:31.894633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.308  [2024-12-13 19:13:31.894680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:17886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.308  [2024-12-13 19:13:31.894699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.308  [2024-12-13 19:13:31.904851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.308  [2024-12-13 19:13:31.904899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:7 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.308  [2024-12-13 19:13:31.904917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.308  [2024-12-13 19:13:31.917391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.308  [2024-12-13 19:13:31.917438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:25212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.308  [2024-12-13 19:13:31.917458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.308  [2024-12-13 19:13:31.930236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.308  [2024-12-13 19:13:31.930293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:9097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.308  [2024-12-13 19:13:31.930312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.308  [2024-12-13 19:13:31.942323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.308  [2024-12-13 19:13:31.942372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:3637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.308  [2024-12-13 19:13:31.942393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.308  [2024-12-13 19:13:31.956375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.308  [2024-12-13 19:13:31.956426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:2704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.308  [2024-12-13 19:13:31.956464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.308  [2024-12-13 19:13:31.970239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.308  [2024-12-13 19:13:31.970309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:19153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.308  [2024-12-13 19:13:31.970345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.308  [2024-12-13 19:13:31.981819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.308  [2024-12-13 19:13:31.981881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:23186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.308  [2024-12-13 19:13:31.981902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.308  [2024-12-13 19:13:31.993296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.309  [2024-12-13 19:13:31.993344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:15533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.309  [2024-12-13 19:13:31.993363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.309  [2024-12-13 19:13:32.005824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.309  [2024-12-13 19:13:32.005887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.309  [2024-12-13 19:13:32.005908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.309  [2024-12-13 19:13:32.016135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.309  [2024-12-13 19:13:32.016183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.309  [2024-12-13 19:13:32.016202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.309  [2024-12-13 19:13:32.027692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.309  [2024-12-13 19:13:32.027740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:2020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.309  [2024-12-13 19:13:32.027759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.309  [2024-12-13 19:13:32.038573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.309  [2024-12-13 19:13:32.038622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:10780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.309  [2024-12-13 19:13:32.038641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.309  [2024-12-13 19:13:32.049175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.309  [2024-12-13 19:13:32.049234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:21622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.309  [2024-12-13 19:13:32.049256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.309  [2024-12-13 19:13:32.061789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.309  [2024-12-13 19:13:32.061850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:23863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.309  [2024-12-13 19:13:32.061885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.309  [2024-12-13 19:13:32.071334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.309  [2024-12-13 19:13:32.071380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:17254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.309  [2024-12-13 19:13:32.071400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.309  [2024-12-13 19:13:32.083307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.309  [2024-12-13 19:13:32.083354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:10883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.309  [2024-12-13 19:13:32.083373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.309  [2024-12-13 19:13:32.095644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.309  [2024-12-13 19:13:32.095691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.309  [2024-12-13 19:13:32.095709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.309  [2024-12-13 19:13:32.107271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.309  [2024-12-13 19:13:32.107318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:2768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.309  [2024-12-13 19:13:32.107337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.309  [2024-12-13 19:13:32.117046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.309  [2024-12-13 19:13:32.117095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:8367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.309  [2024-12-13 19:13:32.117114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.309  [2024-12-13 19:13:32.128131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.309  [2024-12-13 19:13:32.128188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.309  [2024-12-13 19:13:32.128208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.568  [2024-12-13 19:13:32.140833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.568  [2024-12-13 19:13:32.140880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:5538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.568  [2024-12-13 19:13:32.140899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.568  [2024-12-13 19:13:32.150699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.568  [2024-12-13 19:13:32.150745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:5632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.568  [2024-12-13 19:13:32.150765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.568  [2024-12-13 19:13:32.162148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.568  [2024-12-13 19:13:32.162194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:4359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.568  [2024-12-13 19:13:32.162214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.568  [2024-12-13 19:13:32.174130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.568  [2024-12-13 19:13:32.174193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:1085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.568  [2024-12-13 19:13:32.174213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.568  [2024-12-13 19:13:32.186949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.568  [2024-12-13 19:13:32.187012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:23383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.568  [2024-12-13 19:13:32.187033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.568  [2024-12-13 19:13:32.198543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.568  [2024-12-13 19:13:32.198622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.568  [2024-12-13 19:13:32.198658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.568  [2024-12-13 19:13:32.210593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.568  [2024-12-13 19:13:32.210642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:18790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.568  [2024-12-13 19:13:32.210678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.568  [2024-12-13 19:13:32.222178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.568  [2024-12-13 19:13:32.222249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:11141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.568  [2024-12-13 19:13:32.222270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.568  [2024-12-13 19:13:32.234417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.568  [2024-12-13 19:13:32.234463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.568  [2024-12-13 19:13:32.234498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.568  [2024-12-13 19:13:32.246749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.568  [2024-12-13 19:13:32.246795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:9802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.568  [2024-12-13 19:13:32.246830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.568  [2024-12-13 19:13:32.257135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.568  [2024-12-13 19:13:32.257180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.568  [2024-12-13 19:13:32.257215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.568  [2024-12-13 19:13:32.268878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.568  [2024-12-13 19:13:32.268942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:19892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.568  [2024-12-13 19:13:32.268962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.568  [2024-12-13 19:13:32.279576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.568  [2024-12-13 19:13:32.279624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:2381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.568  [2024-12-13 19:13:32.279660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.568  [2024-12-13 19:13:32.291081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.568  [2024-12-13 19:13:32.291129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:5225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.568  [2024-12-13 19:13:32.291165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.568  [2024-12-13 19:13:32.302172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.568  [2024-12-13 19:13:32.302257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:17539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.568  [2024-12-13 19:13:32.302284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.568  [2024-12-13 19:13:32.313585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.568  [2024-12-13 19:13:32.313633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:16575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.568  [2024-12-13 19:13:32.313668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.568  [2024-12-13 19:13:32.326261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.568  [2024-12-13 19:13:32.326319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:5565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.568  [2024-12-13 19:13:32.326357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.568  [2024-12-13 19:13:32.336615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.568  [2024-12-13 19:13:32.336663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.568  [2024-12-13 19:13:32.336697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.568  [2024-12-13 19:13:32.347626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.568  [2024-12-13 19:13:32.347691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:18848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.568  [2024-12-13 19:13:32.347710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.569  [2024-12-13 19:13:32.357281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.569  [2024-12-13 19:13:32.357325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:5575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.569  [2024-12-13 19:13:32.357360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.569  [2024-12-13 19:13:32.369352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.569  [2024-12-13 19:13:32.369399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:14783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.569  [2024-12-13 19:13:32.369434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.569  [2024-12-13 19:13:32.381186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.569  [2024-12-13 19:13:32.381245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:2454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.569  [2024-12-13 19:13:32.381282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.828  [2024-12-13 19:13:32.391380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.828  [2024-12-13 19:13:32.391427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.828  [2024-12-13 19:13:32.391462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.828  [2024-12-13 19:13:32.402747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.828  [2024-12-13 19:13:32.402794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:15137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.828  [2024-12-13 19:13:32.402830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.828  [2024-12-13 19:13:32.415270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.828  [2024-12-13 19:13:32.415315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:17955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.828  [2024-12-13 19:13:32.415350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.828  [2024-12-13 19:13:32.425896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.828  [2024-12-13 19:13:32.425961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:16588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.828  [2024-12-13 19:13:32.425982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.828  [2024-12-13 19:13:32.439091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.828  [2024-12-13 19:13:32.439139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:7057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.828  [2024-12-13 19:13:32.439158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.828  [2024-12-13 19:13:32.448979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.828  [2024-12-13 19:13:32.449024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:9808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.828  [2024-12-13 19:13:32.449043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.828  [2024-12-13 19:13:32.460521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.828  [2024-12-13 19:13:32.460568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.828  [2024-12-13 19:13:32.460587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.828  [2024-12-13 19:13:32.471873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.828  [2024-12-13 19:13:32.471921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:10282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.828  [2024-12-13 19:13:32.471940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.828  [2024-12-13 19:13:32.482198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.828  [2024-12-13 19:13:32.482258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:15074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.828  [2024-12-13 19:13:32.482277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.828  [2024-12-13 19:13:32.491394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.828  [2024-12-13 19:13:32.491440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:11784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.828  [2024-12-13 19:13:32.491459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.828  [2024-12-13 19:13:32.503949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.828  [2024-12-13 19:13:32.503997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:2239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.828  [2024-12-13 19:13:32.504016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.828  [2024-12-13 19:13:32.515595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.828  [2024-12-13 19:13:32.515642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:19516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.828  [2024-12-13 19:13:32.515661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.828  [2024-12-13 19:13:32.525334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.828  [2024-12-13 19:13:32.525381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:9278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.828  [2024-12-13 19:13:32.525400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.828  [2024-12-13 19:13:32.538469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.828  [2024-12-13 19:13:32.538516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:3115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.828  [2024-12-13 19:13:32.538536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.828  [2024-12-13 19:13:32.549697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.828  [2024-12-13 19:13:32.549763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:4103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.828  [2024-12-13 19:13:32.549782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.828  [2024-12-13 19:13:32.560170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.828  [2024-12-13 19:13:32.560228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:19391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.828  [2024-12-13 19:13:32.560265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.828  [2024-12-13 19:13:32.570729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.828  [2024-12-13 19:13:32.570776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:25117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.828  [2024-12-13 19:13:32.570795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.828  [2024-12-13 19:13:32.581148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.828  [2024-12-13 19:13:32.581195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:18384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.828  [2024-12-13 19:13:32.581214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.828  [2024-12-13 19:13:32.591210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.828  [2024-12-13 19:13:32.591268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:10703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.828  [2024-12-13 19:13:32.591287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.828  [2024-12-13 19:13:32.603314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.828  [2024-12-13 19:13:32.603360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.828  [2024-12-13 19:13:32.603379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.828  [2024-12-13 19:13:32.614430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.828  [2024-12-13 19:13:32.614476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:20645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.828  [2024-12-13 19:13:32.614495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.828  [2024-12-13 19:13:32.624252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.828  [2024-12-13 19:13:32.624298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:2870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.828  [2024-12-13 19:13:32.624333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.828  [2024-12-13 19:13:32.635586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.828  [2024-12-13 19:13:32.635634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:7765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.828  [2024-12-13 19:13:32.635653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:00.828  [2024-12-13 19:13:32.648113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:00.828  [2024-12-13 19:13:32.648161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:00.828  [2024-12-13 19:13:32.648180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:01.088  [2024-12-13 19:13:32.660593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:01.088  [2024-12-13 19:13:32.660641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:23471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:01.088  [2024-12-13 19:13:32.660660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:01.088  [2024-12-13 19:13:32.671541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:01.088  [2024-12-13 19:13:32.671589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:18900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:01.088  [2024-12-13 19:13:32.671609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:01.088  [2024-12-13 19:13:32.683207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b958d0)
00:29:01.088  [2024-12-13 19:13:32.683264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:23692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:01.088  [2024-12-13 19:13:32.683284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:01.088      22093.00 IOPS,    86.30 MiB/s
00:29:01.088                                                                                                  Latency(us)
00:29:01.088  
[2024-12-13T19:13:32.912Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:29:01.088  Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:29:01.088  	 nvme0n1             :       2.01   22113.16      86.38       0.00     0.00    5779.95    3098.07   17039.36
00:29:01.088  
[2024-12-13T19:13:32.912Z]  ===================================================================================================================
00:29:01.088  
[2024-12-13T19:13:32.912Z]  Total                       :              22113.16      86.38       0.00     0.00    5779.95    3098.07   17039.36
00:29:01.088  {
00:29:01.088    "results": [
00:29:01.088      {
00:29:01.088        "job": "nvme0n1",
00:29:01.088        "core_mask": "0x2",
00:29:01.088        "workload": "randread",
00:29:01.088        "status": "finished",
00:29:01.088        "queue_depth": 128,
00:29:01.088        "io_size": 4096,
00:29:01.088        "runtime": 2.006271,
00:29:01.088        "iops": 22113.16417373326,
00:29:01.088        "mibps": 86.37954755364555,
00:29:01.088        "io_failed": 0,
00:29:01.088        "io_timeout": 0,
00:29:01.088        "avg_latency_us": 5779.9543453377455,
00:29:01.088        "min_latency_us": 3098.0654545454545,
00:29:01.088        "max_latency_us": 17039.36
00:29:01.088      }
00:29:01.088    ],
00:29:01.088    "core_count": 1
00:29:01.088  }
00:29:01.088    19:13:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:01.088    19:13:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:01.088  			| .driver_specific
00:29:01.088  			| .nvme_error
00:29:01.088  			| .status_code
00:29:01.088  			| .command_transient_transport_error'
00:29:01.088    19:13:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:01.088    19:13:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:01.346   19:13:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 173 > 0 ))
00:29:01.346   19:13:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 116281
00:29:01.346   19:13:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 116281 ']'
00:29:01.346   19:13:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 116281
00:29:01.346    19:13:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:29:01.346   19:13:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:01.347    19:13:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 116281
00:29:01.347   19:13:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:29:01.347   19:13:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:29:01.347  killing process with pid 116281
00:29:01.347   19:13:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 116281'
00:29:01.347   19:13:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 116281
00:29:01.347  Received shutdown signal, test time was about 2.000000 seconds
00:29:01.347  
00:29:01.347                                                                                                  Latency(us)
00:29:01.347  
[2024-12-13T19:13:33.171Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:29:01.347  
[2024-12-13T19:13:33.171Z]  ===================================================================================================================
00:29:01.347  
[2024-12-13T19:13:33.171Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:29:01.347   19:13:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 116281
00:29:01.605   19:13:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:29:01.605   19:13:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:01.605   19:13:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:29:01.605   19:13:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:29:01.605   19:13:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:29:01.605   19:13:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=116354
00:29:01.605   19:13:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 116354 /var/tmp/bperf.sock
00:29:01.605   19:13:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:29:01.605   19:13:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 116354 ']'
00:29:01.605   19:13:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:01.605   19:13:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:01.605   19:13:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:01.605  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:01.605   19:13:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:01.605   19:13:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:01.605  [2024-12-13 19:13:33.298196] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:29:01.605  I/O size of 131072 is greater than zero copy threshold (65536).
00:29:01.605  Zero copy mechanism will not be used.
00:29:01.605  [2024-12-13 19:13:33.298304] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116354 ]
00:29:01.864  [2024-12-13 19:13:33.436985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:01.864  [2024-12-13 19:13:33.483636] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:29:02.431   19:13:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:02.431   19:13:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:29:02.431   19:13:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:02.431   19:13:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:02.689   19:13:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:02.689   19:13:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:02.689   19:13:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:02.689   19:13:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:02.689   19:13:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:02.689   19:13:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:02.947  nvme0n1
00:29:03.207   19:13:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:29:03.207   19:13:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:03.207   19:13:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:03.207   19:13:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:03.207   19:13:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:03.207   19:13:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:03.207  I/O size of 131072 is greater than zero copy threshold (65536).
00:29:03.207  Zero copy mechanism will not be used.
00:29:03.207  Running I/O for 2 seconds...
00:29:03.207  [2024-12-13 19:13:34.883796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.207  [2024-12-13 19:13:34.883873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.207  [2024-12-13 19:13:34.883895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.207  [2024-12-13 19:13:34.887368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.207  [2024-12-13 19:13:34.887414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.207  [2024-12-13 19:13:34.887435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.207  [2024-12-13 19:13:34.891775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.207  [2024-12-13 19:13:34.891822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.207  [2024-12-13 19:13:34.891857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:03.207  [2024-12-13 19:13:34.896541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.207  [2024-12-13 19:13:34.896588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.207  [2024-12-13 19:13:34.896625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.207  [2024-12-13 19:13:34.901068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.207  [2024-12-13 19:13:34.901131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.207  [2024-12-13 19:13:34.901152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.207  [2024-12-13 19:13:34.904391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.207  [2024-12-13 19:13:34.904438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.207  [2024-12-13 19:13:34.904473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.207  [2024-12-13 19:13:34.908476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.207  [2024-12-13 19:13:34.908526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.207  [2024-12-13 19:13:34.908561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:03.207  [2024-12-13 19:13:34.913085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.207  [2024-12-13 19:13:34.913130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.207  [2024-12-13 19:13:34.913166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.207  [2024-12-13 19:13:34.917180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.207  [2024-12-13 19:13:34.917252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.207  [2024-12-13 19:13:34.917275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.207  [2024-12-13 19:13:34.921240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.207  [2024-12-13 19:13:34.921283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.208  [2024-12-13 19:13:34.921319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.208  [2024-12-13 19:13:34.925411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.208  [2024-12-13 19:13:34.925457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.208  [2024-12-13 19:13:34.925492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:03.208  [2024-12-13 19:13:34.928735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.208  [2024-12-13 19:13:34.928778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.208  [2024-12-13 19:13:34.928812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.208  [2024-12-13 19:13:34.932871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.208  [2024-12-13 19:13:34.932917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.208  [2024-12-13 19:13:34.932952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.208  [2024-12-13 19:13:34.937475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.208  [2024-12-13 19:13:34.937521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.208  [2024-12-13 19:13:34.937556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.208  [2024-12-13 19:13:34.940806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.208  [2024-12-13 19:13:34.940851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.208  [2024-12-13 19:13:34.940886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:03.208  [2024-12-13 19:13:34.945706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.208  [2024-12-13 19:13:34.945790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.208  [2024-12-13 19:13:34.945812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.208  [2024-12-13 19:13:34.950754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.208  [2024-12-13 19:13:34.950805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.208  [2024-12-13 19:13:34.950841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.208  [2024-12-13 19:13:34.956347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.208  [2024-12-13 19:13:34.956449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.208  [2024-12-13 19:13:34.956470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.208  [2024-12-13 19:13:34.961909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.208  [2024-12-13 19:13:34.961980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.208  [2024-12-13 19:13:34.962002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:03.208  [2024-12-13 19:13:34.965028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.208  [2024-12-13 19:13:34.965073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.208  [2024-12-13 19:13:34.965109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.208  [2024-12-13 19:13:34.969293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.208  [2024-12-13 19:13:34.969337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.208  [2024-12-13 19:13:34.969373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.208  [2024-12-13 19:13:34.973760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.208  [2024-12-13 19:13:34.973805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.208  [2024-12-13 19:13:34.973842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.208  [2024-12-13 19:13:34.978477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.208  [2024-12-13 19:13:34.978528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.208  [2024-12-13 19:13:34.978564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:03.208  [2024-12-13 19:13:34.983292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.208  [2024-12-13 19:13:34.983338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.208  [2024-12-13 19:13:34.983357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.208  [2024-12-13 19:13:34.986916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.208  [2024-12-13 19:13:34.986961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.208  [2024-12-13 19:13:34.986996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.208  [2024-12-13 19:13:34.991119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.208  [2024-12-13 19:13:34.991169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.208  [2024-12-13 19:13:34.991205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.208  [2024-12-13 19:13:34.995863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.208  [2024-12-13 19:13:34.995912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.208  [2024-12-13 19:13:34.995949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:03.208  [2024-12-13 19:13:35.001037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.208  [2024-12-13 19:13:35.001100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.208  [2024-12-13 19:13:35.001121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.208  [2024-12-13 19:13:35.006162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.208  [2024-12-13 19:13:35.006213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.208  [2024-12-13 19:13:35.006273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.208  [2024-12-13 19:13:35.011166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.208  [2024-12-13 19:13:35.011208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.208  [2024-12-13 19:13:35.011268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.208  [2024-12-13 19:13:35.015159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.208  [2024-12-13 19:13:35.015209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.208  [2024-12-13 19:13:35.015274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:03.208  [2024-12-13 19:13:35.021173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.208  [2024-12-13 19:13:35.021262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.208  [2024-12-13 19:13:35.021286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.208  [2024-12-13 19:13:35.025429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.208  [2024-12-13 19:13:35.025479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.208  [2024-12-13 19:13:35.025501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.467  [2024-12-13 19:13:35.030770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.467  [2024-12-13 19:13:35.030815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.467  [2024-12-13 19:13:35.030851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.467  [2024-12-13 19:13:35.035720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.467  [2024-12-13 19:13:35.035771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.467  [2024-12-13 19:13:35.035807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:03.467  [2024-12-13 19:13:35.041410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.467  [2024-12-13 19:13:35.041454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.467  [2024-12-13 19:13:35.041490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.467  [2024-12-13 19:13:35.045101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.467  [2024-12-13 19:13:35.045179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.467  [2024-12-13 19:13:35.045210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.467  [2024-12-13 19:13:35.049698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.467  [2024-12-13 19:13:35.049787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.467  [2024-12-13 19:13:35.049812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.467  [2024-12-13 19:13:35.054684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.467  [2024-12-13 19:13:35.054727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.467  [2024-12-13 19:13:35.054746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:03.467  [2024-12-13 19:13:35.058827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.467  [2024-12-13 19:13:35.058874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.467  [2024-12-13 19:13:35.058894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.467  [2024-12-13 19:13:35.062380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.467  [2024-12-13 19:13:35.062426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.467  [2024-12-13 19:13:35.062447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.467  [2024-12-13 19:13:35.066377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.467  [2024-12-13 19:13:35.066423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.467  [2024-12-13 19:13:35.066442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.467  [2024-12-13 19:13:35.069993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.467  [2024-12-13 19:13:35.070054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.467  [2024-12-13 19:13:35.070089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:03.467  [2024-12-13 19:13:35.074057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.467  [2024-12-13 19:13:35.074134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.467  [2024-12-13 19:13:35.074154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.467  [2024-12-13 19:13:35.078563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.467  [2024-12-13 19:13:35.078625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.467  [2024-12-13 19:13:35.078645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.467  [2024-12-13 19:13:35.081624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.467  [2024-12-13 19:13:35.081666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.467  [2024-12-13 19:13:35.081700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.467  [2024-12-13 19:13:35.085804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.467  [2024-12-13 19:13:35.085850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.467  [2024-12-13 19:13:35.085885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:03.467  [2024-12-13 19:13:35.090785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.467  [2024-12-13 19:13:35.090838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.467  [2024-12-13 19:13:35.090857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.467  [2024-12-13 19:13:35.095079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.467  [2024-12-13 19:13:35.095125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.467  [2024-12-13 19:13:35.095143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.467  [2024-12-13 19:13:35.098481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.468  [2024-12-13 19:13:35.098525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.468  [2024-12-13 19:13:35.098544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.468  [2024-12-13 19:13:35.102734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.468  [2024-12-13 19:13:35.102779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.468  [2024-12-13 19:13:35.102799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:03.468  [2024-12-13 19:13:35.107297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.468  [2024-12-13 19:13:35.107343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.468  [2024-12-13 19:13:35.107362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.468  [2024-12-13 19:13:35.111681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.468  [2024-12-13 19:13:35.111727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.468  [2024-12-13 19:13:35.111745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.468  [2024-12-13 19:13:35.114996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.468  [2024-12-13 19:13:35.115041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.468  [2024-12-13 19:13:35.115060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.468  [2024-12-13 19:13:35.119262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.468  [2024-12-13 19:13:35.119308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.468  [2024-12-13 19:13:35.119327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:03.468  [2024-12-13 19:13:35.123614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.468  [2024-12-13 19:13:35.123655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.468  [2024-12-13 19:13:35.123674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.468  [2024-12-13 19:13:35.126593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.468  [2024-12-13 19:13:35.126638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.468  [2024-12-13 19:13:35.126656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.468  [2024-12-13 19:13:35.130718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.468  [2024-12-13 19:13:35.130764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.468  [2024-12-13 19:13:35.130782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.468  [2024-12-13 19:13:35.134918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.468  [2024-12-13 19:13:35.134979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.468  [2024-12-13 19:13:35.134999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:03.468  [2024-12-13 19:13:35.138307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.468  [2024-12-13 19:13:35.138351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.468  [2024-12-13 19:13:35.138369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.468  [2024-12-13 19:13:35.142516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.468  [2024-12-13 19:13:35.142561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.468  [2024-12-13 19:13:35.142580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.468  [2024-12-13 19:13:35.146671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.468  [2024-12-13 19:13:35.146717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.468  [2024-12-13 19:13:35.146736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.468  [2024-12-13 19:13:35.150967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.468  [2024-12-13 19:13:35.151012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.468  [2024-12-13 19:13:35.151031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:03.468  [2024-12-13 19:13:35.154854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.468  [2024-12-13 19:13:35.154899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.468  [2024-12-13 19:13:35.154918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.468  [2024-12-13 19:13:35.158159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.468  [2024-12-13 19:13:35.158203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.468  [2024-12-13 19:13:35.158234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.468  [2024-12-13 19:13:35.162369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.468  [2024-12-13 19:13:35.162412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.468  [2024-12-13 19:13:35.162430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.468  [2024-12-13 19:13:35.166595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.468  [2024-12-13 19:13:35.166641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.468  [2024-12-13 19:13:35.166661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:03.468  [2024-12-13 19:13:35.169607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.468  [2024-12-13 19:13:35.169649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.468  [2024-12-13 19:13:35.169668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.468  [2024-12-13 19:13:35.173704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.468  [2024-12-13 19:13:35.173786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.468  [2024-12-13 19:13:35.173807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.468  [2024-12-13 19:13:35.178624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.468  [2024-12-13 19:13:35.178670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.468  [2024-12-13 19:13:35.178688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.468  [2024-12-13 19:13:35.182041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.468  [2024-12-13 19:13:35.182089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.468  [2024-12-13 19:13:35.182108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:03.468  [2024-12-13 19:13:35.186883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.468  [2024-12-13 19:13:35.186945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.468  [2024-12-13 19:13:35.186964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.468  [2024-12-13 19:13:35.191081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.468  [2024-12-13 19:13:35.191135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.468  [2024-12-13 19:13:35.191155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.468  [2024-12-13 19:13:35.195337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.468  [2024-12-13 19:13:35.195383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.468  [2024-12-13 19:13:35.195403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.468  [2024-12-13 19:13:35.199330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.468  [2024-12-13 19:13:35.199377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.468  [2024-12-13 19:13:35.199395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:03.468  [2024-12-13 19:13:35.203382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.468  [2024-12-13 19:13:35.203429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.468  [2024-12-13 19:13:35.203449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.468  [2024-12-13 19:13:35.207512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.469  [2024-12-13 19:13:35.207558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.469  [2024-12-13 19:13:35.207577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.469  [2024-12-13 19:13:35.211806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.469  [2024-12-13 19:13:35.211852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.469  [2024-12-13 19:13:35.211871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.469  [2024-12-13 19:13:35.215184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.469  [2024-12-13 19:13:35.215257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.469  [2024-12-13 19:13:35.215278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:03.469  [2024-12-13 19:13:35.219503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.469  [2024-12-13 19:13:35.219549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.469  [2024-12-13 19:13:35.219567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.469  [2024-12-13 19:13:35.223235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.469  [2024-12-13 19:13:35.223279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.469  [2024-12-13 19:13:35.223314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.469  [2024-12-13 19:13:35.227867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.469  [2024-12-13 19:13:35.227913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.469  [2024-12-13 19:13:35.227931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.469  [2024-12-13 19:13:35.231581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.469  [2024-12-13 19:13:35.231643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.469  [2024-12-13 19:13:35.231662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:03.469  [2024-12-13 19:13:35.236283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.469  [2024-12-13 19:13:35.236326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.469  [2024-12-13 19:13:35.236345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.469  [2024-12-13 19:13:35.240367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.469  [2024-12-13 19:13:35.240413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.469  [2024-12-13 19:13:35.240432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.469  [2024-12-13 19:13:35.244526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.469  [2024-12-13 19:13:35.244569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.469  [2024-12-13 19:13:35.244589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.469  [2024-12-13 19:13:35.248681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.469  [2024-12-13 19:13:35.248727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.469  [2024-12-13 19:13:35.248746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:03.469  [2024-12-13 19:13:35.253010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.469  [2024-12-13 19:13:35.253056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.469  [2024-12-13 19:13:35.253075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.469  [2024-12-13 19:13:35.257550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.469  [2024-12-13 19:13:35.257592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.469  [2024-12-13 19:13:35.257612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.469  [2024-12-13 19:13:35.260736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.469  [2024-12-13 19:13:35.260796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.469  [2024-12-13 19:13:35.260815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.469  [2024-12-13 19:13:35.265159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.469  [2024-12-13 19:13:35.265201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.469  [2024-12-13 19:13:35.265232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:03.469  [2024-12-13 19:13:35.270043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.469  [2024-12-13 19:13:35.270104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.469  [2024-12-13 19:13:35.270124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.469  [2024-12-13 19:13:35.273607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.469  [2024-12-13 19:13:35.273651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.469  [2024-12-13 19:13:35.273670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.469  [2024-12-13 19:13:35.278049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.469  [2024-12-13 19:13:35.278095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.469  [2024-12-13 19:13:35.278113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.469  [2024-12-13 19:13:35.282823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.469  [2024-12-13 19:13:35.282868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.469  [2024-12-13 19:13:35.282888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:03.469  [2024-12-13 19:13:35.286584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.469  [2024-12-13 19:13:35.286627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.469  [2024-12-13 19:13:35.286646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.729  [2024-12-13 19:13:35.291047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.729  [2024-12-13 19:13:35.291108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.729  [2024-12-13 19:13:35.291136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.729  [2024-12-13 19:13:35.296014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.729  [2024-12-13 19:13:35.296057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.729  [2024-12-13 19:13:35.296077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.729  [2024-12-13 19:13:35.299301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.729  [2024-12-13 19:13:35.299344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.729  [2024-12-13 19:13:35.299362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:03.729  [2024-12-13 19:13:35.303631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.729  [2024-12-13 19:13:35.303674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.729  [2024-12-13 19:13:35.303694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.729  [2024-12-13 19:13:35.308088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.729  [2024-12-13 19:13:35.308134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.729  [2024-12-13 19:13:35.308152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.729  [2024-12-13 19:13:35.312885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.729  [2024-12-13 19:13:35.312928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.729  [2024-12-13 19:13:35.312947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.729  [2024-12-13 19:13:35.315910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.729  [2024-12-13 19:13:35.315953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.729  [2024-12-13 19:13:35.315972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:03.729  [2024-12-13 19:13:35.320004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.729  [2024-12-13 19:13:35.320064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.729  [2024-12-13 19:13:35.320084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.729  [2024-12-13 19:13:35.323877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.729  [2024-12-13 19:13:35.323920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.729  [2024-12-13 19:13:35.323937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.729  [2024-12-13 19:13:35.327626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.729  [2024-12-13 19:13:35.327671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.729  [2024-12-13 19:13:35.327689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.729  [2024-12-13 19:13:35.332126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.729  [2024-12-13 19:13:35.332184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.729  [2024-12-13 19:13:35.332204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:03.729  [2024-12-13 19:13:35.336435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.729  [2024-12-13 19:13:35.336478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.729  [2024-12-13 19:13:35.336495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.729  [2024-12-13 19:13:35.340246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.729  [2024-12-13 19:13:35.340290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.729  [2024-12-13 19:13:35.340326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.729  [2024-12-13 19:13:35.345237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.729  [2024-12-13 19:13:35.345285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.729  [2024-12-13 19:13:35.345304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.729  [2024-12-13 19:13:35.349884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.729  [2024-12-13 19:13:35.349931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.729  [2024-12-13 19:13:35.349952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:03.729  [2024-12-13 19:13:35.353970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.729  [2024-12-13 19:13:35.354053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.729  [2024-12-13 19:13:35.354071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.729  [2024-12-13 19:13:35.359086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.729  [2024-12-13 19:13:35.359149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.729  [2024-12-13 19:13:35.359169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.729  [2024-12-13 19:13:35.363237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.729  [2024-12-13 19:13:35.363290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.729  [2024-12-13 19:13:35.363309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.729  [2024-12-13 19:13:35.366997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.729  [2024-12-13 19:13:35.367041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.729  [2024-12-13 19:13:35.367076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:03.729  [2024-12-13 19:13:35.371891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.729  [2024-12-13 19:13:35.371937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.729  [2024-12-13 19:13:35.371955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.729  [2024-12-13 19:13:35.376661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.729  [2024-12-13 19:13:35.376708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.729  [2024-12-13 19:13:35.376727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.729  [2024-12-13 19:13:35.381980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.729  [2024-12-13 19:13:35.382031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.729  [2024-12-13 19:13:35.382057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.729  [2024-12-13 19:13:35.387094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.729  [2024-12-13 19:13:35.387138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.729  [2024-12-13 19:13:35.387157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:03.729  [2024-12-13 19:13:35.390790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.729  [2024-12-13 19:13:35.390840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.729  [2024-12-13 19:13:35.390860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.729  [2024-12-13 19:13:35.395717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.729  [2024-12-13 19:13:35.395786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.729  [2024-12-13 19:13:35.395832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.729  [2024-12-13 19:13:35.400889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.729  [2024-12-13 19:13:35.400950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.729  [2024-12-13 19:13:35.400970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.729  [2024-12-13 19:13:35.406216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.729  [2024-12-13 19:13:35.406275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.729  [2024-12-13 19:13:35.406294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:03.730  [2024-12-13 19:13:35.410175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.730  [2024-12-13 19:13:35.410239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.730  [2024-12-13 19:13:35.410281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.730  [2024-12-13 19:13:35.414312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.730  [2024-12-13 19:13:35.414357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.730  [2024-12-13 19:13:35.414376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.730  [2024-12-13 19:13:35.418875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.730  [2024-12-13 19:13:35.418937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.730  [2024-12-13 19:13:35.418957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.730  [2024-12-13 19:13:35.423703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.730  [2024-12-13 19:13:35.423750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.730  [2024-12-13 19:13:35.423768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:03.730  [2024-12-13 19:13:35.428316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.730  [2024-12-13 19:13:35.428384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.730  [2024-12-13 19:13:35.428404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.730  [2024-12-13 19:13:35.432825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.730  [2024-12-13 19:13:35.432882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.730  [2024-12-13 19:13:35.432907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.730  [2024-12-13 19:13:35.437957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.730  [2024-12-13 19:13:35.438018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.730  [2024-12-13 19:13:35.438100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.730  [2024-12-13 19:13:35.442573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.730  [2024-12-13 19:13:35.442616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.730  [2024-12-13 19:13:35.442634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:03.730  [2024-12-13 19:13:35.446428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.730  [2024-12-13 19:13:35.446482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.730  [2024-12-13 19:13:35.446501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.730  [2024-12-13 19:13:35.450841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.730  [2024-12-13 19:13:35.450887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.730  [2024-12-13 19:13:35.450906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.730  [2024-12-13 19:13:35.456234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.730  [2024-12-13 19:13:35.456274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.730  [2024-12-13 19:13:35.456292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.730  [2024-12-13 19:13:35.460616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.730  [2024-12-13 19:13:35.460662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.730  [2024-12-13 19:13:35.460681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:03.730  [2024-12-13 19:13:35.465438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.730  [2024-12-13 19:13:35.465483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.730  [2024-12-13 19:13:35.465502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.730  [2024-12-13 19:13:35.469856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.730  [2024-12-13 19:13:35.469911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.730  [2024-12-13 19:13:35.469931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.730  [2024-12-13 19:13:35.475410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.730  [2024-12-13 19:13:35.475451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.730  [2024-12-13 19:13:35.475470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.730  [2024-12-13 19:13:35.479910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.730  [2024-12-13 19:13:35.479956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.730  [2024-12-13 19:13:35.479975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:03.730  [2024-12-13 19:13:35.483874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.730  [2024-12-13 19:13:35.483919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.730  [2024-12-13 19:13:35.483947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.730  [2024-12-13 19:13:35.488814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.730  [2024-12-13 19:13:35.488876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.730  [2024-12-13 19:13:35.488903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.730  [2024-12-13 19:13:35.494123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.730  [2024-12-13 19:13:35.494181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.730  [2024-12-13 19:13:35.494201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.730  [2024-12-13 19:13:35.498211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.730  [2024-12-13 19:13:35.498272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.730  [2024-12-13 19:13:35.498293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:03.730  [2024-12-13 19:13:35.502747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.730  [2024-12-13 19:13:35.502793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.730  [2024-12-13 19:13:35.502812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.730  [2024-12-13 19:13:35.508373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.730  [2024-12-13 19:13:35.508423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.730  [2024-12-13 19:13:35.508441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.730  [2024-12-13 19:13:35.512953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.730  [2024-12-13 19:13:35.512999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.730  [2024-12-13 19:13:35.513018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.730  [2024-12-13 19:13:35.517515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.730  [2024-12-13 19:13:35.517559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.730  [2024-12-13 19:13:35.517582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:03.730  [2024-12-13 19:13:35.522113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.730  [2024-12-13 19:13:35.522158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.730  [2024-12-13 19:13:35.522177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.730  [2024-12-13 19:13:35.527088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.730  [2024-12-13 19:13:35.527134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.730  [2024-12-13 19:13:35.527152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.730  [2024-12-13 19:13:35.530558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.730  [2024-12-13 19:13:35.530602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.730  [2024-12-13 19:13:35.530620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.730  [2024-12-13 19:13:35.535846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.730  [2024-12-13 19:13:35.535894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.731  [2024-12-13 19:13:35.535913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:03.731  [2024-12-13 19:13:35.540183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.731  [2024-12-13 19:13:35.540236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.731  [2024-12-13 19:13:35.540256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.731  [2024-12-13 19:13:35.545425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.731  [2024-12-13 19:13:35.545471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.731  [2024-12-13 19:13:35.545497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.731  [2024-12-13 19:13:35.549755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.731  [2024-12-13 19:13:35.549798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.731  [2024-12-13 19:13:35.549817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.991  [2024-12-13 19:13:35.554710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.991  [2024-12-13 19:13:35.554769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.991  [2024-12-13 19:13:35.554788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:03.991  [2024-12-13 19:13:35.559151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.991  [2024-12-13 19:13:35.559197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.991  [2024-12-13 19:13:35.559250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.991  [2024-12-13 19:13:35.563966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.991  [2024-12-13 19:13:35.564011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.991  [2024-12-13 19:13:35.564029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.991  [2024-12-13 19:13:35.567651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.991  [2024-12-13 19:13:35.567706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.991  [2024-12-13 19:13:35.567725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.991  [2024-12-13 19:13:35.571852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.991  [2024-12-13 19:13:35.571913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.991  [2024-12-13 19:13:35.571933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:03.991  [2024-12-13 19:13:35.576057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.991  [2024-12-13 19:13:35.576103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.991  [2024-12-13 19:13:35.576121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.991  [2024-12-13 19:13:35.581411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.991  [2024-12-13 19:13:35.581464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.991  [2024-12-13 19:13:35.581484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.991  [2024-12-13 19:13:35.586083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.991  [2024-12-13 19:13:35.586129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.991  [2024-12-13 19:13:35.586147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.991  [2024-12-13 19:13:35.590504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.991  [2024-12-13 19:13:35.590549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.991  [2024-12-13 19:13:35.590568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:03.991  [2024-12-13 19:13:35.595072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.991  [2024-12-13 19:13:35.595130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.991  [2024-12-13 19:13:35.595150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.991  [2024-12-13 19:13:35.600067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.991  [2024-12-13 19:13:35.600112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.991  [2024-12-13 19:13:35.600130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.991  [2024-12-13 19:13:35.604873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.991  [2024-12-13 19:13:35.604918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.991  [2024-12-13 19:13:35.604938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.991  [2024-12-13 19:13:35.608839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.991  [2024-12-13 19:13:35.608882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.991  [2024-12-13 19:13:35.608900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:03.991  [2024-12-13 19:13:35.612788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.991  [2024-12-13 19:13:35.612829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.991  [2024-12-13 19:13:35.612849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.991  [2024-12-13 19:13:35.617769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.991  [2024-12-13 19:13:35.617814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.991  [2024-12-13 19:13:35.617848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.991  [2024-12-13 19:13:35.622173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.991  [2024-12-13 19:13:35.622237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.991  [2024-12-13 19:13:35.622257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.991  [2024-12-13 19:13:35.627151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.991  [2024-12-13 19:13:35.627203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.991  [2024-12-13 19:13:35.627243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:03.991  [2024-12-13 19:13:35.631820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.991  [2024-12-13 19:13:35.631866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.991  [2024-12-13 19:13:35.631885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.991  [2024-12-13 19:13:35.636027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.991  [2024-12-13 19:13:35.636072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.991  [2024-12-13 19:13:35.636091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.991  [2024-12-13 19:13:35.640580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.991  [2024-12-13 19:13:35.640642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.991  [2024-12-13 19:13:35.640663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.992  [2024-12-13 19:13:35.645193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.992  [2024-12-13 19:13:35.645256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.992  [2024-12-13 19:13:35.645275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:03.992  [2024-12-13 19:13:35.648463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.992  [2024-12-13 19:13:35.648507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.992  [2024-12-13 19:13:35.648527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.992  [2024-12-13 19:13:35.653736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.992  [2024-12-13 19:13:35.653783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.992  [2024-12-13 19:13:35.653802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.992  [2024-12-13 19:13:35.658028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.992  [2024-12-13 19:13:35.658120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.992  [2024-12-13 19:13:35.658139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.992  [2024-12-13 19:13:35.663277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.992  [2024-12-13 19:13:35.663323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.992  [2024-12-13 19:13:35.663343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:03.992  [2024-12-13 19:13:35.668433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.992  [2024-12-13 19:13:35.668478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.992  [2024-12-13 19:13:35.668497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.992  [2024-12-13 19:13:35.673547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.992  [2024-12-13 19:13:35.673591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.992  [2024-12-13 19:13:35.673610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.992  [2024-12-13 19:13:35.677872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.992  [2024-12-13 19:13:35.677917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.992  [2024-12-13 19:13:35.677937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.992  [2024-12-13 19:13:35.682301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.992  [2024-12-13 19:13:35.682343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.992  [2024-12-13 19:13:35.682362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:03.992  [2024-12-13 19:13:35.686302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.992  [2024-12-13 19:13:35.686348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.992  [2024-12-13 19:13:35.686367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.992  [2024-12-13 19:13:35.690575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.992  [2024-12-13 19:13:35.690621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.992  [2024-12-13 19:13:35.690639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.992  [2024-12-13 19:13:35.695128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.992  [2024-12-13 19:13:35.695171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.992  [2024-12-13 19:13:35.695190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.992  [2024-12-13 19:13:35.699697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.992  [2024-12-13 19:13:35.699765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.992  [2024-12-13 19:13:35.699785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:03.992  [2024-12-13 19:13:35.704422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.992  [2024-12-13 19:13:35.704482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.992  [2024-12-13 19:13:35.704502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.992  [2024-12-13 19:13:35.708635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.992  [2024-12-13 19:13:35.708679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.992  [2024-12-13 19:13:35.708698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.992  [2024-12-13 19:13:35.713466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.992  [2024-12-13 19:13:35.713511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.992  [2024-12-13 19:13:35.713530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.992  [2024-12-13 19:13:35.717529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.992  [2024-12-13 19:13:35.717572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.992  [2024-12-13 19:13:35.717590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:03.992  [2024-12-13 19:13:35.721894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.992  [2024-12-13 19:13:35.721950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.992  [2024-12-13 19:13:35.721984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.992  [2024-12-13 19:13:35.726306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.992  [2024-12-13 19:13:35.726354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.992  [2024-12-13 19:13:35.726373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.992  [2024-12-13 19:13:35.731099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.992  [2024-12-13 19:13:35.731145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.992  [2024-12-13 19:13:35.731165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.992  [2024-12-13 19:13:35.735862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.992  [2024-12-13 19:13:35.735905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.992  [2024-12-13 19:13:35.735923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:03.992  [2024-12-13 19:13:35.739596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.992  [2024-12-13 19:13:35.739643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.992  [2024-12-13 19:13:35.739662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.992  [2024-12-13 19:13:35.744413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.992  [2024-12-13 19:13:35.744458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.992  [2024-12-13 19:13:35.744478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.992  [2024-12-13 19:13:35.749035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.992  [2024-12-13 19:13:35.749081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.992  [2024-12-13 19:13:35.749101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.992  [2024-12-13 19:13:35.752494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.992  [2024-12-13 19:13:35.752537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.992  [2024-12-13 19:13:35.752556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:03.992  [2024-12-13 19:13:35.757123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.992  [2024-12-13 19:13:35.757166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.992  [2024-12-13 19:13:35.757184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.992  [2024-12-13 19:13:35.762246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.992  [2024-12-13 19:13:35.762306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.992  [2024-12-13 19:13:35.762337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.992  [2024-12-13 19:13:35.766969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.992  [2024-12-13 19:13:35.767016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.993  [2024-12-13 19:13:35.767035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.993  [2024-12-13 19:13:35.771837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.993  [2024-12-13 19:13:35.771885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.993  [2024-12-13 19:13:35.771904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:03.993  [2024-12-13 19:13:35.776623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.993  [2024-12-13 19:13:35.776684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.993  [2024-12-13 19:13:35.776710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.993  [2024-12-13 19:13:35.781244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.993  [2024-12-13 19:13:35.781288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.993  [2024-12-13 19:13:35.781324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.993  [2024-12-13 19:13:35.785351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.993  [2024-12-13 19:13:35.785393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.993  [2024-12-13 19:13:35.785411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.993  [2024-12-13 19:13:35.790405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.993  [2024-12-13 19:13:35.790467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.993  [2024-12-13 19:13:35.790487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:03.993  [2024-12-13 19:13:35.793963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.993  [2024-12-13 19:13:35.794009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.993  [2024-12-13 19:13:35.794033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.993  [2024-12-13 19:13:35.799784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.993  [2024-12-13 19:13:35.799831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.993  [2024-12-13 19:13:35.799850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.993  [2024-12-13 19:13:35.804301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.993  [2024-12-13 19:13:35.804348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.993  [2024-12-13 19:13:35.804368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.993  [2024-12-13 19:13:35.808665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:03.993  [2024-12-13 19:13:35.808711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.993  [2024-12-13 19:13:35.808730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:04.253  [2024-12-13 19:13:35.812685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.253  [2024-12-13 19:13:35.812732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.253  [2024-12-13 19:13:35.812751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:04.253  [2024-12-13 19:13:35.817583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.253  [2024-12-13 19:13:35.817629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.253  [2024-12-13 19:13:35.817647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:04.253  [2024-12-13 19:13:35.821903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.253  [2024-12-13 19:13:35.821966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.253  [2024-12-13 19:13:35.821986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:04.253  [2024-12-13 19:13:35.826800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.253  [2024-12-13 19:13:35.826871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.253  [2024-12-13 19:13:35.826892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:04.253  [2024-12-13 19:13:35.831644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.253  [2024-12-13 19:13:35.831705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.253  [2024-12-13 19:13:35.831724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:04.253  [2024-12-13 19:13:35.835948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.253  [2024-12-13 19:13:35.835994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.253  [2024-12-13 19:13:35.836012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:04.253  [2024-12-13 19:13:35.840100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.253  [2024-12-13 19:13:35.840157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.253  [2024-12-13 19:13:35.840177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:04.253  [2024-12-13 19:13:35.845184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.253  [2024-12-13 19:13:35.845261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.253  [2024-12-13 19:13:35.845282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:04.253  [2024-12-13 19:13:35.850775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.253  [2024-12-13 19:13:35.850821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.254  [2024-12-13 19:13:35.850840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:04.254  [2024-12-13 19:13:35.854247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.254  [2024-12-13 19:13:35.854301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.254  [2024-12-13 19:13:35.854319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:04.254  [2024-12-13 19:13:35.858637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.254  [2024-12-13 19:13:35.858683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.254  [2024-12-13 19:13:35.858702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:04.254  [2024-12-13 19:13:35.863518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.254  [2024-12-13 19:13:35.863566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.254  [2024-12-13 19:13:35.863584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:04.254  [2024-12-13 19:13:35.868162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.254  [2024-12-13 19:13:35.868241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.254  [2024-12-13 19:13:35.868261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:04.254  [2024-12-13 19:13:35.872810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.254  [2024-12-13 19:13:35.872860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.254  [2024-12-13 19:13:35.872878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:04.254  [2024-12-13 19:13:35.877641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.254  [2024-12-13 19:13:35.877686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.254  [2024-12-13 19:13:35.877740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:04.254       6977.00 IOPS,   872.12 MiB/s
00:29:04.254  [2024-12-13 19:13:35.883002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.254  [2024-12-13 19:13:35.883048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.254  [2024-12-13 19:13:35.883067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:04.254  [2024-12-13 19:13:35.887522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.254  [2024-12-13 19:13:35.887569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.254  [2024-12-13 19:13:35.887588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:04.254  [2024-12-13 19:13:35.892311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.254  [2024-12-13 19:13:35.892353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.254  [2024-12-13 19:13:35.892384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:04.254  [2024-12-13 19:13:35.896409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.254  [2024-12-13 19:13:35.896480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.254  [2024-12-13 19:13:35.896501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:04.254  [2024-12-13 19:13:35.901371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.254  [2024-12-13 19:13:35.901418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.254  [2024-12-13 19:13:35.901437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:04.254  [2024-12-13 19:13:35.905978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.254  [2024-12-13 19:13:35.906023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.254  [2024-12-13 19:13:35.906072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:04.254  [2024-12-13 19:13:35.909964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.254  [2024-12-13 19:13:35.910025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.254  [2024-12-13 19:13:35.910059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:04.254  [2024-12-13 19:13:35.914841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.254  [2024-12-13 19:13:35.914887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.254  [2024-12-13 19:13:35.914907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:04.254  [2024-12-13 19:13:35.920493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.254  [2024-12-13 19:13:35.920538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.254  [2024-12-13 19:13:35.920557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:04.254  [2024-12-13 19:13:35.924346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.254  [2024-12-13 19:13:35.924389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.254  [2024-12-13 19:13:35.924409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:04.254  [2024-12-13 19:13:35.929049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.254  [2024-12-13 19:13:35.929102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.254  [2024-12-13 19:13:35.929120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:04.254  [2024-12-13 19:13:35.933306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.254  [2024-12-13 19:13:35.933348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.254  [2024-12-13 19:13:35.933383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:04.254  [2024-12-13 19:13:35.937945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.254  [2024-12-13 19:13:35.937990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.254  [2024-12-13 19:13:35.938026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:04.254  [2024-12-13 19:13:35.943738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.254  [2024-12-13 19:13:35.943799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.254  [2024-12-13 19:13:35.943818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:04.254  [2024-12-13 19:13:35.947919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.254  [2024-12-13 19:13:35.947966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.254  [2024-12-13 19:13:35.948001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:04.254  [2024-12-13 19:13:35.952612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.254  [2024-12-13 19:13:35.952660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.254  [2024-12-13 19:13:35.952678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:04.254  [2024-12-13 19:13:35.957239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.254  [2024-12-13 19:13:35.957281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.254  [2024-12-13 19:13:35.957301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:04.254  [2024-12-13 19:13:35.960415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.254  [2024-12-13 19:13:35.960461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.254  [2024-12-13 19:13:35.960480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:04.254  [2024-12-13 19:13:35.964713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.254  [2024-12-13 19:13:35.964760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.254  [2024-12-13 19:13:35.964779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:04.254  [2024-12-13 19:13:35.969628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.254  [2024-12-13 19:13:35.969674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.254  [2024-12-13 19:13:35.969695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:04.254  [2024-12-13 19:13:35.973563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.254  [2024-12-13 19:13:35.973606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.254  [2024-12-13 19:13:35.973626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:04.255  [2024-12-13 19:13:35.977071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.255  [2024-12-13 19:13:35.977115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.255  [2024-12-13 19:13:35.977150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:04.255  [2024-12-13 19:13:35.980829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.255  [2024-12-13 19:13:35.980887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.255  [2024-12-13 19:13:35.980906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:04.255  [2024-12-13 19:13:35.984127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.255  [2024-12-13 19:13:35.984169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.255  [2024-12-13 19:13:35.984188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:04.255  [2024-12-13 19:13:35.988338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.255  [2024-12-13 19:13:35.988383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.255  [2024-12-13 19:13:35.988402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:04.255  [2024-12-13 19:13:35.991902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.255  [2024-12-13 19:13:35.991947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.255  [2024-12-13 19:13:35.991966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:04.255  [2024-12-13 19:13:35.995893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.255  [2024-12-13 19:13:35.995938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.255  [2024-12-13 19:13:35.995957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:04.255  [2024-12-13 19:13:36.000228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.255  [2024-12-13 19:13:36.000271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.255  [2024-12-13 19:13:36.000290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:04.255  [2024-12-13 19:13:36.004501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.255  [2024-12-13 19:13:36.004548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.255  [2024-12-13 19:13:36.004568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:04.255  [2024-12-13 19:13:36.008832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.255  [2024-12-13 19:13:36.008877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.255  [2024-12-13 19:13:36.008895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:04.255  [2024-12-13 19:13:36.012243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.255  [2024-12-13 19:13:36.012284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.255  [2024-12-13 19:13:36.012302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:04.255  [2024-12-13 19:13:36.016724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.255  [2024-12-13 19:13:36.016786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.255  [2024-12-13 19:13:36.016807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:04.255  [2024-12-13 19:13:36.021395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.255  [2024-12-13 19:13:36.021458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.255  [2024-12-13 19:13:36.021478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:04.255  [2024-12-13 19:13:36.025136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.255  [2024-12-13 19:13:36.025207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.255  [2024-12-13 19:13:36.025277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:04.255  [2024-12-13 19:13:36.030134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.255  [2024-12-13 19:13:36.030200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.255  [2024-12-13 19:13:36.030221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:04.255  [2024-12-13 19:13:36.034963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.255  [2024-12-13 19:13:36.035008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.255  [2024-12-13 19:13:36.035029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:04.255  [2024-12-13 19:13:36.040380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.255  [2024-12-13 19:13:36.040428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.255  [2024-12-13 19:13:36.040449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:04.255  [2024-12-13 19:13:36.045202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.255  [2024-12-13 19:13:36.045308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.255  [2024-12-13 19:13:36.045330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:04.255  [2024-12-13 19:13:36.049747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.255  [2024-12-13 19:13:36.049804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.255  [2024-12-13 19:13:36.049825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:04.255  [2024-12-13 19:13:36.054074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.255  [2024-12-13 19:13:36.054146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.255  [2024-12-13 19:13:36.054166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:04.255  [2024-12-13 19:13:36.058895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.255  [2024-12-13 19:13:36.058942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.255  [2024-12-13 19:13:36.058978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:04.255  [2024-12-13 19:13:36.062715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.255  [2024-12-13 19:13:36.062788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.255  [2024-12-13 19:13:36.062808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:04.255  [2024-12-13 19:13:36.067201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.255  [2024-12-13 19:13:36.067275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.255  [2024-12-13 19:13:36.067311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:04.255  [2024-12-13 19:13:36.071575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.255  [2024-12-13 19:13:36.071619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.255  [2024-12-13 19:13:36.071655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:04.516  [2024-12-13 19:13:36.076602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.516  [2024-12-13 19:13:36.076677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.516  [2024-12-13 19:13:36.076713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:04.516  [2024-12-13 19:13:36.081262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.516  [2024-12-13 19:13:36.081307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.516  [2024-12-13 19:13:36.081343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:04.516  [2024-12-13 19:13:36.084642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.516  [2024-12-13 19:13:36.084686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.516  [2024-12-13 19:13:36.084720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:04.516  [2024-12-13 19:13:36.088824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.516  [2024-12-13 19:13:36.088871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.516  [2024-12-13 19:13:36.088907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:04.516  [2024-12-13 19:13:36.092930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.516  [2024-12-13 19:13:36.092973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.516  [2024-12-13 19:13:36.093008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:04.516  [2024-12-13 19:13:36.096635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.516  [2024-12-13 19:13:36.096678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.516  [2024-12-13 19:13:36.096714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:04.516  [2024-12-13 19:13:36.101126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.516  [2024-12-13 19:13:36.101169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.516  [2024-12-13 19:13:36.101203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:04.516  [2024-12-13 19:13:36.105643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.516  [2024-12-13 19:13:36.105689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.516  [2024-12-13 19:13:36.105756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:04.516  [2024-12-13 19:13:36.108538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.516  [2024-12-13 19:13:36.108587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.516  [2024-12-13 19:13:36.108622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:04.516  [2024-12-13 19:13:36.112575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.516  [2024-12-13 19:13:36.112635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.516  [2024-12-13 19:13:36.112655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:04.516  [2024-12-13 19:13:36.117286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.516  [2024-12-13 19:13:36.117348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.516  [2024-12-13 19:13:36.117368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:04.516  [2024-12-13 19:13:36.121186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.516  [2024-12-13 19:13:36.121260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.516  [2024-12-13 19:13:36.121281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:04.516  [2024-12-13 19:13:36.124425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.516  [2024-12-13 19:13:36.124472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.516  [2024-12-13 19:13:36.124492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:04.516  [2024-12-13 19:13:36.128619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.516  [2024-12-13 19:13:36.128665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.516  [2024-12-13 19:13:36.128702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:04.516  [2024-12-13 19:13:36.132197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.516  [2024-12-13 19:13:36.132277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.516  [2024-12-13 19:13:36.132298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:04.516  [2024-12-13 19:13:36.136025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.516  [2024-12-13 19:13:36.136072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.516  [2024-12-13 19:13:36.136106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:04.516  [2024-12-13 19:13:36.140714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.516  [2024-12-13 19:13:36.140760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.516  [2024-12-13 19:13:36.140796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:04.516  [2024-12-13 19:13:36.144285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.516  [2024-12-13 19:13:36.144345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.516  [2024-12-13 19:13:36.144365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:04.516  [2024-12-13 19:13:36.147395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.516  [2024-12-13 19:13:36.147456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.516  [2024-12-13 19:13:36.147476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:04.516  [2024-12-13 19:13:36.151598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.516  [2024-12-13 19:13:36.151662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.516  [2024-12-13 19:13:36.151684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:04.516  [2024-12-13 19:13:36.156138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.516  [2024-12-13 19:13:36.156197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.516  [2024-12-13 19:13:36.156216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:04.516  [2024-12-13 19:13:36.160255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.516  [2024-12-13 19:13:36.160298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.516  [2024-12-13 19:13:36.160334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:04.516  [2024-12-13 19:13:36.163435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.516  [2024-12-13 19:13:36.163497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.517  [2024-12-13 19:13:36.163517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:04.517  [2024-12-13 19:13:36.167740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.517  [2024-12-13 19:13:36.167802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.517  [2024-12-13 19:13:36.167821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:04.517  [2024-12-13 19:13:36.172509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.517  [2024-12-13 19:13:36.172557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.517  [2024-12-13 19:13:36.172578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:04.517  [2024-12-13 19:13:36.177031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.517  [2024-12-13 19:13:36.177074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.517  [2024-12-13 19:13:36.177110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:04.517  [2024-12-13 19:13:36.181573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.517  [2024-12-13 19:13:36.181619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.517  [2024-12-13 19:13:36.181656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:04.517  [2024-12-13 19:13:36.184617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.517  [2024-12-13 19:13:36.184662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.517  [2024-12-13 19:13:36.184698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:04.517  [2024-12-13 19:13:36.188982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.517  [2024-12-13 19:13:36.189045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.517  [2024-12-13 19:13:36.189065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:04.517  [2024-12-13 19:13:36.192914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.517  [2024-12-13 19:13:36.192958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.517  [2024-12-13 19:13:36.192992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:04.517  [2024-12-13 19:13:36.197020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.517  [2024-12-13 19:13:36.197062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.517  [2024-12-13 19:13:36.197097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:04.517  [2024-12-13 19:13:36.201230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.517  [2024-12-13 19:13:36.201272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.517  [2024-12-13 19:13:36.201308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:04.517  [2024-12-13 19:13:36.204856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.517  [2024-12-13 19:13:36.204901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.517  [2024-12-13 19:13:36.204936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:04.517  [2024-12-13 19:13:36.208369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.517  [2024-12-13 19:13:36.208412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.517  [2024-12-13 19:13:36.208447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:04.517  [2024-12-13 19:13:36.211961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.517  [2024-12-13 19:13:36.212022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.517  [2024-12-13 19:13:36.212042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:04.517  [2024-12-13 19:13:36.216046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.517  [2024-12-13 19:13:36.216108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.517  [2024-12-13 19:13:36.216127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:04.517  [2024-12-13 19:13:36.220807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.517  [2024-12-13 19:13:36.220850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.517  [2024-12-13 19:13:36.220885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:04.517  [2024-12-13 19:13:36.225530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.517  [2024-12-13 19:13:36.225574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.517  [2024-12-13 19:13:36.225609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:04.517  [2024-12-13 19:13:36.228852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.517  [2024-12-13 19:13:36.228912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.517  [2024-12-13 19:13:36.228934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:04.517  [2024-12-13 19:13:36.232851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.517  [2024-12-13 19:13:36.232913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.517  [2024-12-13 19:13:36.232934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:04.517  [2024-12-13 19:13:36.236324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.517  [2024-12-13 19:13:36.236367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.517  [2024-12-13 19:13:36.236402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:04.517  [2024-12-13 19:13:36.240011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.517  [2024-12-13 19:13:36.240073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.517  [2024-12-13 19:13:36.240092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:04.517  [2024-12-13 19:13:36.244684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.517  [2024-12-13 19:13:36.244728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.517  [2024-12-13 19:13:36.244762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:04.517  [2024-12-13 19:13:36.248242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.517  [2024-12-13 19:13:36.248285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.517  [2024-12-13 19:13:36.248321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:04.517  [2024-12-13 19:13:36.252316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.517  [2024-12-13 19:13:36.252376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.517  [2024-12-13 19:13:36.252395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:04.517  [2024-12-13 19:13:36.256141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.517  [2024-12-13 19:13:36.256201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.517  [2024-12-13 19:13:36.256221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:04.517  [2024-12-13 19:13:36.260147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.517  [2024-12-13 19:13:36.260208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.517  [2024-12-13 19:13:36.260228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:04.517  [2024-12-13 19:13:36.263090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.517  [2024-12-13 19:13:36.263155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.517  [2024-12-13 19:13:36.263175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:04.517  [2024-12-13 19:13:36.267888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.517  [2024-12-13 19:13:36.267931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.517  [2024-12-13 19:13:36.267966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:04.517  [2024-12-13 19:13:36.271082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.517  [2024-12-13 19:13:36.271129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.517  [2024-12-13 19:13:36.271163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:04.517  [2024-12-13 19:13:36.275436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.518  [2024-12-13 19:13:36.275482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.518  [2024-12-13 19:13:36.275516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:04.518  [2024-12-13 19:13:36.280031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.518  [2024-12-13 19:13:36.280093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.518  [2024-12-13 19:13:36.280112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:04.518  [2024-12-13 19:13:36.284523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.518  [2024-12-13 19:13:36.284566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.518  [2024-12-13 19:13:36.284601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:04.518  [2024-12-13 19:13:36.287696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.518  [2024-12-13 19:13:36.287742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.518  [2024-12-13 19:13:36.287776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:04.518  [2024-12-13 19:13:36.291961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.518  [2024-12-13 19:13:36.292019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.518  [2024-12-13 19:13:36.292039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:04.518  [2024-12-13 19:13:36.296461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.518  [2024-12-13 19:13:36.296506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.518  [2024-12-13 19:13:36.296542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:04.518  [2024-12-13 19:13:36.301130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.518  [2024-12-13 19:13:36.301173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.518  [2024-12-13 19:13:36.301191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:04.518  [2024-12-13 19:13:36.305464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.518  [2024-12-13 19:13:36.305509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.518  [2024-12-13 19:13:36.305530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:04.518  [2024-12-13 19:13:36.308572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.518  [2024-12-13 19:13:36.308629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.518  [2024-12-13 19:13:36.308664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:04.518  [2024-12-13 19:13:36.312639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.518  [2024-12-13 19:13:36.312702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.518  [2024-12-13 19:13:36.312722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:04.518  [2024-12-13 19:13:36.317173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.518  [2024-12-13 19:13:36.317229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.518  [2024-12-13 19:13:36.317250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:04.518  [2024-12-13 19:13:36.321612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.518  [2024-12-13 19:13:36.321655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.518  [2024-12-13 19:13:36.321674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:04.518  [2024-12-13 19:13:36.325442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.518  [2024-12-13 19:13:36.325485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.518  [2024-12-13 19:13:36.325504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:04.518  [2024-12-13 19:13:36.328366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.518  [2024-12-13 19:13:36.328410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.518  [2024-12-13 19:13:36.328444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:04.518  [2024-12-13 19:13:36.332475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.518  [2024-12-13 19:13:36.332521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.518  [2024-12-13 19:13:36.332551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:04.779  [2024-12-13 19:13:36.337391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.779  [2024-12-13 19:13:36.337454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.779  [2024-12-13 19:13:36.337480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:04.779  [2024-12-13 19:13:36.341729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.779  [2024-12-13 19:13:36.341798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.779  [2024-12-13 19:13:36.341818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:04.779  [2024-12-13 19:13:36.344923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.779  [2024-12-13 19:13:36.344965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.779  [2024-12-13 19:13:36.344984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:04.779  [2024-12-13 19:13:36.349552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.779  [2024-12-13 19:13:36.349594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.779  [2024-12-13 19:13:36.349615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:04.779  [2024-12-13 19:13:36.353964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.779  [2024-12-13 19:13:36.354029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.779  [2024-12-13 19:13:36.354064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:04.779  [2024-12-13 19:13:36.357131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.779  [2024-12-13 19:13:36.357173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.779  [2024-12-13 19:13:36.357191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:04.779  [2024-12-13 19:13:36.361126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.779  [2024-12-13 19:13:36.361185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.779  [2024-12-13 19:13:36.361205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:04.779  [2024-12-13 19:13:36.364997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.779  [2024-12-13 19:13:36.365040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.779  [2024-12-13 19:13:36.365075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:04.779  [2024-12-13 19:13:36.368660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.779  [2024-12-13 19:13:36.368703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.779  [2024-12-13 19:13:36.368722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:04.779  [2024-12-13 19:13:36.372788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.779  [2024-12-13 19:13:36.372837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.779  [2024-12-13 19:13:36.372856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:04.779  [2024-12-13 19:13:36.376737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.779  [2024-12-13 19:13:36.376798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.779  [2024-12-13 19:13:36.376817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:04.779  [2024-12-13 19:13:36.381212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.779  [2024-12-13 19:13:36.381270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.779  [2024-12-13 19:13:36.381290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:04.779  [2024-12-13 19:13:36.385673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.779  [2024-12-13 19:13:36.385759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.779  [2024-12-13 19:13:36.385780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:04.779  [2024-12-13 19:13:36.389251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.779  [2024-12-13 19:13:36.389309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.779  [2024-12-13 19:13:36.389331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:04.779  [2024-12-13 19:13:36.393583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.779  [2024-12-13 19:13:36.393628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.779  [2024-12-13 19:13:36.393647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:04.779  [2024-12-13 19:13:36.398092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.779  [2024-12-13 19:13:36.398136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.779  [2024-12-13 19:13:36.398173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:04.779  [2024-12-13 19:13:36.401174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.779  [2024-12-13 19:13:36.401241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.779  [2024-12-13 19:13:36.401261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:04.779  [2024-12-13 19:13:36.405264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.779  [2024-12-13 19:13:36.405305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.779  [2024-12-13 19:13:36.405339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:04.779  [2024-12-13 19:13:36.409207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.779  [2024-12-13 19:13:36.409276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.779  [2024-12-13 19:13:36.409297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:04.779  [2024-12-13 19:13:36.413626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.779  [2024-12-13 19:13:36.413667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.779  [2024-12-13 19:13:36.413701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:04.779  [2024-12-13 19:13:36.416898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.779  [2024-12-13 19:13:36.416943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.779  [2024-12-13 19:13:36.416977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:04.779  [2024-12-13 19:13:36.420277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.779  [2024-12-13 19:13:36.420318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.779  [2024-12-13 19:13:36.420337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:04.779  [2024-12-13 19:13:36.424993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.779  [2024-12-13 19:13:36.425035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.779  [2024-12-13 19:13:36.425053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:04.779  [2024-12-13 19:13:36.428348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.779  [2024-12-13 19:13:36.428390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.779  [2024-12-13 19:13:36.428408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:04.779  [2024-12-13 19:13:36.431911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.779  [2024-12-13 19:13:36.431956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.779  [2024-12-13 19:13:36.431974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:04.779  [2024-12-13 19:13:36.435505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.779  [2024-12-13 19:13:36.435550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.779  [2024-12-13 19:13:36.435569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:04.780  [2024-12-13 19:13:36.439388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.780  [2024-12-13 19:13:36.439433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.780  [2024-12-13 19:13:36.439452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:04.780  [2024-12-13 19:13:36.442950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.780  [2024-12-13 19:13:36.442996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.780  [2024-12-13 19:13:36.443016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:04.780  [2024-12-13 19:13:36.446937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.780  [2024-12-13 19:13:36.446983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.780  [2024-12-13 19:13:36.447002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:04.780  [2024-12-13 19:13:36.451210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.780  [2024-12-13 19:13:36.451267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.780  [2024-12-13 19:13:36.451286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:04.780  [2024-12-13 19:13:36.455560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.780  [2024-12-13 19:13:36.455605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.780  [2024-12-13 19:13:36.455623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:04.780  [2024-12-13 19:13:36.458530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.780  [2024-12-13 19:13:36.458576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.780  [2024-12-13 19:13:36.458595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:04.780  [2024-12-13 19:13:36.462983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.780  [2024-12-13 19:13:36.463029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.780  [2024-12-13 19:13:36.463048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:04.780  [2024-12-13 19:13:36.467531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.780  [2024-12-13 19:13:36.467577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.780  [2024-12-13 19:13:36.467595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:04.780  [2024-12-13 19:13:36.472026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.780  [2024-12-13 19:13:36.472087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.780  [2024-12-13 19:13:36.472107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:04.780  [2024-12-13 19:13:36.475199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.780  [2024-12-13 19:13:36.475273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.780  [2024-12-13 19:13:36.475292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:04.780  [2024-12-13 19:13:36.479152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.780  [2024-12-13 19:13:36.479199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.780  [2024-12-13 19:13:36.479229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:04.780  [2024-12-13 19:13:36.483188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.780  [2024-12-13 19:13:36.483246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.780  [2024-12-13 19:13:36.483266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:04.780  [2024-12-13 19:13:36.487096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.780  [2024-12-13 19:13:36.487142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.780  [2024-12-13 19:13:36.487161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:04.780  [2024-12-13 19:13:36.491089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.780  [2024-12-13 19:13:36.491151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.780  [2024-12-13 19:13:36.491170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:04.780  [2024-12-13 19:13:36.494437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.780  [2024-12-13 19:13:36.494481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.780  [2024-12-13 19:13:36.494500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:04.780  [2024-12-13 19:13:36.498579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.780  [2024-12-13 19:13:36.498624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.780  [2024-12-13 19:13:36.498643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:04.780  [2024-12-13 19:13:36.502552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.780  [2024-12-13 19:13:36.502597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.780  [2024-12-13 19:13:36.502615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:04.780  [2024-12-13 19:13:36.506925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.780  [2024-12-13 19:13:36.506971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.780  [2024-12-13 19:13:36.506990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:04.780  [2024-12-13 19:13:36.511263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.780  [2024-12-13 19:13:36.511308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.780  [2024-12-13 19:13:36.511326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:04.780  [2024-12-13 19:13:36.514980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.780  [2024-12-13 19:13:36.515022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.780  [2024-12-13 19:13:36.515040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:04.780  [2024-12-13 19:13:36.518146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.780  [2024-12-13 19:13:36.518190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.780  [2024-12-13 19:13:36.518208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:04.780  [2024-12-13 19:13:36.522122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.780  [2024-12-13 19:13:36.522168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.780  [2024-12-13 19:13:36.522187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:04.780  [2024-12-13 19:13:36.526730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.780  [2024-12-13 19:13:36.526776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.780  [2024-12-13 19:13:36.526796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:04.780  [2024-12-13 19:13:36.529847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.780  [2024-12-13 19:13:36.529895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.780  [2024-12-13 19:13:36.529914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:04.780  [2024-12-13 19:13:36.533755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.780  [2024-12-13 19:13:36.533799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.780  [2024-12-13 19:13:36.533835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:04.780  [2024-12-13 19:13:36.538152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.780  [2024-12-13 19:13:36.538197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.780  [2024-12-13 19:13:36.538216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:04.780  [2024-12-13 19:13:36.542593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.780  [2024-12-13 19:13:36.542638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.780  [2024-12-13 19:13:36.542657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:04.780  [2024-12-13 19:13:36.546848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.780  [2024-12-13 19:13:36.546894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.781  [2024-12-13 19:13:36.546914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:04.781  [2024-12-13 19:13:36.550966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.781  [2024-12-13 19:13:36.551012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.781  [2024-12-13 19:13:36.551030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:04.781  [2024-12-13 19:13:36.554142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.781  [2024-12-13 19:13:36.554185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.781  [2024-12-13 19:13:36.554204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:04.781  [2024-12-13 19:13:36.558209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.781  [2024-12-13 19:13:36.558265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.781  [2024-12-13 19:13:36.558284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:04.781  [2024-12-13 19:13:36.562918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.781  [2024-12-13 19:13:36.562964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.781  [2024-12-13 19:13:36.562983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:04.781  [2024-12-13 19:13:36.566766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.781  [2024-12-13 19:13:36.566811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.781  [2024-12-13 19:13:36.566830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:04.781  [2024-12-13 19:13:36.570189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.781  [2024-12-13 19:13:36.570247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.781  [2024-12-13 19:13:36.570268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:04.781  [2024-12-13 19:13:36.573350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.781  [2024-12-13 19:13:36.573391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.781  [2024-12-13 19:13:36.573410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:04.781  [2024-12-13 19:13:36.577898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.781  [2024-12-13 19:13:36.577946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.781  [2024-12-13 19:13:36.577967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:04.781  [2024-12-13 19:13:36.581902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.781  [2024-12-13 19:13:36.581972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.781  [2024-12-13 19:13:36.582008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:04.781  [2024-12-13 19:13:36.586229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.781  [2024-12-13 19:13:36.586272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.781  [2024-12-13 19:13:36.586291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:04.781  [2024-12-13 19:13:36.589502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.781  [2024-12-13 19:13:36.589544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.781  [2024-12-13 19:13:36.589578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:04.781  [2024-12-13 19:13:36.593632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.781  [2024-12-13 19:13:36.593674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.781  [2024-12-13 19:13:36.593708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:04.781  [2024-12-13 19:13:36.598126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:04.781  [2024-12-13 19:13:36.598183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.781  [2024-12-13 19:13:36.598204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:05.042  [2024-12-13 19:13:36.602267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:05.042  [2024-12-13 19:13:36.602321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:05.042  [2024-12-13 19:13:36.602341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:05.042  [2024-12-13 19:13:36.605614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:05.042  [2024-12-13 19:13:36.605664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:05.042  [2024-12-13 19:13:36.605682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:05.042  [2024-12-13 19:13:36.609892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:05.042  [2024-12-13 19:13:36.609940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:05.042  [2024-12-13 19:13:36.609961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:05.042  [2024-12-13 19:13:36.614636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:05.042  [2024-12-13 19:13:36.614682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:05.042  [2024-12-13 19:13:36.614701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:05.042  [2024-12-13 19:13:36.618908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:05.042  [2024-12-13 19:13:36.618954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:05.042  [2024-12-13 19:13:36.618973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:05.042  [2024-12-13 19:13:36.622206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:05.042  [2024-12-13 19:13:36.622265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:05.042  [2024-12-13 19:13:36.622284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:05.042  [2024-12-13 19:13:36.626214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:05.042  [2024-12-13 19:13:36.626271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:05.042  [2024-12-13 19:13:36.626291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:05.042  [2024-12-13 19:13:36.630452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:05.042  [2024-12-13 19:13:36.630497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:05.042  [2024-12-13 19:13:36.630517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:05.042  [2024-12-13 19:13:36.634154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:05.042  [2024-12-13 19:13:36.634200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:05.042  [2024-12-13 19:13:36.634231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:05.042  [2024-12-13 19:13:36.638071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:05.042  [2024-12-13 19:13:36.638132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:05.042  [2024-12-13 19:13:36.638152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:05.042  [2024-12-13 19:13:36.641811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:05.042  [2024-12-13 19:13:36.641857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:05.042  [2024-12-13 19:13:36.641893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:05.042  [2024-12-13 19:13:36.646081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:05.042  [2024-12-13 19:13:36.646128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:05.042  [2024-12-13 19:13:36.646146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:05.042  [2024-12-13 19:13:36.650744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:05.042  [2024-12-13 19:13:36.650791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:05.042  [2024-12-13 19:13:36.650810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:05.042  [2024-12-13 19:13:36.654334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:05.042  [2024-12-13 19:13:36.654372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:05.042  [2024-12-13 19:13:36.654389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:05.042  [2024-12-13 19:13:36.659153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:05.042  [2024-12-13 19:13:36.659204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:05.042  [2024-12-13 19:13:36.659237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:05.042  [2024-12-13 19:13:36.663882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:05.042  [2024-12-13 19:13:36.663928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:05.042  [2024-12-13 19:13:36.663947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:05.042  [2024-12-13 19:13:36.667676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:05.042  [2024-12-13 19:13:36.667738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:05.042  [2024-12-13 19:13:36.667761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:05.042  [2024-12-13 19:13:36.672027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:05.042  [2024-12-13 19:13:36.672073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:05.042  [2024-12-13 19:13:36.672107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:05.043  [2024-12-13 19:13:36.676673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:05.043  [2024-12-13 19:13:36.676719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:05.043  [2024-12-13 19:13:36.676737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:05.043  [2024-12-13 19:13:36.680373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:05.043  [2024-12-13 19:13:36.680417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:05.043  [2024-12-13 19:13:36.680435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:05.043  [2024-12-13 19:13:36.684702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:05.043  [2024-12-13 19:13:36.684749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:05.043  [2024-12-13 19:13:36.684768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:05.043  [2024-12-13 19:13:36.688559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:05.043  [2024-12-13 19:13:36.688605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:05.043  [2024-12-13 19:13:36.688623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:05.043  [2024-12-13 19:13:36.692403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:05.043  [2024-12-13 19:13:36.692448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:05.043  [2024-12-13 19:13:36.692466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:05.043  [2024-12-13 19:13:36.696261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:05.043  [2024-12-13 19:13:36.696306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:05.043  [2024-12-13 19:13:36.696325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:05.043  [2024-12-13 19:13:36.700026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:05.043  [2024-12-13 19:13:36.700071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:05.043  [2024-12-13 19:13:36.700089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:05.043  [2024-12-13 19:13:36.703552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:05.043  [2024-12-13 19:13:36.703597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:05.043  [2024-12-13 19:13:36.703617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:05.043  [2024-12-13 19:13:36.707774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:05.043  [2024-12-13 19:13:36.707820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:05.043  [2024-12-13 19:13:36.707839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:05.043  [2024-12-13 19:13:36.711280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:05.043  [2024-12-13 19:13:36.711325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:05.043  [2024-12-13 19:13:36.711344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:05.043  [2024-12-13 19:13:36.715363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:05.043  [2024-12-13 19:13:36.715408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:05.043  [2024-12-13 19:13:36.715427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:05.043  [2024-12-13 19:13:36.718820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:05.043  [2024-12-13 19:13:36.718879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:05.043  [2024-12-13 19:13:36.718899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:05.043  [2024-12-13 19:13:36.723114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:05.043  [2024-12-13 19:13:36.723160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:05.043  [2024-12-13 19:13:36.723179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:05.043  [2024-12-13 19:13:36.727759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:05.043  [2024-12-13 19:13:36.727805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:05.043  [2024-12-13 19:13:36.727826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:05.043  [2024-12-13 19:13:36.731005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:05.043  [2024-12-13 19:13:36.731052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:05.043  [2024-12-13 19:13:36.731071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:05.043  [2024-12-13 19:13:36.735377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:05.043  [2024-12-13 19:13:36.735421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:05.043  [2024-12-13 19:13:36.735439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:05.043  [2024-12-13 19:13:36.739997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:05.043  [2024-12-13 19:13:36.740043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:05.043  [2024-12-13 19:13:36.740062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:05.043  [2024-12-13 19:13:36.743375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:05.043  [2024-12-13 19:13:36.743420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:05.043  [2024-12-13 19:13:36.743454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:05.043  [2024-12-13 19:13:36.749353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:05.043  [2024-12-13 19:13:36.749414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:05.043  [2024-12-13 19:13:36.749433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:05.043  [2024-12-13 19:13:36.754327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:05.043  [2024-12-13 19:13:36.754373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:05.043  [2024-12-13 19:13:36.754407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:05.043  [2024-12-13 19:13:36.758505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:05.043  [2024-12-13 19:13:36.758567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:05.043  [2024-12-13 19:13:36.758588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:05.043  [2024-12-13 19:13:36.762984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:05.043  [2024-12-13 19:13:36.763031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:05.043  [2024-12-13 19:13:36.763065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:05.043  [2024-12-13 19:13:36.766712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:05.043  [2024-12-13 19:13:36.766761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:05.043  [2024-12-13 19:13:36.766796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:05.043  [2024-12-13 19:13:36.770517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:05.043  [2024-12-13 19:13:36.770564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:05.043  [2024-12-13 19:13:36.770600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:05.043  [2024-12-13 19:13:36.774422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:05.043  [2024-12-13 19:13:36.774470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:05.043  [2024-12-13 19:13:36.774504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:05.043  [2024-12-13 19:13:36.778492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:05.043  [2024-12-13 19:13:36.778539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:05.043  [2024-12-13 19:13:36.778573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:05.043  [2024-12-13 19:13:36.782873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:05.043  [2024-12-13 19:13:36.782936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:05.043  [2024-12-13 19:13:36.782956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:05.043  [2024-12-13 19:13:36.786596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:05.043  [2024-12-13 19:13:36.786659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:05.043  [2024-12-13 19:13:36.786678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:05.044  [2024-12-13 19:13:36.790613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:05.044  [2024-12-13 19:13:36.790677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:05.044  [2024-12-13 19:13:36.790696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:05.044  [2024-12-13 19:13:36.794378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:05.044  [2024-12-13 19:13:36.794425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:05.044  [2024-12-13 19:13:36.794460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:05.044  [2024-12-13 19:13:36.799215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:05.044  [2024-12-13 19:13:36.799285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:05.044  [2024-12-13 19:13:36.799305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:05.044  [2024-12-13 19:13:36.802940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:05.044  [2024-12-13 19:13:36.802988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:05.044  [2024-12-13 19:13:36.803023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:05.044  [2024-12-13 19:13:36.806946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:05.044  [2024-12-13 19:13:36.806994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:05.044  [2024-12-13 19:13:36.807028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:05.044  [2024-12-13 19:13:36.810830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:05.044  [2024-12-13 19:13:36.810892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:05.044  [2024-12-13 19:13:36.810912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:05.044  [2024-12-13 19:13:36.815101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:05.044  [2024-12-13 19:13:36.815163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:05.044  [2024-12-13 19:13:36.815182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:05.044  [2024-12-13 19:13:36.819329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:05.044  [2024-12-13 19:13:36.819375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:05.044  [2024-12-13 19:13:36.819410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:05.044  [2024-12-13 19:13:36.823123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:05.044  [2024-12-13 19:13:36.823185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:05.044  [2024-12-13 19:13:36.823204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:05.044  [2024-12-13 19:13:36.827039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:05.044  [2024-12-13 19:13:36.827102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:05.044  [2024-12-13 19:13:36.827121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:05.044  [2024-12-13 19:13:36.831173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:05.044  [2024-12-13 19:13:36.831247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:05.044  [2024-12-13 19:13:36.831268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:05.044  [2024-12-13 19:13:36.835043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:05.044  [2024-12-13 19:13:36.835091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:05.044  [2024-12-13 19:13:36.835126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:05.044  [2024-12-13 19:13:36.838989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:05.044  [2024-12-13 19:13:36.839052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:05.044  [2024-12-13 19:13:36.839072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:05.044  [2024-12-13 19:13:36.843642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:05.044  [2024-12-13 19:13:36.843705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:05.044  [2024-12-13 19:13:36.843725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:05.044  [2024-12-13 19:13:36.847311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:05.044  [2024-12-13 19:13:36.847373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:05.044  [2024-12-13 19:13:36.847393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:05.044  [2024-12-13 19:13:36.851425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:05.044  [2024-12-13 19:13:36.851487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:05.044  [2024-12-13 19:13:36.851506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:05.044  [2024-12-13 19:13:36.855171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:05.044  [2024-12-13 19:13:36.855243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:05.044  [2024-12-13 19:13:36.855263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:05.044  [2024-12-13 19:13:36.859046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:05.044  [2024-12-13 19:13:36.859106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:05.044  [2024-12-13 19:13:36.859126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:05.044  [2024-12-13 19:13:36.862844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:05.044  [2024-12-13 19:13:36.862893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:05.303  [2024-12-13 19:13:36.862928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:05.303  [2024-12-13 19:13:36.867522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:05.303  [2024-12-13 19:13:36.867584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:05.303  [2024-12-13 19:13:36.867604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:05.303  [2024-12-13 19:13:36.871472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:05.303  [2024-12-13 19:13:36.871521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:05.303  [2024-12-13 19:13:36.871556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:05.303  [2024-12-13 19:13:36.875204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:05.303  [2024-12-13 19:13:36.875272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:05.303  [2024-12-13 19:13:36.875291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:05.303       7279.50 IOPS,   909.94 MiB/s
[2024-12-13T19:13:37.127Z] [2024-12-13 19:13:36.881030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7b90)
00:29:05.303  [2024-12-13 19:13:36.881073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:05.303  [2024-12-13 19:13:36.881109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:05.303  
00:29:05.303                                                                                                  Latency(us)
00:29:05.303  
[2024-12-13T19:13:37.127Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:29:05.303  Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:29:05.303  	 nvme0n1             :       2.00    7278.31     909.79       0.00     0.00    2194.26     536.20    8579.26
00:29:05.303  
[2024-12-13T19:13:37.127Z]  ===================================================================================================================
00:29:05.303  
[2024-12-13T19:13:37.127Z]  Total                       :               7278.31     909.79       0.00     0.00    2194.26     536.20    8579.26
00:29:05.303  {
00:29:05.303    "results": [
00:29:05.303      {
00:29:05.303        "job": "nvme0n1",
00:29:05.303        "core_mask": "0x2",
00:29:05.303        "workload": "randread",
00:29:05.303        "status": "finished",
00:29:05.303        "queue_depth": 16,
00:29:05.303        "io_size": 131072,
00:29:05.303        "runtime": 2.00335,
00:29:05.303        "iops": 7278.308832705219,
00:29:05.303        "mibps": 909.7886040881524,
00:29:05.303        "io_failed": 0,
00:29:05.303        "io_timeout": 0,
00:29:05.303        "avg_latency_us": 2194.2619643246812,
00:29:05.303        "min_latency_us": 536.2036363636364,
00:29:05.303        "max_latency_us": 8579.258181818182
00:29:05.303      }
00:29:05.303    ],
00:29:05.303    "core_count": 1
00:29:05.303  }
00:29:05.303    19:13:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:05.303    19:13:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:05.303    19:13:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:05.303  			| .driver_specific
00:29:05.303  			| .nvme_error
00:29:05.303  			| .status_code
00:29:05.303  			| .command_transient_transport_error'
00:29:05.303    19:13:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:05.562   19:13:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 471 > 0 ))
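The trace lines above (host/digest.sh@71, @27, @28 and @18) are where the test turns the injected digest errors into a pass/fail decision: it reads bdev I/O statistics over the bperf RPC socket and checks that the transient-transport-error counter is non-zero (471 in this run). A condensed sketch of that query, reusing the exact rpc.py invocation and jq path shown in the trace; the errcount variable name is illustrative only:

    # Fetch per-bdev statistics from the bdevperf instance, then extract the counter
    # that --nvme-error-stat maintains for transient transport errors.
    errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    # The assertion at host/digest.sh@71 is effectively:
    (( errcount > 0 ))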
00:29:05.562   19:13:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 116354
00:29:05.562   19:13:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 116354 ']'
00:29:05.562   19:13:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 116354
00:29:05.562    19:13:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:29:05.562   19:13:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:05.562    19:13:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 116354
00:29:05.562   19:13:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:29:05.562   19:13:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:29:05.562  killing process with pid 116354
00:29:05.562   19:13:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 116354'
00:29:05.562   19:13:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 116354
00:29:05.562  Received shutdown signal, test time was about 2.000000 seconds
00:29:05.562  
00:29:05.562                                                                                                  Latency(us)
00:29:05.562  
[2024-12-13T19:13:37.386Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:29:05.562  
[2024-12-13T19:13:37.386Z]  ===================================================================================================================
00:29:05.562  
[2024-12-13T19:13:37.386Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:29:05.562   19:13:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 116354
00:29:05.562   19:13:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:29:05.562   19:13:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:05.562   19:13:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:29:05.562   19:13:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:29:05.562   19:13:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:29:05.562   19:13:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=116444
00:29:05.562   19:13:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 116444 /var/tmp/bperf.sock
00:29:05.562   19:13:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:29:05.562   19:13:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 116444 ']'
00:29:05.562   19:13:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:05.562   19:13:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:05.562  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:05.562   19:13:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:05.562   19:13:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:05.562   19:13:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:05.821  [2024-12-13 19:13:37.428729] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:29:05.821  [2024-12-13 19:13:37.428849] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116444 ]
00:29:05.821  [2024-12-13 19:13:37.572897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:05.821  [2024-12-13 19:13:37.609055] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
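The bdevperf command traced at host/digest.sh@57 above is the I/O generator for the next leg of the test. Spelled out with what each flag requests (a sketch based on the standard bdevperf options; paths and values are the ones from this run):

    # -m 2     : core mask 0x2, i.e. run the reactor on core 1 (matches the notice above)
    # -r ...   : listen for RPCs on the private bperf socket
    # -w/-o/-q : random 4 KiB writes at queue depth 128
    # -t 2     : run the workload for 2 seconds
    # -z       : start idle and wait for a perform_tests RPC before issuing I/O
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z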
00:29:06.756   19:13:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:06.756   19:13:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:29:06.756   19:13:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:06.757   19:13:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:07.015   19:13:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:07.015   19:13:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:07.015   19:13:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:07.015   19:13:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:07.015   19:13:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:07.015   19:13:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:07.273  nvme0n1
00:29:07.273   19:13:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:29:07.273   19:13:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:07.273   19:13:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:07.273   19:13:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:07.273   19:13:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:07.273   19:13:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
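Taken together, the trace blocks above (host/digest.sh@61 through @69) are the setup for the randwrite error run: enable NVMe error statistics, attach the controller with data digest enabled, arm crc32c corruption in the accel layer, and start the queued workload. A condensed sketch of that sequence in trace order; note that accel_error_inject_error is issued through rpc_cmd, whose socket is not shown in the trace, so sending it with a bare $rpc call below is an assumption about which application it reaches:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # bperf side: keep NVMe error statistics and retry transport errors indefinitely
    $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    $rpc accel_error_inject_error -o crc32c -t disable          # injection off while attaching
    # bperf side: attach the TCP controller with data digest (--ddgst); it appears as nvme0n1
    $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 256   # arm crc32c corruption, as traced
    # start the queued randwrite workload
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests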
00:29:07.573  Running I/O for 2 seconds...
00:29:07.573  [2024-12-13 19:13:39.201070] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ef35f0
00:29:07.573  [2024-12-13 19:13:39.202220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:10718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:07.573  [2024-12-13 19:13:39.202336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:29:07.573  [2024-12-13 19:13:39.213247] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ee1f80
00:29:07.573  [2024-12-13 19:13:39.214938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:6661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:07.573  [2024-12-13 19:13:39.214983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:29:07.573  [2024-12-13 19:13:39.220521] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ee3060
00:29:07.573  [2024-12-13 19:13:39.221406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:10814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:07.573  [2024-12-13 19:13:39.221445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:29:07.573  [2024-12-13 19:13:39.232178] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ef46d0
00:29:07.573  [2024-12-13 19:13:39.233617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:22834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:07.573  [2024-12-13 19:13:39.233660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:29:07.573  [2024-12-13 19:13:39.241250] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ef9b30
00:29:07.573  [2024-12-13 19:13:39.242503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:12690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:07.573  [2024-12-13 19:13:39.242550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:29:07.573  [2024-12-13 19:13:39.250729] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ef1430
00:29:07.573  [2024-12-13 19:13:39.251892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:24371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:07.573  [2024-12-13 19:13:39.251932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:29:07.573  [2024-12-13 19:13:39.262366] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016edfdc0
00:29:07.573  [2024-12-13 19:13:39.264081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:14052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:07.573  [2024-12-13 19:13:39.264121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:29:07.573  [2024-12-13 19:13:39.269611] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ee5220
00:29:07.573  [2024-12-13 19:13:39.270581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:15842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:07.573  [2024-12-13 19:13:39.270621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:29:07.573  [2024-12-13 19:13:39.281324] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ef6890
00:29:07.573  [2024-12-13 19:13:39.282822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:23755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:07.573  [2024-12-13 19:13:39.282865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:29:07.573  [2024-12-13 19:13:39.290487] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ef7970
00:29:07.573  [2024-12-13 19:13:39.291805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:07.573  [2024-12-13 19:13:39.291845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:29:07.574  [2024-12-13 19:13:39.300978] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ee6300
00:29:07.574  [2024-12-13 19:13:39.302513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:07.574  [2024-12-13 19:13:39.302557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:29:07.574  [2024-12-13 19:13:39.310076] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016eeea00
00:29:07.574  [2024-12-13 19:13:39.311419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:4935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:07.574  [2024-12-13 19:13:39.311459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:29:07.574  [2024-12-13 19:13:39.319645] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016edece0
00:29:07.574  [2024-12-13 19:13:39.320896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:4767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:07.574  [2024-12-13 19:13:39.320936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:29:07.574  [2024-12-13 19:13:39.329215] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ee5220
00:29:07.574  [2024-12-13 19:13:39.330411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:20563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:07.574  [2024-12-13 19:13:39.330456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:29:07.574  [2024-12-13 19:13:39.338532] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ee01f8
00:29:07.574  [2024-12-13 19:13:39.339638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:11833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:07.574  [2024-12-13 19:13:39.339679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:29:07.574  [2024-12-13 19:13:39.348017] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016eddc00
00:29:07.574  [2024-12-13 19:13:39.349085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:7965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:07.574  [2024-12-13 19:13:39.349126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:29:07.574  [2024-12-13 19:13:39.358311] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ee3060
00:29:07.574  [2024-12-13 19:13:39.359409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:24859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:07.574  [2024-12-13 19:13:39.359450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:29:07.837  [2024-12-13 19:13:39.367772] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ef9b30
00:29:07.837  [2024-12-13 19:13:39.368777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:17729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:07.837  [2024-12-13 19:13:39.368816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:29:07.837  [2024-12-13 19:13:39.379465] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ee6300
00:29:07.837  [2024-12-13 19:13:39.380981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:14970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:07.837  [2024-12-13 19:13:39.381020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:29:07.837  [2024-12-13 19:13:39.386537] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016eeb760
00:29:07.837  [2024-12-13 19:13:39.387318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:18578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:07.837  [2024-12-13 19:13:39.387389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:29:07.837  [2024-12-13 19:13:39.398095] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016eef270
00:29:07.837  [2024-12-13 19:13:39.399435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:25399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:07.837  [2024-12-13 19:13:39.399476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:29:07.837  [2024-12-13 19:13:39.407267] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ef35f0
00:29:07.837  [2024-12-13 19:13:39.408410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:13262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:07.837  [2024-12-13 19:13:39.408450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:29:07.837  [2024-12-13 19:13:39.416764] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016efbcf0
00:29:07.837  [2024-12-13 19:13:39.417861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:6163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:07.837  [2024-12-13 19:13:39.417909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:07.837  [2024-12-13 19:13:39.428471] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ee84c0
00:29:07.837  [2024-12-13 19:13:39.430147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:15941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:07.837  [2024-12-13 19:13:39.430192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:29:07.837  [2024-12-13 19:13:39.435867] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ee95a0
00:29:07.837  [2024-12-13 19:13:39.436718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:2780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:07.837  [2024-12-13 19:13:39.436758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:29:07.837  [2024-12-13 19:13:39.447567] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ee4140
00:29:07.837  [2024-12-13 19:13:39.448928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:18064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:07.837  [2024-12-13 19:13:39.448968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:29:07.837  [2024-12-13 19:13:39.456806] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ef57b0
00:29:07.837  [2024-12-13 19:13:39.458110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:25251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:07.837  [2024-12-13 19:13:39.458154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:29:07.837  [2024-12-13 19:13:39.466902] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ee5220
00:29:07.837  [2024-12-13 19:13:39.468055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:6210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:07.837  [2024-12-13 19:13:39.468096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:29:07.837  [2024-12-13 19:13:39.478712] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016eea680
00:29:07.837  [2024-12-13 19:13:39.480366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:13029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:07.837  [2024-12-13 19:13:39.480406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:29:07.837  [2024-12-13 19:13:39.485849] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ee73e0
00:29:07.837  [2024-12-13 19:13:39.486800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:21646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:07.837  [2024-12-13 19:13:39.486838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:29:07.837  [2024-12-13 19:13:39.495382] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ef6020
00:29:07.837  [2024-12-13 19:13:39.496180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:07.837  [2024-12-13 19:13:39.496244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:29:07.837  [2024-12-13 19:13:39.506740] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016edfdc0
00:29:07.837  [2024-12-13 19:13:39.507896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:13829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:07.837  [2024-12-13 19:13:39.507936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:29:07.837  [2024-12-13 19:13:39.516730] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016efb480
00:29:07.837  [2024-12-13 19:13:39.518191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:07.837  [2024-12-13 19:13:39.518260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:29:07.837  [2024-12-13 19:13:39.526163] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016efd208
00:29:07.837  [2024-12-13 19:13:39.527475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:2876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:07.837  [2024-12-13 19:13:39.527518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:29:07.837  [2024-12-13 19:13:39.535724] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ede470
00:29:07.837  [2024-12-13 19:13:39.536906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:16100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:07.837  [2024-12-13 19:13:39.536946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:29:07.837  [2024-12-13 19:13:39.547268] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ee7818
00:29:07.837  [2024-12-13 19:13:39.548955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:10367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:07.837  [2024-12-13 19:13:39.548995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:07.837  [2024-12-13 19:13:39.554308] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ef0bc0
00:29:07.837  [2024-12-13 19:13:39.555252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:17461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:07.837  [2024-12-13 19:13:39.555308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:29:07.837  [2024-12-13 19:13:39.565936] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ef92c0
00:29:07.837  [2024-12-13 19:13:39.567420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:9345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:07.837  [2024-12-13 19:13:39.567459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:29:07.837  [2024-12-13 19:13:39.575003] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016eeb760
00:29:07.837  [2024-12-13 19:13:39.576417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:17286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:07.837  [2024-12-13 19:13:39.576458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:29:07.837  [2024-12-13 19:13:39.584475] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016eed4e8
00:29:07.837  [2024-12-13 19:13:39.585739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:11 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:07.837  [2024-12-13 19:13:39.585779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:29:07.837  [2024-12-13 19:13:39.593504] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ee7818
00:29:07.837  [2024-12-13 19:13:39.594619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:24032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:07.838  [2024-12-13 19:13:39.594664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:29:07.838  [2024-12-13 19:13:39.602884] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ee9168
00:29:07.838  [2024-12-13 19:13:39.603906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:07.838  [2024-12-13 19:13:39.603945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:29:07.838  [2024-12-13 19:13:39.614458] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ef7100
00:29:07.838  [2024-12-13 19:13:39.616044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:3847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:07.838  [2024-12-13 19:13:39.616085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:29:07.838  [2024-12-13 19:13:39.621623] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ee9e10
00:29:07.838  [2024-12-13 19:13:39.622420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:21111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:07.838  [2024-12-13 19:13:39.622462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:29:07.838  [2024-12-13 19:13:39.633394] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016eff3c8
00:29:07.838  [2024-12-13 19:13:39.634424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:07.838  [2024-12-13 19:13:39.634467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:29:07.838  [2024-12-13 19:13:39.643032] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ef4f40
00:29:07.838  [2024-12-13 19:13:39.644341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:21649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:07.838  [2024-12-13 19:13:39.644381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:29:07.838  [2024-12-13 19:13:39.652187] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ee12d8
00:29:07.838  [2024-12-13 19:13:39.653337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:07.838  [2024-12-13 19:13:39.653376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:29:08.110  [2024-12-13 19:13:39.661583] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016eea680
00:29:08.110  [2024-12-13 19:13:39.662665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:20738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.110  [2024-12-13 19:13:39.662705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:08.110  [2024-12-13 19:13:39.673258] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ee2c28
00:29:08.110  [2024-12-13 19:13:39.674851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:11834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.110  [2024-12-13 19:13:39.674896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:29:08.110  [2024-12-13 19:13:39.680333] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ee3d08
00:29:08.110  [2024-12-13 19:13:39.681168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.110  [2024-12-13 19:13:39.681208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:29:08.110  [2024-12-13 19:13:39.691994] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016efd208
00:29:08.110  [2024-12-13 19:13:39.693382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:3547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.110  [2024-12-13 19:13:39.693423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:29:08.110  [2024-12-13 19:13:39.701035] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016eeea00
00:29:08.110  [2024-12-13 19:13:39.702303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:24151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.110  [2024-12-13 19:13:39.702347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:29:08.110  [2024-12-13 19:13:39.710500] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ef3e60
00:29:08.110  [2024-12-13 19:13:39.711652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:10477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.110  [2024-12-13 19:13:39.711692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:29:08.110  [2024-12-13 19:13:39.722263] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ef2510
00:29:08.110  [2024-12-13 19:13:39.723894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:13483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.110  [2024-12-13 19:13:39.723934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:29:08.110  [2024-12-13 19:13:39.729359] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ee1b48
00:29:08.110  [2024-12-13 19:13:39.730292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:19360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.110  [2024-12-13 19:13:39.730336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:29:08.110  [2024-12-13 19:13:39.741027] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016eeb760
00:29:08.110  [2024-12-13 19:13:39.742549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:12437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.110  [2024-12-13 19:13:39.742592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:29:08.110  [2024-12-13 19:13:39.750140] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ee01f8
00:29:08.110  [2024-12-13 19:13:39.751422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:23093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.110  [2024-12-13 19:13:39.751465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:29:08.110  [2024-12-13 19:13:39.759481] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ef6020
00:29:08.110  [2024-12-13 19:13:39.760634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:9887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.110  [2024-12-13 19:13:39.760673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:29:08.110  [2024-12-13 19:13:39.771098] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ef0350
00:29:08.110  [2024-12-13 19:13:39.772824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:12586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.110  [2024-12-13 19:13:39.772862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:29:08.110  [2024-12-13 19:13:39.778186] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016eebfd0
00:29:08.110  [2024-12-13 19:13:39.779119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:22345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.110  [2024-12-13 19:13:39.779160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:29:08.110  [2024-12-13 19:13:39.789738] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ee7c50
00:29:08.110  [2024-12-13 19:13:39.791172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:22637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.110  [2024-12-13 19:13:39.791212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:29:08.110  [2024-12-13 19:13:39.796753] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ede8a8
00:29:08.110  [2024-12-13 19:13:39.797440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:16315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.110  [2024-12-13 19:13:39.797483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:29:08.110  [2024-12-13 19:13:39.808196] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ef81e0
00:29:08.110  [2024-12-13 19:13:39.809445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:10634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.110  [2024-12-13 19:13:39.809486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:29:08.110  [2024-12-13 19:13:39.817146] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ef1430
00:29:08.110  [2024-12-13 19:13:39.818246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:4466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.110  [2024-12-13 19:13:39.818309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:29:08.110  [2024-12-13 19:13:39.826585] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016efdeb0
00:29:08.110  [2024-12-13 19:13:39.827578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:16605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.110  [2024-12-13 19:13:39.827617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:29:08.110  [2024-12-13 19:13:39.838128] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ee12d8
00:29:08.110  [2024-12-13 19:13:39.839679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:23794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.110  [2024-12-13 19:13:39.839719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:29:08.110  [2024-12-13 19:13:39.845160] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ee0a68
00:29:08.110  [2024-12-13 19:13:39.845942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:24001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.111  [2024-12-13 19:13:39.845988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:08.111  [2024-12-13 19:13:39.856863] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ee3498
00:29:08.111  [2024-12-13 19:13:39.857839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:2939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.111  [2024-12-13 19:13:39.857886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:08.111  [2024-12-13 19:13:39.865974] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ede8a8
00:29:08.111  [2024-12-13 19:13:39.866800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:4543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.111  [2024-12-13 19:13:39.866841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:29:08.111  [2024-12-13 19:13:39.875061] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016eeff18
00:29:08.111  [2024-12-13 19:13:39.875734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:7085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.111  [2024-12-13 19:13:39.875790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:29:08.111  [2024-12-13 19:13:39.883639] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ef2510
00:29:08.111  [2024-12-13 19:13:39.884418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:19136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.111  [2024-12-13 19:13:39.884457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:08.111  [2024-12-13 19:13:39.895167] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016eef270
00:29:08.111  [2024-12-13 19:13:39.896484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.111  [2024-12-13 19:13:39.896527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:29:08.111  [2024-12-13 19:13:39.904376] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016efb480
00:29:08.111  [2024-12-13 19:13:39.905705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:60 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.111  [2024-12-13 19:13:39.905772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:29:08.111  [2024-12-13 19:13:39.915015] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016eee190
00:29:08.111  [2024-12-13 19:13:39.916160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:14270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.111  [2024-12-13 19:13:39.916204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:29:08.111  [2024-12-13 19:13:39.925831] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016eff3c8
00:29:08.111  [2024-12-13 19:13:39.926550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.111  [2024-12-13 19:13:39.926609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:29:08.370  [2024-12-13 19:13:39.939682] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ef6cc8
00:29:08.370  [2024-12-13 19:13:39.941456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.370  [2024-12-13 19:13:39.941500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:29:08.370  [2024-12-13 19:13:39.947730] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ef6458
00:29:08.370  [2024-12-13 19:13:39.948679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:19055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.370  [2024-12-13 19:13:39.948722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:29:08.370  [2024-12-13 19:13:39.960856] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ef35f0
00:29:08.370  [2024-12-13 19:13:39.962400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:1389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.370  [2024-12-13 19:13:39.962445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:29:08.370  [2024-12-13 19:13:39.970806] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016eeee38
00:29:08.370  [2024-12-13 19:13:39.972116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.370  [2024-12-13 19:13:39.972159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:29:08.370  [2024-12-13 19:13:39.980787] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ee38d0
00:29:08.370  [2024-12-13 19:13:39.982011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:17589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.370  [2024-12-13 19:13:39.982074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:29:08.370  [2024-12-13 19:13:39.993022] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016efdeb0
00:29:08.370  [2024-12-13 19:13:39.994866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.370  [2024-12-13 19:13:39.994909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:29:08.370  [2024-12-13 19:13:40.000808] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016eed920
00:29:08.370  [2024-12-13 19:13:40.001883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:17262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.370  [2024-12-13 19:13:40.001935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:29:08.370  [2024-12-13 19:13:40.014104] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ee4140
00:29:08.370  [2024-12-13 19:13:40.015693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:16627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.370  [2024-12-13 19:13:40.015734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:29:08.370  [2024-12-13 19:13:40.024579] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ede470
00:29:08.370  [2024-12-13 19:13:40.026016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:18763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.370  [2024-12-13 19:13:40.026098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:08.370  [2024-12-13 19:13:40.035486] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ef2510
00:29:08.370  [2024-12-13 19:13:40.036753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:14090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.370  [2024-12-13 19:13:40.036795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:29:08.370  [2024-12-13 19:13:40.045367] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ef7da8
00:29:08.370  [2024-12-13 19:13:40.046499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:11495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.370  [2024-12-13 19:13:40.046548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:29:08.370  [2024-12-13 19:13:40.055922] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016efa7d8
00:29:08.370  [2024-12-13 19:13:40.056947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.370  [2024-12-13 19:13:40.056990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:29:08.370  [2024-12-13 19:13:40.068664] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ee8088
00:29:08.370  [2024-12-13 19:13:40.070291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.370  [2024-12-13 19:13:40.070339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:29:08.370  [2024-12-13 19:13:40.076516] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ede470
00:29:08.370  [2024-12-13 19:13:40.077335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:17493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.370  [2024-12-13 19:13:40.077393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:29:08.370  [2024-12-13 19:13:40.088906] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ef0350
00:29:08.370  [2024-12-13 19:13:40.090417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:10093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.370  [2024-12-13 19:13:40.090463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:29:08.370  [2024-12-13 19:13:40.099387] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016eeea00
00:29:08.370  [2024-12-13 19:13:40.100215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:7208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.371  [2024-12-13 19:13:40.100299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:29:08.371  [2024-12-13 19:13:40.109169] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ef96f8
00:29:08.371  [2024-12-13 19:13:40.109874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:17985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.371  [2024-12-13 19:13:40.109925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:29:08.371  [2024-12-13 19:13:40.119811] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ee4de8
00:29:08.371  [2024-12-13 19:13:40.121036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:24348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.371  [2024-12-13 19:13:40.121101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:29:08.371  [2024-12-13 19:13:40.133962] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016eeaef0
00:29:08.371  [2024-12-13 19:13:40.135835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:3334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.371  [2024-12-13 19:13:40.135883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:08.371  [2024-12-13 19:13:40.142625] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ee9e10
00:29:08.371  [2024-12-13 19:13:40.143512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:17965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.371  [2024-12-13 19:13:40.143587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:08.371  [2024-12-13 19:13:40.156293] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016efb480
00:29:08.371  [2024-12-13 19:13:40.157757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.371  [2024-12-13 19:13:40.157800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:29:08.371  [2024-12-13 19:13:40.166227] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016eecc78
00:29:08.371  [2024-12-13 19:13:40.167493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:5158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.371  [2024-12-13 19:13:40.167535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:29:08.371  [2024-12-13 19:13:40.176443] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ef3e60
00:29:08.371  [2024-12-13 19:13:40.177642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.371  [2024-12-13 19:13:40.177683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:29:08.371  [2024-12-13 19:13:40.188705] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ee2c28
00:29:08.371  [2024-12-13 19:13:40.190533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:15828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.371  [2024-12-13 19:13:40.190579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:29:08.631      25221.00 IOPS,    98.52 MiB/s
00:29:08.631  [2024-12-13 19:13:40.197980] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ef4b08
00:29:08.631  [2024-12-13 19:13:40.199166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:13606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.631  [2024-12-13 19:13:40.199207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:29:08.631  [2024-12-13 19:13:40.210758] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ee5658
00:29:08.631  [2024-12-13 19:13:40.212551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:5424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.631  [2024-12-13 19:13:40.212595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:29:08.631  [2024-12-13 19:13:40.218542] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016efb480
00:29:08.631  [2024-12-13 19:13:40.219486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:1976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.631  [2024-12-13 19:13:40.219529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:29:08.631  [2024-12-13 19:13:40.230733] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ef9f68
00:29:08.631  [2024-12-13 19:13:40.232132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:16273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.631  [2024-12-13 19:13:40.232172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:29:08.631  [2024-12-13 19:13:40.240111] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ef20d8
00:29:08.631  [2024-12-13 19:13:40.241436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:10378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.631  [2024-12-13 19:13:40.241479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:29:08.631  [2024-12-13 19:13:40.249570] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ef6cc8
00:29:08.631  [2024-12-13 19:13:40.250792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:8928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.631  [2024-12-13 19:13:40.250834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:29:08.631  [2024-12-13 19:13:40.261460] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ef2948
00:29:08.631  [2024-12-13 19:13:40.263204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:13580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.631  [2024-12-13 19:13:40.263266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:08.631  [2024-12-13 19:13:40.268647] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016efeb58
00:29:08.631  [2024-12-13 19:13:40.269605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:24004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.631  [2024-12-13 19:13:40.269656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:29:08.631  [2024-12-13 19:13:40.280306] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ef4298
00:29:08.631  [2024-12-13 19:13:40.281856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:11045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.631  [2024-12-13 19:13:40.281904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:29:08.631  [2024-12-13 19:13:40.289450] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ee49b0
00:29:08.631  [2024-12-13 19:13:40.290865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:9938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.631  [2024-12-13 19:13:40.290912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:29:08.631  [2024-12-13 19:13:40.299012] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ee7818
00:29:08.631  [2024-12-13 19:13:40.300322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.631  [2024-12-13 19:13:40.300357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:29:08.631  [2024-12-13 19:13:40.308298] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ef2948
00:29:08.631  [2024-12-13 19:13:40.309396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:4799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.631  [2024-12-13 19:13:40.309435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:29:08.631  [2024-12-13 19:13:40.317777] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ef8e88
00:29:08.631  [2024-12-13 19:13:40.318814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:20791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.631  [2024-12-13 19:13:40.318853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:29:08.631  [2024-12-13 19:13:40.329365] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016eea680
00:29:08.631  [2024-12-13 19:13:40.330977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.631  [2024-12-13 19:13:40.331021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:29:08.631  [2024-12-13 19:13:40.336495] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ef1868
00:29:08.631  [2024-12-13 19:13:40.337281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.631  [2024-12-13 19:13:40.337333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:29:08.631  [2024-12-13 19:13:40.348132] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016eebb98
00:29:08.631  [2024-12-13 19:13:40.349473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:3778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.631  [2024-12-13 19:13:40.349516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:29:08.631  [2024-12-13 19:13:40.357288] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ee5658
00:29:08.631  [2024-12-13 19:13:40.358509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:1133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.631  [2024-12-13 19:13:40.358553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:29:08.631  [2024-12-13 19:13:40.366948] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ef35f0
00:29:08.631  [2024-12-13 19:13:40.368069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:9359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.631  [2024-12-13 19:13:40.368109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:29:08.631  [2024-12-13 19:13:40.378708] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ee3d08
00:29:08.631  [2024-12-13 19:13:40.380348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:17281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.632  [2024-12-13 19:13:40.380389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:29:08.632  [2024-12-13 19:13:40.385843] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ee9168
00:29:08.632  [2024-12-13 19:13:40.386715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:15874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.632  [2024-12-13 19:13:40.386755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:29:08.632  [2024-12-13 19:13:40.397534] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016eecc78
00:29:08.632  [2024-12-13 19:13:40.398963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:7650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.632  [2024-12-13 19:13:40.399005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:29:08.632  [2024-12-13 19:13:40.406341] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016efb480
00:29:08.632  [2024-12-13 19:13:40.407332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.632  [2024-12-13 19:13:40.407393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:29:08.632  [2024-12-13 19:13:40.417093] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ee0a68
00:29:08.632  [2024-12-13 19:13:40.418626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:18426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.632  [2024-12-13 19:13:40.418682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:29:08.632  [2024-12-13 19:13:40.426053] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ef3e60
00:29:08.632  [2024-12-13 19:13:40.426937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:9885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.632  [2024-12-13 19:13:40.426977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:29:08.632  [2024-12-13 19:13:40.437989] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016eeaef0
00:29:08.632  [2024-12-13 19:13:40.439726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:23591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.632  [2024-12-13 19:13:40.439766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:29:08.632  [2024-12-13 19:13:40.445375] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ee3498
00:29:08.632  [2024-12-13 19:13:40.446330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:17906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.632  [2024-12-13 19:13:40.446373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:29:08.891  [2024-12-13 19:13:40.457138] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ee0ea0
00:29:08.891  [2024-12-13 19:13:40.458661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:24433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.891  [2024-12-13 19:13:40.458717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:29:08.891  [2024-12-13 19:13:40.466490] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ef92c0
00:29:08.892  [2024-12-13 19:13:40.467836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:11942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.892  [2024-12-13 19:13:40.467877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:29:08.892  [2024-12-13 19:13:40.476108] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016eefae0
00:29:08.892  [2024-12-13 19:13:40.477315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:20791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.892  [2024-12-13 19:13:40.477378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:29:08.892  [2024-12-13 19:13:40.487791] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ef0ff8
00:29:08.892  [2024-12-13 19:13:40.489479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:5623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.892  [2024-12-13 19:13:40.489518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:29:08.892  [2024-12-13 19:13:40.494870] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016edf988
00:29:08.892  [2024-12-13 19:13:40.495844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:25053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.892  [2024-12-13 19:13:40.495882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:29:08.892  [2024-12-13 19:13:40.506509] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016eff3c8
00:29:08.892  [2024-12-13 19:13:40.508017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:15903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.892  [2024-12-13 19:13:40.508057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:29:08.892  [2024-12-13 19:13:40.515740] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016eed0b0
00:29:08.892  [2024-12-13 19:13:40.517141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.892  [2024-12-13 19:13:40.517182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:29:08.892  [2024-12-13 19:13:40.525765] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ee6fa8
00:29:08.892  [2024-12-13 19:13:40.527006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:6535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.892  [2024-12-13 19:13:40.527045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:29:08.892  [2024-12-13 19:13:40.535022] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ef0788
00:29:08.892  [2024-12-13 19:13:40.536097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.892  [2024-12-13 19:13:40.536138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:29:08.892  [2024-12-13 19:13:40.544434] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016efc560
00:29:08.892  [2024-12-13 19:13:40.545443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:12536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.892  [2024-12-13 19:13:40.545482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:29:08.892  [2024-12-13 19:13:40.556058] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ee5a90
00:29:08.892  [2024-12-13 19:13:40.557615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:23775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.892  [2024-12-13 19:13:40.557664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:29:08.892  [2024-12-13 19:13:40.563230] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ee1b48
00:29:08.892  [2024-12-13 19:13:40.564013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:19659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.892  [2024-12-13 19:13:40.564053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:29:08.892  [2024-12-13 19:13:40.575026] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ef6458
00:29:08.892  [2024-12-13 19:13:40.576329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:22424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.892  [2024-12-13 19:13:40.576369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:29:08.892  [2024-12-13 19:13:40.584144] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ee8d30
00:29:08.892  [2024-12-13 19:13:40.585304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:4372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.892  [2024-12-13 19:13:40.585370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:29:08.892  [2024-12-13 19:13:40.593590] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ee01f8
00:29:08.892  [2024-12-13 19:13:40.594743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:9892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.892  [2024-12-13 19:13:40.594784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:08.892  [2024-12-13 19:13:40.605312] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ee6b70
00:29:08.892  [2024-12-13 19:13:40.606952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.892  [2024-12-13 19:13:40.606996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:29:08.892  [2024-12-13 19:13:40.614921] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ef7100
00:29:08.892  [2024-12-13 19:13:40.616432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:21762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.892  [2024-12-13 19:13:40.616473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:29:08.892  [2024-12-13 19:13:40.622167] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016efd640
00:29:08.892  [2024-12-13 19:13:40.623008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.892  [2024-12-13 19:13:40.623048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:29:08.892  [2024-12-13 19:13:40.633848] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ef9f68
00:29:08.892  [2024-12-13 19:13:40.635202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:4489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.892  [2024-12-13 19:13:40.635252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:29:08.892  [2024-12-13 19:13:40.642977] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ede038
00:29:08.892  [2024-12-13 19:13:40.644179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:6594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.892  [2024-12-13 19:13:40.644245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:29:08.892  [2024-12-13 19:13:40.652576] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016efa7d8
00:29:08.892  [2024-12-13 19:13:40.653711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:1431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.892  [2024-12-13 19:13:40.653777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:29:08.892  [2024-12-13 19:13:40.664176] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ef8e88
00:29:08.892  [2024-12-13 19:13:40.665970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:23113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.892  [2024-12-13 19:13:40.666047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:29:08.892  [2024-12-13 19:13:40.671498] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ee84c0
00:29:08.892  [2024-12-13 19:13:40.672396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:5822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.892  [2024-12-13 19:13:40.672435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:29:08.892  [2024-12-13 19:13:40.683274] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ee6b70
00:29:08.892  [2024-12-13 19:13:40.684709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:24697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.892  [2024-12-13 19:13:40.684748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:29:08.892  [2024-12-13 19:13:40.692444] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016efdeb0
00:29:08.892  [2024-12-13 19:13:40.693722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.892  [2024-12-13 19:13:40.693778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:29:08.892  [2024-12-13 19:13:40.701937] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ef0788
00:29:08.892  [2024-12-13 19:13:40.703160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.892  [2024-12-13 19:13:40.703199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:29:08.892  [2024-12-13 19:13:40.713583] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ef35f0
00:29:09.151  [2024-12-13 19:13:40.715353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:23264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:09.151  [2024-12-13 19:13:40.715396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:29:09.151  [2024-12-13 19:13:40.720820] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ef7970
00:29:09.151  [2024-12-13 19:13:40.721836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:16619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:09.151  [2024-12-13 19:13:40.721879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:29:09.151  [2024-12-13 19:13:40.732521] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ee0a68
00:29:09.151  [2024-12-13 19:13:40.734072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:13126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:09.151  [2024-12-13 19:13:40.734117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:29:09.151  [2024-12-13 19:13:40.741807] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ee5220
00:29:09.151  [2024-12-13 19:13:40.743199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:09.151  [2024-12-13 19:13:40.743268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:29:09.151  [2024-12-13 19:13:40.751333] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ee8d30
00:29:09.151  [2024-12-13 19:13:40.752603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:2044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:09.151  [2024-12-13 19:13:40.752643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:29:09.152  [2024-12-13 19:13:40.760551] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ef2d80
00:29:09.152  [2024-12-13 19:13:40.761659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:17391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:09.152  [2024-12-13 19:13:40.761699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:29:09.152  [2024-12-13 19:13:40.770030] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ef57b0
00:29:09.152  [2024-12-13 19:13:40.771074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:09.152  [2024-12-13 19:13:40.771115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:29:09.152  [2024-12-13 19:13:40.781793] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016eec408
00:29:09.152  [2024-12-13 19:13:40.783360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:19960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:09.152  [2024-12-13 19:13:40.783402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:29:09.152  [2024-12-13 19:13:40.788884] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016efdeb0
00:29:09.152  [2024-12-13 19:13:40.789683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:18320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:09.152  [2024-12-13 19:13:40.789745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:29:09.152  [2024-12-13 19:13:40.800660] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ee3498
00:29:09.152  [2024-12-13 19:13:40.802009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:12623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:09.152  [2024-12-13 19:13:40.802069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:09.152  [2024-12-13 19:13:40.809859] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016efd640
00:29:09.152  [2024-12-13 19:13:40.811018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:22288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:09.152  [2024-12-13 19:13:40.811058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:09.152  [2024-12-13 19:13:40.819405] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016edf988
00:29:09.152  [2024-12-13 19:13:40.820506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:09.152  [2024-12-13 19:13:40.820544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:29:09.152  [2024-12-13 19:13:40.831025] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ede038
00:29:09.152  [2024-12-13 19:13:40.832661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:5256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:09.152  [2024-12-13 19:13:40.832701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:29:09.152  [2024-12-13 19:13:40.838143] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ede038
00:29:09.152  [2024-12-13 19:13:40.839001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:25030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:09.152  [2024-12-13 19:13:40.839041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:29:09.152  [2024-12-13 19:13:40.849688] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016edf988
00:29:09.152  [2024-12-13 19:13:40.851156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:09.152  [2024-12-13 19:13:40.851195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:29:09.152  [2024-12-13 19:13:40.858949] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ee84c0
00:29:09.152  [2024-12-13 19:13:40.860245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:9719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:09.152  [2024-12-13 19:13:40.860301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:29:09.152  [2024-12-13 19:13:40.868610] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ee3498
00:29:09.152  [2024-12-13 19:13:40.869783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:24900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:09.152  [2024-12-13 19:13:40.869827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:29:09.152  [2024-12-13 19:13:40.880264] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016efdeb0
00:29:09.152  [2024-12-13 19:13:40.881963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:23804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:09.152  [2024-12-13 19:13:40.882009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:29:09.152  [2024-12-13 19:13:40.887481] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016eec408
00:29:09.152  [2024-12-13 19:13:40.888426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:1635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:09.152  [2024-12-13 19:13:40.888464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:29:09.152  [2024-12-13 19:13:40.899136] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ef57b0
00:29:09.152  [2024-12-13 19:13:40.900646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:2140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:09.152  [2024-12-13 19:13:40.900688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:29:09.152  [2024-12-13 19:13:40.908351] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ef7970
00:29:09.152  [2024-12-13 19:13:40.909651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:9616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:09.152  [2024-12-13 19:13:40.909691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:29:09.152  [2024-12-13 19:13:40.917816] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ee8d30
00:29:09.152  [2024-12-13 19:13:40.919032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:8076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:09.152  [2024-12-13 19:13:40.919072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:29:09.152  [2024-12-13 19:13:40.929544] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ee5220
00:29:09.152  [2024-12-13 19:13:40.931422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:15803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:09.152  [2024-12-13 19:13:40.931468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:29:09.152  [2024-12-13 19:13:40.936949] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ee0a68
00:29:09.152  [2024-12-13 19:13:40.937970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:22049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:09.152  [2024-12-13 19:13:40.938016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:29:09.152  [2024-12-13 19:13:40.948539] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ef2d80
00:29:09.152  [2024-12-13 19:13:40.950094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:09.152  [2024-12-13 19:13:40.950139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:29:09.152  [2024-12-13 19:13:40.955762] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ef7100
00:29:09.152  [2024-12-13 19:13:40.956528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:11518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:09.152  [2024-12-13 19:13:40.956567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:29:09.152  [2024-12-13 19:13:40.967428] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ef0788
00:29:09.152  [2024-12-13 19:13:40.968689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:09.152  [2024-12-13 19:13:40.968730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:29:09.411  [2024-12-13 19:13:40.976528] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016efdeb0
00:29:09.411  [2024-12-13 19:13:40.977695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:16006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:09.411  [2024-12-13 19:13:40.977793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:29:09.411  [2024-12-13 19:13:40.986232] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ee6b70
00:29:09.411  [2024-12-13 19:13:40.987274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:09.411  [2024-12-13 19:13:40.987326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:29:09.411  [2024-12-13 19:13:40.997888] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016efd640
00:29:09.411  [2024-12-13 19:13:40.999444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:12748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:09.411  [2024-12-13 19:13:40.999483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:09.411  [2024-12-13 19:13:41.004956] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ee7c50
00:29:09.411  [2024-12-13 19:13:41.005815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:12244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:09.411  [2024-12-13 19:13:41.005859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:09.411  [2024-12-13 19:13:41.016760] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016efa7d8
00:29:09.411  [2024-12-13 19:13:41.018194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:22719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:09.411  [2024-12-13 19:13:41.018250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:29:09.411  [2024-12-13 19:13:41.026696] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ede038
00:29:09.411  [2024-12-13 19:13:41.027786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:5485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:09.411  [2024-12-13 19:13:41.027825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:29:09.411  [2024-12-13 19:13:41.036537] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ee9e10
00:29:09.411  [2024-12-13 19:13:41.037938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:20832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:09.411  [2024-12-13 19:13:41.037984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:29:09.411  [2024-12-13 19:13:41.045784] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016eeff18
00:29:09.411  [2024-12-13 19:13:41.046991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:11344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:09.411  [2024-12-13 19:13:41.047032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:29:09.411  [2024-12-13 19:13:41.055243] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ee9e10
00:29:09.411  [2024-12-13 19:13:41.056383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:24044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:09.411  [2024-12-13 19:13:41.056424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:29:09.411  [2024-12-13 19:13:41.066998] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016efcdd0
00:29:09.411  [2024-12-13 19:13:41.068764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:09.411  [2024-12-13 19:13:41.068801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:29:09.411  [2024-12-13 19:13:41.074231] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016efa7d8
00:29:09.411  [2024-12-13 19:13:41.075127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:16403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:09.411  [2024-12-13 19:13:41.075167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:29:09.411  [2024-12-13 19:13:41.086203] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ede8a8
00:29:09.411  [2024-12-13 19:13:41.087775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:11172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:09.411  [2024-12-13 19:13:41.087816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:29:09.411  [2024-12-13 19:13:41.097768] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ef96f8
00:29:09.411  [2024-12-13 19:13:41.099249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:23327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:09.412  [2024-12-13 19:13:41.099304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:29:09.412  [2024-12-13 19:13:41.108789] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ee9168
00:29:09.412  [2024-12-13 19:13:41.110212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:4663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:09.412  [2024-12-13 19:13:41.110280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:29:09.412  [2024-12-13 19:13:41.120733] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ee5220
00:29:09.412  [2024-12-13 19:13:41.122533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:09.412  [2024-12-13 19:13:41.122579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:09.412  [2024-12-13 19:13:41.127927] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ef0788
00:29:09.412  [2024-12-13 19:13:41.128874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:7363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:09.412  [2024-12-13 19:13:41.128913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:29:09.412  [2024-12-13 19:13:41.140141] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ef6890
00:29:09.412  [2024-12-13 19:13:41.141778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:5978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:09.412  [2024-12-13 19:13:41.141824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:29:09.412  [2024-12-13 19:13:41.150738] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016eecc78
00:29:09.412  [2024-12-13 19:13:41.152400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:15425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:09.412  [2024-12-13 19:13:41.152446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:29:09.412  [2024-12-13 19:13:41.162474] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ee0a68
00:29:09.412  [2024-12-13 19:13:41.163891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:10476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:09.412  [2024-12-13 19:13:41.163933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:29:09.412  [2024-12-13 19:13:41.172910] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016ee5220
00:29:09.412  [2024-12-13 19:13:41.174196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:5908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:09.412  [2024-12-13 19:13:41.174270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:29:09.412  [2024-12-13 19:13:41.183373] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22898e0) with pdu=0x200016eeea00
00:29:09.412  [2024-12-13 19:13:41.184435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:22925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:09.412  [2024-12-13 19:13:41.184478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:29:09.412      25401.50 IOPS,    99.22 MiB/s
00:29:09.412                                                                                                  Latency(us)
00:29:09.412  
[2024-12-13T19:13:41.236Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:29:09.412  Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:29:09.412  	 nvme0n1             :       2.00   25428.25      99.33       0.00     0.00    5028.47    2368.23   14417.92
00:29:09.412  
[2024-12-13T19:13:41.236Z]  ===================================================================================================================
00:29:09.412  
[2024-12-13T19:13:41.236Z]  Total                       :              25428.25      99.33       0.00     0.00    5028.47    2368.23   14417.92
00:29:09.412  {
00:29:09.412    "results": [
00:29:09.412      {
00:29:09.412        "job": "nvme0n1",
00:29:09.412        "core_mask": "0x2",
00:29:09.412        "workload": "randwrite",
00:29:09.412        "status": "finished",
00:29:09.412        "queue_depth": 128,
00:29:09.412        "io_size": 4096,
00:29:09.412        "runtime": 2.00293,
00:29:09.412        "iops": 25428.24761724074,
00:29:09.412        "mibps": 99.32909225484664,
00:29:09.412        "io_failed": 0,
00:29:09.412        "io_timeout": 0,
00:29:09.412        "avg_latency_us": 5028.468556924609,
00:29:09.412        "min_latency_us": 2368.232727272727,
00:29:09.412        "max_latency_us": 14417.92
00:29:09.412      }
00:29:09.412    ],
00:29:09.412    "core_count": 1
00:29:09.412  }
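Note on the JSON block above: it is the bdevperf per-job summary for the 4 KiB randwrite digest-error run. As a quick sanity check (an illustrative sketch, not part of host/digest.sh or the test output), the reported MiB/s follows directly from the reported IOPS and IO size:

    # Hypothetical check of the summary above; not part of the test scripts.
    iops = 25428.24761724074          # "iops" field from the results JSON
    io_size = 4096                    # "io_size" field, in bytes
    mib_per_s = iops * io_size / (1024 * 1024)
    print(f"{mib_per_s:.2f} MiB/s")   # ~99.33, matching the "mibps" field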
00:29:09.412    19:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:09.412    19:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:09.412    19:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:09.412  			| .driver_specific
00:29:09.412  			| .nvme_error
00:29:09.412  			| .status_code
00:29:09.412  			| .command_transient_transport_error'
00:29:09.412    19:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:09.979   19:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 199 > 0 ))
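Note on the check above: get_transient_errcount reads the controller's NVMe error counters over the bperf RPC socket (enabled earlier by bdev_nvme_set_options --nvme-error-stat) and filters them with the jq expression shown, and the (( 199 > 0 )) test confirms that the injected CRC32C corruption produced transient transport errors. A rough Python equivalent of that lookup (an illustrative sketch assuming the same rpc.py path and socket, not code from the test) would be:

    # Illustrative only: mirrors the jq filter used by get_transient_errcount above.
    import json, subprocess

    RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
    out = subprocess.check_output([RPC, "-s", "/var/tmp/bperf.sock",
                                   "bdev_get_iostat", "-b", "nvme0n1"])
    stat = json.loads(out)
    count = (stat["bdevs"][0]["driver_specific"]["nvme_error"]
             ["status_code"]["command_transient_transport_error"])
    assert count > 0          # the log above reports 199 such errors
    print(count)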
00:29:09.979   19:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 116444
00:29:09.979   19:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 116444 ']'
00:29:09.979   19:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 116444
00:29:09.979    19:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:29:09.979   19:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:09.979    19:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 116444
00:29:09.979   19:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:29:09.979   19:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:29:09.979  killing process with pid 116444
00:29:09.979   19:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 116444'
00:29:09.979   19:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 116444
00:29:09.979  Received shutdown signal, test time was about 2.000000 seconds
00:29:09.979  
00:29:09.979                                                                                                  Latency(us)
00:29:09.979  
[2024-12-13T19:13:41.803Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:29:09.979  
[2024-12-13T19:13:41.803Z]  ===================================================================================================================
00:29:09.979  
[2024-12-13T19:13:41.803Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:29:09.979   19:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 116444
00:29:09.979   19:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:29:09.979   19:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:09.979   19:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:29:09.979   19:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:29:09.979   19:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:29:09.979   19:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:29:09.979   19:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=116530
00:29:09.979   19:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 116530 /var/tmp/bperf.sock
00:29:09.979   19:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 116530 ']'
00:29:09.979   19:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:09.979   19:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:09.979  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:09.979   19:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:09.979   19:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:09.979   19:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:09.979  [2024-12-13 19:13:41.789396] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:29:09.979  I/O size of 131072 is greater than zero copy threshold (65536).
00:29:09.979  Zero copy mechanism will not be used.
00:29:09.979  [2024-12-13 19:13:41.789509] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116530 ]
00:29:10.238  [2024-12-13 19:13:41.930600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:10.238  [2024-12-13 19:13:41.971982] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:29:11.172   19:13:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:11.172   19:13:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:29:11.172   19:13:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:11.172   19:13:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:11.430   19:13:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:11.430   19:13:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:11.430   19:13:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:11.430   19:13:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:11.430   19:13:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:11.430   19:13:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:11.688  nvme0n1
00:29:11.688   19:13:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:29:11.688   19:13:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:11.688   19:13:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:11.688   19:13:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:11.688   19:13:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:11.688   19:13:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:11.688  I/O size of 131072 is greater than zero copy threshold (65536).
00:29:11.688  Zero copy mechanism will not be used.
00:29:11.688  Running I/O for 2 seconds...
00:29:11.688  [2024-12-13 19:13:43.498173] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:11.688  [2024-12-13 19:13:43.498328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.688  [2024-12-13 19:13:43.498357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:11.688  [2024-12-13 19:13:43.503179] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:11.688  [2024-12-13 19:13:43.503468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.688  [2024-12-13 19:13:43.503491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:11.688  [2024-12-13 19:13:43.508110] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:11.688  [2024-12-13 19:13:43.508197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.688  [2024-12-13 19:13:43.508218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:11.949  [2024-12-13 19:13:43.512557] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:11.950  [2024-12-13 19:13:43.512676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.950  [2024-12-13 19:13:43.512697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:11.950  [2024-12-13 19:13:43.516942] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:11.950  [2024-12-13 19:13:43.517046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.950  [2024-12-13 19:13:43.517065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:11.950  [2024-12-13 19:13:43.521426] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:11.950  [2024-12-13 19:13:43.521532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.950  [2024-12-13 19:13:43.521553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:11.950  [2024-12-13 19:13:43.525887] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:11.950  [2024-12-13 19:13:43.525983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.950  [2024-12-13 19:13:43.526019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:11.950  [2024-12-13 19:13:43.530400] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:11.950  [2024-12-13 19:13:43.530504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.950  [2024-12-13 19:13:43.530524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:11.950  [2024-12-13 19:13:43.534748] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:11.950  [2024-12-13 19:13:43.534828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.950  [2024-12-13 19:13:43.534847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:11.950  [2024-12-13 19:13:43.539202] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:11.950  [2024-12-13 19:13:43.539349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.950  [2024-12-13 19:13:43.539370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:11.950  [2024-12-13 19:13:43.543652] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:11.950  [2024-12-13 19:13:43.543731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.950  [2024-12-13 19:13:43.543751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:11.950  [2024-12-13 19:13:43.547964] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:11.950  [2024-12-13 19:13:43.548066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.950  [2024-12-13 19:13:43.548086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:11.950  [2024-12-13 19:13:43.552390] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:11.950  [2024-12-13 19:13:43.552492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.950  [2024-12-13 19:13:43.552514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:11.950  [2024-12-13 19:13:43.556730] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:11.950  [2024-12-13 19:13:43.556833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.950  [2024-12-13 19:13:43.556853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:11.950  [2024-12-13 19:13:43.561106] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:11.950  [2024-12-13 19:13:43.561209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.950  [2024-12-13 19:13:43.561229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:11.950  [2024-12-13 19:13:43.565525] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:11.950  [2024-12-13 19:13:43.565619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.950  [2024-12-13 19:13:43.565638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:11.950  [2024-12-13 19:13:43.569946] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:11.950  [2024-12-13 19:13:43.570209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.950  [2024-12-13 19:13:43.570230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:11.950  [2024-12-13 19:13:43.574699] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:11.950  [2024-12-13 19:13:43.574803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.950  [2024-12-13 19:13:43.574822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:11.950  [2024-12-13 19:13:43.579098] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:11.950  [2024-12-13 19:13:43.579197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.950  [2024-12-13 19:13:43.579217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:11.950  [2024-12-13 19:13:43.583514] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:11.950  [2024-12-13 19:13:43.583625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.950  [2024-12-13 19:13:43.583644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:11.950  [2024-12-13 19:13:43.587808] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:11.950  [2024-12-13 19:13:43.587905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.950  [2024-12-13 19:13:43.587924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:11.950  [2024-12-13 19:13:43.592245] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:11.950  [2024-12-13 19:13:43.592341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.950  [2024-12-13 19:13:43.592362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:11.950  [2024-12-13 19:13:43.596651] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:11.950  [2024-12-13 19:13:43.596730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.950  [2024-12-13 19:13:43.596755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:11.950  [2024-12-13 19:13:43.601070] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:11.950  [2024-12-13 19:13:43.601168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.950  [2024-12-13 19:13:43.601187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:11.950  [2024-12-13 19:13:43.605481] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:11.950  [2024-12-13 19:13:43.605573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.950  [2024-12-13 19:13:43.605593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:11.950  [2024-12-13 19:13:43.609859] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:11.950  [2024-12-13 19:13:43.610135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.950  [2024-12-13 19:13:43.610156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:11.950  [2024-12-13 19:13:43.614589] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:11.950  [2024-12-13 19:13:43.614690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.950  [2024-12-13 19:13:43.614710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:11.950  [2024-12-13 19:13:43.618974] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:11.950  [2024-12-13 19:13:43.619070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.950  [2024-12-13 19:13:43.619089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:11.950  [2024-12-13 19:13:43.623467] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:11.950  [2024-12-13 19:13:43.623570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.950  [2024-12-13 19:13:43.623590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:11.950  [2024-12-13 19:13:43.627779] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:11.951  [2024-12-13 19:13:43.627899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.951  [2024-12-13 19:13:43.627918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:11.951  [2024-12-13 19:13:43.632212] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:11.951  [2024-12-13 19:13:43.632336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.951  [2024-12-13 19:13:43.632356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:11.951  [2024-12-13 19:13:43.636645] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:11.951  [2024-12-13 19:13:43.636739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.951  [2024-12-13 19:13:43.636759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:11.951  [2024-12-13 19:13:43.641015] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:11.951  [2024-12-13 19:13:43.641116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.951  [2024-12-13 19:13:43.641135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:11.951  [2024-12-13 19:13:43.645397] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:11.951  [2024-12-13 19:13:43.645498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.951  [2024-12-13 19:13:43.645518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:11.951  [2024-12-13 19:13:43.649656] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:11.951  [2024-12-13 19:13:43.649927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.951  [2024-12-13 19:13:43.649949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:11.951  [2024-12-13 19:13:43.654376] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:11.951  [2024-12-13 19:13:43.654479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.951  [2024-12-13 19:13:43.654498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:11.951  [2024-12-13 19:13:43.658669] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:11.951  [2024-12-13 19:13:43.658765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.951  [2024-12-13 19:13:43.658784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:11.951  [2024-12-13 19:13:43.663012] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:11.951  [2024-12-13 19:13:43.663108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.951  [2024-12-13 19:13:43.663128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:11.951  [2024-12-13 19:13:43.667422] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:11.951  [2024-12-13 19:13:43.667526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.951  [2024-12-13 19:13:43.667546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:11.951  [2024-12-13 19:13:43.671778] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:11.951  [2024-12-13 19:13:43.671873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.951  [2024-12-13 19:13:43.671892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:11.951  [2024-12-13 19:13:43.676125] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:11.951  [2024-12-13 19:13:43.676205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.951  [2024-12-13 19:13:43.676224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:11.951  [2024-12-13 19:13:43.680557] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:11.951  [2024-12-13 19:13:43.680685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.951  [2024-12-13 19:13:43.680705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:11.951  [2024-12-13 19:13:43.684861] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:11.951  [2024-12-13 19:13:43.684952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.951  [2024-12-13 19:13:43.684972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:11.951  [2024-12-13 19:13:43.689312] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:11.951  [2024-12-13 19:13:43.689409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.951  [2024-12-13 19:13:43.689429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:11.951  [2024-12-13 19:13:43.693584] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:11.951  [2024-12-13 19:13:43.693664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.951  [2024-12-13 19:13:43.693685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:11.951  [2024-12-13 19:13:43.697962] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:11.951  [2024-12-13 19:13:43.698236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.951  [2024-12-13 19:13:43.698276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:11.951  [2024-12-13 19:13:43.702616] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:11.951  [2024-12-13 19:13:43.702721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.951  [2024-12-13 19:13:43.702742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:11.951  [2024-12-13 19:13:43.706978] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:11.951  [2024-12-13 19:13:43.707076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.951  [2024-12-13 19:13:43.707096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:11.951  [2024-12-13 19:13:43.711432] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:11.951  [2024-12-13 19:13:43.711536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.951  [2024-12-13 19:13:43.711556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:11.951  [2024-12-13 19:13:43.715691] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:11.951  [2024-12-13 19:13:43.715789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.951  [2024-12-13 19:13:43.715809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:11.951  [2024-12-13 19:13:43.719994] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:11.951  [2024-12-13 19:13:43.720096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.951  [2024-12-13 19:13:43.720116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:11.951  [2024-12-13 19:13:43.724477] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:11.951  [2024-12-13 19:13:43.724577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.951  [2024-12-13 19:13:43.724597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:11.951  [2024-12-13 19:13:43.728821] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:11.951  [2024-12-13 19:13:43.728918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.951  [2024-12-13 19:13:43.728938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:11.951  [2024-12-13 19:13:43.733282] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:11.951  [2024-12-13 19:13:43.733380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.951  [2024-12-13 19:13:43.733399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:11.951  [2024-12-13 19:13:43.737582] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:11.951  [2024-12-13 19:13:43.737661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.951  [2024-12-13 19:13:43.737680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:11.951  [2024-12-13 19:13:43.741984] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:11.951  [2024-12-13 19:13:43.742310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.951  [2024-12-13 19:13:43.742331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:11.951  [2024-12-13 19:13:43.746538] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:11.951  [2024-12-13 19:13:43.746641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.952  [2024-12-13 19:13:43.746660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:11.952  [2024-12-13 19:13:43.750849] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:11.952  [2024-12-13 19:13:43.750946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.952  [2024-12-13 19:13:43.750966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:11.952  [2024-12-13 19:13:43.755241] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:11.952  [2024-12-13 19:13:43.755336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.952  [2024-12-13 19:13:43.755356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:11.952  [2024-12-13 19:13:43.759504] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:11.952  [2024-12-13 19:13:43.759607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.952  [2024-12-13 19:13:43.759626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:11.952  [2024-12-13 19:13:43.763823] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:11.952  [2024-12-13 19:13:43.763926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.952  [2024-12-13 19:13:43.763946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:11.952  [2024-12-13 19:13:43.768184] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:11.952  [2024-12-13 19:13:43.768356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.952  [2024-12-13 19:13:43.768378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:12.212  [2024-12-13 19:13:43.772495] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.212  [2024-12-13 19:13:43.772596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.212  [2024-12-13 19:13:43.772616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:12.212  [2024-12-13 19:13:43.776836] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.212  [2024-12-13 19:13:43.776935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.212  [2024-12-13 19:13:43.776954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:12.212  [2024-12-13 19:13:43.781218] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.212  [2024-12-13 19:13:43.781331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.212  [2024-12-13 19:13:43.781351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:12.212  [2024-12-13 19:13:43.785562] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.212  [2024-12-13 19:13:43.785669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.212  [2024-12-13 19:13:43.785689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:12.212  [2024-12-13 19:13:43.789915] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.212  [2024-12-13 19:13:43.790196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.212  [2024-12-13 19:13:43.790216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:12.212  [2024-12-13 19:13:43.794652] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.212  [2024-12-13 19:13:43.794748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.212  [2024-12-13 19:13:43.794767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:12.212  [2024-12-13 19:13:43.799019] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.212  [2024-12-13 19:13:43.799117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.212  [2024-12-13 19:13:43.799137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:12.212  [2024-12-13 19:13:43.803338] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.212  [2024-12-13 19:13:43.803440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.212  [2024-12-13 19:13:43.803460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:12.212  [2024-12-13 19:13:43.807613] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.212  [2024-12-13 19:13:43.807716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.212  [2024-12-13 19:13:43.807736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:12.212  [2024-12-13 19:13:43.812041] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.212  [2024-12-13 19:13:43.812137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.212  [2024-12-13 19:13:43.812175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:12.212  [2024-12-13 19:13:43.816457] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.212  [2024-12-13 19:13:43.816990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.212  [2024-12-13 19:13:43.817044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:12.212  [2024-12-13 19:13:43.820966] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.212  [2024-12-13 19:13:43.821423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.212  [2024-12-13 19:13:43.821477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:12.212  [2024-12-13 19:13:43.824974] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.212  [2024-12-13 19:13:43.825353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.212  [2024-12-13 19:13:43.825386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:12.212  [2024-12-13 19:13:43.829020] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.212  [2024-12-13 19:13:43.829235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.212  [2024-12-13 19:13:43.829276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:12.212  [2024-12-13 19:13:43.833034] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.212  [2024-12-13 19:13:43.833312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.212  [2024-12-13 19:13:43.833360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:12.212  [2024-12-13 19:13:43.837218] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.212  [2024-12-13 19:13:43.837672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.212  [2024-12-13 19:13:43.837751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:12.213  [2024-12-13 19:13:43.841622] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.213  [2024-12-13 19:13:43.841771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.213  [2024-12-13 19:13:43.841795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:12.213  [2024-12-13 19:13:43.845838] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.213  [2024-12-13 19:13:43.846132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.213  [2024-12-13 19:13:43.846155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:12.213  [2024-12-13 19:13:43.850312] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.213  [2024-12-13 19:13:43.850461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.213  [2024-12-13 19:13:43.850482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:12.213  [2024-12-13 19:13:43.854313] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.213  [2024-12-13 19:13:43.854444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.213  [2024-12-13 19:13:43.854465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:12.213  [2024-12-13 19:13:43.858294] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.213  [2024-12-13 19:13:43.858420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.213  [2024-12-13 19:13:43.858441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:12.213  [2024-12-13 19:13:43.862282] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.213  [2024-12-13 19:13:43.862448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.213  [2024-12-13 19:13:43.862468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:12.213  [2024-12-13 19:13:43.866225] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.213  [2024-12-13 19:13:43.866508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.213  [2024-12-13 19:13:43.866530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:12.213  [2024-12-13 19:13:43.870786] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.213  [2024-12-13 19:13:43.870944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.213  [2024-12-13 19:13:43.870964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:12.213  [2024-12-13 19:13:43.874769] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.213  [2024-12-13 19:13:43.874930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.213  [2024-12-13 19:13:43.874949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:12.213  [2024-12-13 19:13:43.878776] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.213  [2024-12-13 19:13:43.878942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.213  [2024-12-13 19:13:43.878969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:12.213  [2024-12-13 19:13:43.882840] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.213  [2024-12-13 19:13:43.883016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.213  [2024-12-13 19:13:43.883035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:12.213  [2024-12-13 19:13:43.886875] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.213  [2024-12-13 19:13:43.887044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.213  [2024-12-13 19:13:43.887064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:12.213  [2024-12-13 19:13:43.891011] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.213  [2024-12-13 19:13:43.891126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.213  [2024-12-13 19:13:43.891146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:12.213  [2024-12-13 19:13:43.895229] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.213  [2024-12-13 19:13:43.895372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.213  [2024-12-13 19:13:43.895393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:12.213  [2024-12-13 19:13:43.899211] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.213  [2024-12-13 19:13:43.899410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.213  [2024-12-13 19:13:43.899431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:12.213  [2024-12-13 19:13:43.903330] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.213  [2024-12-13 19:13:43.903492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.213  [2024-12-13 19:13:43.903513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:12.213  [2024-12-13 19:13:43.907261] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.213  [2024-12-13 19:13:43.907415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.213  [2024-12-13 19:13:43.907434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:12.213  [2024-12-13 19:13:43.911222] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.213  [2024-12-13 19:13:43.911402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.213  [2024-12-13 19:13:43.911423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:12.213  [2024-12-13 19:13:43.915289] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.213  [2024-12-13 19:13:43.915407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.213  [2024-12-13 19:13:43.915426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:12.213  [2024-12-13 19:13:43.919491] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.213  [2024-12-13 19:13:43.919608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.213  [2024-12-13 19:13:43.919632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:12.213  [2024-12-13 19:13:43.923583] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.213  [2024-12-13 19:13:43.923698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.213  [2024-12-13 19:13:43.923719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:12.213  [2024-12-13 19:13:43.927519] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.213  [2024-12-13 19:13:43.927657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.213  [2024-12-13 19:13:43.927677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:12.213  [2024-12-13 19:13:43.931508] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.213  [2024-12-13 19:13:43.931639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.213  [2024-12-13 19:13:43.931660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:12.213  [2024-12-13 19:13:43.935529] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.213  [2024-12-13 19:13:43.935667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.213  [2024-12-13 19:13:43.935687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:12.213  [2024-12-13 19:13:43.939512] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.213  [2024-12-13 19:13:43.939623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.213  [2024-12-13 19:13:43.939644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:12.213  [2024-12-13 19:13:43.943685] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.213  [2024-12-13 19:13:43.943802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.213  [2024-12-13 19:13:43.943822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:12.213  [2024-12-13 19:13:43.947811] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.213  [2024-12-13 19:13:43.947926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.213  [2024-12-13 19:13:43.947946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:12.213  [2024-12-13 19:13:43.951819] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.213  [2024-12-13 19:13:43.951939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.213  [2024-12-13 19:13:43.951960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:12.214  [2024-12-13 19:13:43.955898] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.214  [2024-12-13 19:13:43.956078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.214  [2024-12-13 19:13:43.956098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:12.214  [2024-12-13 19:13:43.959993] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.214  [2024-12-13 19:13:43.960146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.214  [2024-12-13 19:13:43.960166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:12.214  [2024-12-13 19:13:43.964016] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.214  [2024-12-13 19:13:43.964194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.214  [2024-12-13 19:13:43.964214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:12.214  [2024-12-13 19:13:43.968493] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.214  [2024-12-13 19:13:43.968738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.214  [2024-12-13 19:13:43.968759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:12.214  [2024-12-13 19:13:43.972750] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.214  [2024-12-13 19:13:43.972906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.214  [2024-12-13 19:13:43.972926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:12.214  [2024-12-13 19:13:43.976740] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.214  [2024-12-13 19:13:43.976853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.214  [2024-12-13 19:13:43.976872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:12.214  [2024-12-13 19:13:43.980778] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.214  [2024-12-13 19:13:43.980913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.214  [2024-12-13 19:13:43.980933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:12.214  [2024-12-13 19:13:43.984790] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.214  [2024-12-13 19:13:43.984957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.214  [2024-12-13 19:13:43.984977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:12.214  [2024-12-13 19:13:43.988830] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.214  [2024-12-13 19:13:43.988968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.214  [2024-12-13 19:13:43.988988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:12.214  [2024-12-13 19:13:43.992991] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.214  [2024-12-13 19:13:43.993125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.214  [2024-12-13 19:13:43.993145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:12.214  [2024-12-13 19:13:43.996991] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.214  [2024-12-13 19:13:43.997120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.214  [2024-12-13 19:13:43.997141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:12.214  [2024-12-13 19:13:44.001006] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.214  [2024-12-13 19:13:44.001163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.214  [2024-12-13 19:13:44.001183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:12.214  [2024-12-13 19:13:44.004988] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.214  [2024-12-13 19:13:44.005102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.214  [2024-12-13 19:13:44.005124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:12.214  [2024-12-13 19:13:44.009009] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.214  [2024-12-13 19:13:44.009165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.214  [2024-12-13 19:13:44.009184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:12.214  [2024-12-13 19:13:44.013052] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.214  [2024-12-13 19:13:44.013170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.214  [2024-12-13 19:13:44.013191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:12.214  [2024-12-13 19:13:44.017254] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.214  [2024-12-13 19:13:44.017431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.214  [2024-12-13 19:13:44.017452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:12.214  [2024-12-13 19:13:44.021142] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.214  [2024-12-13 19:13:44.021367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.214  [2024-12-13 19:13:44.021405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:12.214  [2024-12-13 19:13:44.025118] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.214  [2024-12-13 19:13:44.025264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.214  [2024-12-13 19:13:44.025301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:12.214  [2024-12-13 19:13:44.029105] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.214  [2024-12-13 19:13:44.029216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.214  [2024-12-13 19:13:44.029269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:12.474  [2024-12-13 19:13:44.033121] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.474  [2024-12-13 19:13:44.033303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.474  [2024-12-13 19:13:44.033326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:12.475  [2024-12-13 19:13:44.037280] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.475  [2024-12-13 19:13:44.037464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.475  [2024-12-13 19:13:44.037484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:12.475  [2024-12-13 19:13:44.041366] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.475  [2024-12-13 19:13:44.041521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.475  [2024-12-13 19:13:44.041542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:12.475  [2024-12-13 19:13:44.045275] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.475  [2024-12-13 19:13:44.045405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.475  [2024-12-13 19:13:44.045425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:12.475  [2024-12-13 19:13:44.049202] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.475  [2024-12-13 19:13:44.049390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.475  [2024-12-13 19:13:44.049411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:12.475  [2024-12-13 19:13:44.053198] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.475  [2024-12-13 19:13:44.053355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.475  [2024-12-13 19:13:44.053376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:12.475  [2024-12-13 19:13:44.057118] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.475  [2024-12-13 19:13:44.057279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.475  [2024-12-13 19:13:44.057299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:12.475  [2024-12-13 19:13:44.061124] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.475  [2024-12-13 19:13:44.061283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.475  [2024-12-13 19:13:44.061305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:12.475  [2024-12-13 19:13:44.065275] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.475  [2024-12-13 19:13:44.065416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.475  [2024-12-13 19:13:44.065439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:12.475  [2024-12-13 19:13:44.069265] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.475  [2024-12-13 19:13:44.069386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.475  [2024-12-13 19:13:44.069407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:12.475  [2024-12-13 19:13:44.073193] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.475  [2024-12-13 19:13:44.073346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.475  [2024-12-13 19:13:44.073384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:12.475  [2024-12-13 19:13:44.077258] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.475  [2024-12-13 19:13:44.077384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.475  [2024-12-13 19:13:44.077404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:12.475  [2024-12-13 19:13:44.081224] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.475  [2024-12-13 19:13:44.081426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.475  [2024-12-13 19:13:44.081446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:12.475  [2024-12-13 19:13:44.085115] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.475  [2024-12-13 19:13:44.085319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.475  [2024-12-13 19:13:44.085340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:12.475  [2024-12-13 19:13:44.089344] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.475  [2024-12-13 19:13:44.089467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.475  [2024-12-13 19:13:44.089488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:12.475  [2024-12-13 19:13:44.093264] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.475  [2024-12-13 19:13:44.093383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.475  [2024-12-13 19:13:44.093403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:12.475  [2024-12-13 19:13:44.097135] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.475  [2024-12-13 19:13:44.097258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.475  [2024-12-13 19:13:44.097307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:12.475  [2024-12-13 19:13:44.101062] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.475  [2024-12-13 19:13:44.101217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.475  [2024-12-13 19:13:44.101253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:12.475  [2024-12-13 19:13:44.104962] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.475  [2024-12-13 19:13:44.105077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.475  [2024-12-13 19:13:44.105096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:12.475  [2024-12-13 19:13:44.108858] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.475  [2024-12-13 19:13:44.108971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.475  [2024-12-13 19:13:44.108992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:12.475  [2024-12-13 19:13:44.112714] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.475  [2024-12-13 19:13:44.112862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.475  [2024-12-13 19:13:44.112882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:12.475  [2024-12-13 19:13:44.116588] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.475  [2024-12-13 19:13:44.116704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.475  [2024-12-13 19:13:44.116724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:12.475  [2024-12-13 19:13:44.120394] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.475  [2024-12-13 19:13:44.120577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.475  [2024-12-13 19:13:44.120597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:12.475  [2024-12-13 19:13:44.124235] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.475  [2024-12-13 19:13:44.124392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.475  [2024-12-13 19:13:44.124412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:12.475  [2024-12-13 19:13:44.128208] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.475  [2024-12-13 19:13:44.128353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.475  [2024-12-13 19:13:44.128388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:12.475  [2024-12-13 19:13:44.132174] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.475  [2024-12-13 19:13:44.132315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.475  [2024-12-13 19:13:44.132335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:12.475  [2024-12-13 19:13:44.136099] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.475  [2024-12-13 19:13:44.136211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.475  [2024-12-13 19:13:44.136230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:12.475  [2024-12-13 19:13:44.139972] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.475  [2024-12-13 19:13:44.140080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.475  [2024-12-13 19:13:44.140100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:12.475  [2024-12-13 19:13:44.143891] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.475  [2024-12-13 19:13:44.144003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.476  [2024-12-13 19:13:44.144023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:12.476  [2024-12-13 19:13:44.147836] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.476  [2024-12-13 19:13:44.147946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.476  [2024-12-13 19:13:44.147965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:12.476  [2024-12-13 19:13:44.151788] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.476  [2024-12-13 19:13:44.151912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.476  [2024-12-13 19:13:44.151931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:12.476  [2024-12-13 19:13:44.155606] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.476  [2024-12-13 19:13:44.155758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.476  [2024-12-13 19:13:44.155777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:12.476  [2024-12-13 19:13:44.159453] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.476  [2024-12-13 19:13:44.159608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.476  [2024-12-13 19:13:44.159628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:12.476  [2024-12-13 19:13:44.163267] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.476  [2024-12-13 19:13:44.163428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.476  [2024-12-13 19:13:44.163447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:12.476  [2024-12-13 19:13:44.167072] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.476  [2024-12-13 19:13:44.167225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.476  [2024-12-13 19:13:44.167274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:12.476  [2024-12-13 19:13:44.170975] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.476  [2024-12-13 19:13:44.171140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.476  [2024-12-13 19:13:44.171160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:12.476  [2024-12-13 19:13:44.174933] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.476  [2024-12-13 19:13:44.175094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.476  [2024-12-13 19:13:44.175114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:12.476  [2024-12-13 19:13:44.178858] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.476  [2024-12-13 19:13:44.178982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.476  [2024-12-13 19:13:44.179001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:12.476  [2024-12-13 19:13:44.182873] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.476  [2024-12-13 19:13:44.183035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.476  [2024-12-13 19:13:44.183054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:12.476  [2024-12-13 19:13:44.186833] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.476  [2024-12-13 19:13:44.186983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.476  [2024-12-13 19:13:44.187002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:12.476  [2024-12-13 19:13:44.190688] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.476  [2024-12-13 19:13:44.190848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.476  [2024-12-13 19:13:44.190868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:12.476  [2024-12-13 19:13:44.194571] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.476  [2024-12-13 19:13:44.194688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.476  [2024-12-13 19:13:44.194707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:12.476  [2024-12-13 19:13:44.198366] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.476  [2024-12-13 19:13:44.198527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.476  [2024-12-13 19:13:44.198546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:12.476  [2024-12-13 19:13:44.202163] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.476  [2024-12-13 19:13:44.202454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.476  [2024-12-13 19:13:44.202476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:12.476  [2024-12-13 19:13:44.206431] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.476  [2024-12-13 19:13:44.206581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.476  [2024-12-13 19:13:44.206617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:12.476  [2024-12-13 19:13:44.210714] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.476  [2024-12-13 19:13:44.210869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.476  [2024-12-13 19:13:44.210892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:12.476  [2024-12-13 19:13:44.214859] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.476  [2024-12-13 19:13:44.215098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.476  [2024-12-13 19:13:44.215135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:12.476  [2024-12-13 19:13:44.219319] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.476  [2024-12-13 19:13:44.219507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.476  [2024-12-13 19:13:44.219530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:12.476  [2024-12-13 19:13:44.223480] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.476  [2024-12-13 19:13:44.223709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.476  [2024-12-13 19:13:44.223730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:12.476  [2024-12-13 19:13:44.228017] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.476  [2024-12-13 19:13:44.228186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.476  [2024-12-13 19:13:44.228208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:12.476  [2024-12-13 19:13:44.232418] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.476  [2024-12-13 19:13:44.232651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.476  [2024-12-13 19:13:44.232674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:12.476  [2024-12-13 19:13:44.236704] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.476  [2024-12-13 19:13:44.236876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.476  [2024-12-13 19:13:44.236928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:12.476  [2024-12-13 19:13:44.240927] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.476  [2024-12-13 19:13:44.241143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.476  [2024-12-13 19:13:44.241165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:12.476  [2024-12-13 19:13:44.245140] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.476  [2024-12-13 19:13:44.245367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.476  [2024-12-13 19:13:44.245391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:12.476  [2024-12-13 19:13:44.249303] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.476  [2024-12-13 19:13:44.249491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.476  [2024-12-13 19:13:44.249512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:12.476  [2024-12-13 19:13:44.253310] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.476  [2024-12-13 19:13:44.253472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.476  [2024-12-13 19:13:44.253493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:12.476  [2024-12-13 19:13:44.257355] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.477  [2024-12-13 19:13:44.257477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.477  [2024-12-13 19:13:44.257514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:12.477  [2024-12-13 19:13:44.261377] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.477  [2024-12-13 19:13:44.261490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.477  [2024-12-13 19:13:44.261510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:12.477  [2024-12-13 19:13:44.265275] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.477  [2024-12-13 19:13:44.265392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.477  [2024-12-13 19:13:44.265427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:12.477  [2024-12-13 19:13:44.269196] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.477  [2024-12-13 19:13:44.269334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.477  [2024-12-13 19:13:44.269385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:12.477  [2024-12-13 19:13:44.273322] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.477  [2024-12-13 19:13:44.273440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.477  [2024-12-13 19:13:44.273459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:12.477  [2024-12-13 19:13:44.277145] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.477  [2024-12-13 19:13:44.277290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.477  [2024-12-13 19:13:44.277310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:12.477  [2024-12-13 19:13:44.281136] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.477  [2024-12-13 19:13:44.281252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.477  [2024-12-13 19:13:44.281272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:12.477  [2024-12-13 19:13:44.285056] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.477  [2024-12-13 19:13:44.285187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.477  [2024-12-13 19:13:44.285222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:12.477  [2024-12-13 19:13:44.289140] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.477  [2024-12-13 19:13:44.289287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.477  [2024-12-13 19:13:44.289308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:12.477  [2024-12-13 19:13:44.293093] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.477  [2024-12-13 19:13:44.293228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.477  [2024-12-13 19:13:44.293293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:12.737  [2024-12-13 19:13:44.296984] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.737  [2024-12-13 19:13:44.297123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.737  [2024-12-13 19:13:44.297143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:12.737  [2024-12-13 19:13:44.300926] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.737  [2024-12-13 19:13:44.301096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.737  [2024-12-13 19:13:44.301115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:12.737  [2024-12-13 19:13:44.304879] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.737  [2024-12-13 19:13:44.305055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.737  [2024-12-13 19:13:44.305075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:12.737  [2024-12-13 19:13:44.308933] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.737  [2024-12-13 19:13:44.309059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.737  [2024-12-13 19:13:44.309079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:12.737  [2024-12-13 19:13:44.312920] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.737  [2024-12-13 19:13:44.313058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.737  [2024-12-13 19:13:44.313078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:12.737  [2024-12-13 19:13:44.316890] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.737  [2024-12-13 19:13:44.317068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.737  [2024-12-13 19:13:44.317088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:12.737  [2024-12-13 19:13:44.320930] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.737  [2024-12-13 19:13:44.321124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.737  [2024-12-13 19:13:44.321145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:12.737  [2024-12-13 19:13:44.325006] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.737  [2024-12-13 19:13:44.325197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.737  [2024-12-13 19:13:44.325218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:12.737  [2024-12-13 19:13:44.328980] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.737  [2024-12-13 19:13:44.329107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.737  [2024-12-13 19:13:44.329127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:12.737  [2024-12-13 19:13:44.333030] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.737  [2024-12-13 19:13:44.333207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.737  [2024-12-13 19:13:44.333228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:12.737  [2024-12-13 19:13:44.337045] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.737  [2024-12-13 19:13:44.337181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.737  [2024-12-13 19:13:44.337201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:12.738  [2024-12-13 19:13:44.340970] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.738  [2024-12-13 19:13:44.341100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.738  [2024-12-13 19:13:44.341119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:12.738  [2024-12-13 19:13:44.344884] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.738  [2024-12-13 19:13:44.345085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.738  [2024-12-13 19:13:44.345106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:12.738  [2024-12-13 19:13:44.348916] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.738  [2024-12-13 19:13:44.349036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.738  [2024-12-13 19:13:44.349056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:12.738  [2024-12-13 19:13:44.352925] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.738  [2024-12-13 19:13:44.353024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.738  [2024-12-13 19:13:44.353043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:12.738  [2024-12-13 19:13:44.356903] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.738  [2024-12-13 19:13:44.357073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.738  [2024-12-13 19:13:44.357108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:12.738  [2024-12-13 19:13:44.360939] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.738  [2024-12-13 19:13:44.361078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.738  [2024-12-13 19:13:44.361115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:12.738  [2024-12-13 19:13:44.364964] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.738  [2024-12-13 19:13:44.365079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.738  [2024-12-13 19:13:44.365099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:12.738  [2024-12-13 19:13:44.368824] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.738  [2024-12-13 19:13:44.369002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.738  [2024-12-13 19:13:44.369021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:12.738  [2024-12-13 19:13:44.372862] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.738  [2024-12-13 19:13:44.373030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.738  [2024-12-13 19:13:44.373065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:12.738  [2024-12-13 19:13:44.376927] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.738  [2024-12-13 19:13:44.377099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.738  [2024-12-13 19:13:44.377135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:12.738  [2024-12-13 19:13:44.380916] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.738  [2024-12-13 19:13:44.381124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.738  [2024-12-13 19:13:44.381146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:12.738  [2024-12-13 19:13:44.384922] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.738  [2024-12-13 19:13:44.385116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.738  [2024-12-13 19:13:44.385136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:12.738  [2024-12-13 19:13:44.388975] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.738  [2024-12-13 19:13:44.389100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.738  [2024-12-13 19:13:44.389119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:12.738  [2024-12-13 19:13:44.392968] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.738  [2024-12-13 19:13:44.393136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.738  [2024-12-13 19:13:44.393171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:12.738  [2024-12-13 19:13:44.396852] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.738  [2024-12-13 19:13:44.397025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.738  [2024-12-13 19:13:44.397044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:12.738  [2024-12-13 19:13:44.400846] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.738  [2024-12-13 19:13:44.400988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.738  [2024-12-13 19:13:44.401008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:12.738  [2024-12-13 19:13:44.404796] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.738  [2024-12-13 19:13:44.404933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.738  [2024-12-13 19:13:44.404952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:12.738  [2024-12-13 19:13:44.408722] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.738  [2024-12-13 19:13:44.408889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.738  [2024-12-13 19:13:44.408909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:12.738  [2024-12-13 19:13:44.412668] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.738  [2024-12-13 19:13:44.412846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.738  [2024-12-13 19:13:44.412866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:12.738  [2024-12-13 19:13:44.416585] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.738  [2024-12-13 19:13:44.416776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.738  [2024-12-13 19:13:44.416795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:12.738  [2024-12-13 19:13:44.420585] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.738  [2024-12-13 19:13:44.420741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.738  [2024-12-13 19:13:44.420762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:12.738  [2024-12-13 19:13:44.424659] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.738  [2024-12-13 19:13:44.424814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.738  [2024-12-13 19:13:44.424835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:12.738  [2024-12-13 19:13:44.428602] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.738  [2024-12-13 19:13:44.428775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.738  [2024-12-13 19:13:44.428794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:12.738  [2024-12-13 19:13:44.432539] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.738  [2024-12-13 19:13:44.432698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.738  [2024-12-13 19:13:44.432721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:12.738  [2024-12-13 19:13:44.436544] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.738  [2024-12-13 19:13:44.436718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.738  [2024-12-13 19:13:44.436738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:12.738  [2024-12-13 19:13:44.440555] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.738  [2024-12-13 19:13:44.440684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.738  [2024-12-13 19:13:44.440704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:12.738  [2024-12-13 19:13:44.444630] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.738  [2024-12-13 19:13:44.444770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.738  [2024-12-13 19:13:44.444805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:12.738  [2024-12-13 19:13:44.448659] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.738  [2024-12-13 19:13:44.448790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.738  [2024-12-13 19:13:44.448810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:12.739  [2024-12-13 19:13:44.452648] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.739  [2024-12-13 19:13:44.452793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.739  [2024-12-13 19:13:44.452813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:12.739  [2024-12-13 19:13:44.456678] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.739  [2024-12-13 19:13:44.456808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.739  [2024-12-13 19:13:44.456828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:12.739  [2024-12-13 19:13:44.460675] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.739  [2024-12-13 19:13:44.460831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.739  [2024-12-13 19:13:44.460851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:12.739  [2024-12-13 19:13:44.464672] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.739  [2024-12-13 19:13:44.464825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.739  [2024-12-13 19:13:44.464846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:12.739  [2024-12-13 19:13:44.468616] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.739  [2024-12-13 19:13:44.468752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.739  [2024-12-13 19:13:44.468772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:12.739  [2024-12-13 19:13:44.472593] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.739  [2024-12-13 19:13:44.472730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.739  [2024-12-13 19:13:44.472749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:12.739  [2024-12-13 19:13:44.476501] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.739  [2024-12-13 19:13:44.476628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.739  [2024-12-13 19:13:44.476648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:12.739  [2024-12-13 19:13:44.480414] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.739  [2024-12-13 19:13:44.480558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.739  [2024-12-13 19:13:44.480578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:12.739  [2024-12-13 19:13:44.484357] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.739  [2024-12-13 19:13:44.484529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.739  [2024-12-13 19:13:44.484548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:12.739  [2024-12-13 19:13:44.488252] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.739  [2024-12-13 19:13:44.488385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.739  [2024-12-13 19:13:44.488404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:12.739  [2024-12-13 19:13:44.492142] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.739  [2024-12-13 19:13:44.492290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.739  [2024-12-13 19:13:44.492310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:12.739  [2024-12-13 19:13:44.496123] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.739  [2024-12-13 19:13:44.496244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.739  [2024-12-13 19:13:44.496276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:12.739       7475.00 IOPS,   934.38 MiB/s
00:29:12.739  [2024-12-13 19:13:44.501044] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.739  [2024-12-13 19:13:44.501204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.739  [2024-12-13 19:13:44.501229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:12.739  [2024-12-13 19:13:44.505031] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.739  [2024-12-13 19:13:44.505154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.739  [2024-12-13 19:13:44.505175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:12.739  [2024-12-13 19:13:44.509114] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.739  [2024-12-13 19:13:44.509249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.739  [2024-12-13 19:13:44.509289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:12.739  [2024-12-13 19:13:44.513137] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.739  [2024-12-13 19:13:44.513289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.739  [2024-12-13 19:13:44.513309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:12.739  [2024-12-13 19:13:44.517166] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.739  [2024-12-13 19:13:44.517337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.739  [2024-12-13 19:13:44.517357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:12.739  [2024-12-13 19:13:44.521117] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.739  [2024-12-13 19:13:44.521292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.739  [2024-12-13 19:13:44.521313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:12.739  [2024-12-13 19:13:44.525102] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.739  [2024-12-13 19:13:44.525282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.739  [2024-12-13 19:13:44.525304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:12.739  [2024-12-13 19:13:44.529129] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.739  [2024-12-13 19:13:44.529289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.739  [2024-12-13 19:13:44.529310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:12.739  [2024-12-13 19:13:44.533082] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.739  [2024-12-13 19:13:44.533214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.739  [2024-12-13 19:13:44.533251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:12.739  [2024-12-13 19:13:44.537145] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.739  [2024-12-13 19:13:44.537352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.739  [2024-12-13 19:13:44.537379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:12.739  [2024-12-13 19:13:44.541041] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.739  [2024-12-13 19:13:44.541171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.739  [2024-12-13 19:13:44.541191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:12.739  [2024-12-13 19:13:44.545061] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.739  [2024-12-13 19:13:44.545190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.739  [2024-12-13 19:13:44.545211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:12.739  [2024-12-13 19:13:44.549002] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.739  [2024-12-13 19:13:44.549137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.739  [2024-12-13 19:13:44.549157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:12.739  [2024-12-13 19:13:44.552984] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.739  [2024-12-13 19:13:44.553136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.739  [2024-12-13 19:13:44.553157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:12.739  [2024-12-13 19:13:44.556991] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:12.739  [2024-12-13 19:13:44.557137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.739  [2024-12-13 19:13:44.557157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:13.000  [2024-12-13 19:13:44.560887] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.000  [2024-12-13 19:13:44.561024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.000  [2024-12-13 19:13:44.561044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:13.000  [2024-12-13 19:13:44.564797] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.000  [2024-12-13 19:13:44.564927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.000  [2024-12-13 19:13:44.564948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:13.000  [2024-12-13 19:13:44.568789] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.000  [2024-12-13 19:13:44.568934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.000  [2024-12-13 19:13:44.568954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:13.000  [2024-12-13 19:13:44.572798] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.000  [2024-12-13 19:13:44.572924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.000  [2024-12-13 19:13:44.572943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:13.000  [2024-12-13 19:13:44.576827] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.000  [2024-12-13 19:13:44.577014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.000  [2024-12-13 19:13:44.577034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:13.000  [2024-12-13 19:13:44.580833] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.000  [2024-12-13 19:13:44.581012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.000  [2024-12-13 19:13:44.581032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:13.000  [2024-12-13 19:13:44.584789] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.000  [2024-12-13 19:13:44.584966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.000  [2024-12-13 19:13:44.584986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:13.000  [2024-12-13 19:13:44.588735] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.000  [2024-12-13 19:13:44.588901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.000  [2024-12-13 19:13:44.588921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:13.000  [2024-12-13 19:13:44.592633] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.000  [2024-12-13 19:13:44.592805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.000  [2024-12-13 19:13:44.592825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:13.000  [2024-12-13 19:13:44.596691] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.000  [2024-12-13 19:13:44.596832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.000  [2024-12-13 19:13:44.596852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:13.000  [2024-12-13 19:13:44.600560] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.000  [2024-12-13 19:13:44.600692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.000  [2024-12-13 19:13:44.600711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:13.000  [2024-12-13 19:13:44.604434] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.000  [2024-12-13 19:13:44.604545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.000  [2024-12-13 19:13:44.604565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:13.000  [2024-12-13 19:13:44.608315] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.000  [2024-12-13 19:13:44.608500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.000  [2024-12-13 19:13:44.608560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:13.000  [2024-12-13 19:13:44.612150] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.000  [2024-12-13 19:13:44.612343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.000  [2024-12-13 19:13:44.612363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:13.000  [2024-12-13 19:13:44.616085] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.000  [2024-12-13 19:13:44.616284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.000  [2024-12-13 19:13:44.616304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:13.000  [2024-12-13 19:13:44.620121] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.000  [2024-12-13 19:13:44.620272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.000  [2024-12-13 19:13:44.620293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:13.000  [2024-12-13 19:13:44.624069] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.000  [2024-12-13 19:13:44.624190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.000  [2024-12-13 19:13:44.624210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:13.000  [2024-12-13 19:13:44.628031] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.000  [2024-12-13 19:13:44.628160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.000  [2024-12-13 19:13:44.628180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:13.000  [2024-12-13 19:13:44.631940] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.000  [2024-12-13 19:13:44.632087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.000  [2024-12-13 19:13:44.632108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:13.000  [2024-12-13 19:13:44.635989] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.000  [2024-12-13 19:13:44.636153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.000  [2024-12-13 19:13:44.636172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:13.000  [2024-12-13 19:13:44.639868] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.000  [2024-12-13 19:13:44.640019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.000  [2024-12-13 19:13:44.640039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:13.000  [2024-12-13 19:13:44.643819] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.000  [2024-12-13 19:13:44.643951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.000  [2024-12-13 19:13:44.643970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:13.000  [2024-12-13 19:13:44.647817] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.000  [2024-12-13 19:13:44.647961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.000  [2024-12-13 19:13:44.647980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:13.001  [2024-12-13 19:13:44.651720] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.001  [2024-12-13 19:13:44.651859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.001  [2024-12-13 19:13:44.651879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:13.001  [2024-12-13 19:13:44.655658] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.001  [2024-12-13 19:13:44.655860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.001  [2024-12-13 19:13:44.655879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:13.001  [2024-12-13 19:13:44.659601] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.001  [2024-12-13 19:13:44.659749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.001  [2024-12-13 19:13:44.659768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:13.001  [2024-12-13 19:13:44.663409] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.001  [2024-12-13 19:13:44.663555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.001  [2024-12-13 19:13:44.663575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:13.001  [2024-12-13 19:13:44.667250] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.001  [2024-12-13 19:13:44.667423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.001  [2024-12-13 19:13:44.667443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:13.001  [2024-12-13 19:13:44.671341] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.001  [2024-12-13 19:13:44.671539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.001  [2024-12-13 19:13:44.671576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:13.001  [2024-12-13 19:13:44.675395] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.001  [2024-12-13 19:13:44.675548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.001  [2024-12-13 19:13:44.675567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:13.001  [2024-12-13 19:13:44.679367] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.001  [2024-12-13 19:13:44.679494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.001  [2024-12-13 19:13:44.679514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:13.001  [2024-12-13 19:13:44.683300] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.001  [2024-12-13 19:13:44.683429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.001  [2024-12-13 19:13:44.683449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:13.001  [2024-12-13 19:13:44.687263] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.001  [2024-12-13 19:13:44.687447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.001  [2024-12-13 19:13:44.687466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:13.001  [2024-12-13 19:13:44.691072] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.001  [2024-12-13 19:13:44.691223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.001  [2024-12-13 19:13:44.691242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:13.001  [2024-12-13 19:13:44.695026] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.001  [2024-12-13 19:13:44.695161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.001  [2024-12-13 19:13:44.695180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:13.001  [2024-12-13 19:13:44.698991] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.001  [2024-12-13 19:13:44.699123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.001  [2024-12-13 19:13:44.699142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:13.001  [2024-12-13 19:13:44.702940] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.001  [2024-12-13 19:13:44.703063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.001  [2024-12-13 19:13:44.703083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:13.001  [2024-12-13 19:13:44.706854] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.001  [2024-12-13 19:13:44.706989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.001  [2024-12-13 19:13:44.707008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:13.001  [2024-12-13 19:13:44.710780] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.001  [2024-12-13 19:13:44.710952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.001  [2024-12-13 19:13:44.710971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:13.001  [2024-12-13 19:13:44.714648] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.001  [2024-12-13 19:13:44.714812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.001  [2024-12-13 19:13:44.714831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:13.001  [2024-12-13 19:13:44.718525] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.001  [2024-12-13 19:13:44.718661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.001  [2024-12-13 19:13:44.718680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:13.001  [2024-12-13 19:13:44.722502] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.001  [2024-12-13 19:13:44.722633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.001  [2024-12-13 19:13:44.722653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:13.001  [2024-12-13 19:13:44.726351] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.001  [2024-12-13 19:13:44.726491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.001  [2024-12-13 19:13:44.726511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:13.001  [2024-12-13 19:13:44.730213] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.001  [2024-12-13 19:13:44.730357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.001  [2024-12-13 19:13:44.730377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:13.001  [2024-12-13 19:13:44.734034] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.001  [2024-12-13 19:13:44.734210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.001  [2024-12-13 19:13:44.734231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:13.001  [2024-12-13 19:13:44.738093] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.001  [2024-12-13 19:13:44.738229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.001  [2024-12-13 19:13:44.738249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:13.001  [2024-12-13 19:13:44.742219] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.001  [2024-12-13 19:13:44.742398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.001  [2024-12-13 19:13:44.742419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:13.001  [2024-12-13 19:13:44.746129] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.001  [2024-12-13 19:13:44.746250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.001  [2024-12-13 19:13:44.746272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:13.001  [2024-12-13 19:13:44.749913] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.001  [2024-12-13 19:13:44.750119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.001  [2024-12-13 19:13:44.750154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:13.001  [2024-12-13 19:13:44.753788] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.001  [2024-12-13 19:13:44.753943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.001  [2024-12-13 19:13:44.753963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:13.001  [2024-12-13 19:13:44.757776] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.001  [2024-12-13 19:13:44.757895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.002  [2024-12-13 19:13:44.757915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:13.002  [2024-12-13 19:13:44.761638] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.002  [2024-12-13 19:13:44.761839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.002  [2024-12-13 19:13:44.761860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:13.002  [2024-12-13 19:13:44.765483] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.002  [2024-12-13 19:13:44.765701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.002  [2024-12-13 19:13:44.765757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:13.002  [2024-12-13 19:13:44.769376] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.002  [2024-12-13 19:13:44.769549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.002  [2024-12-13 19:13:44.769569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:13.002  [2024-12-13 19:13:44.773282] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.002  [2024-12-13 19:13:44.773480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.002  [2024-12-13 19:13:44.773516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:13.002  [2024-12-13 19:13:44.777183] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.002  [2024-12-13 19:13:44.777385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.002  [2024-12-13 19:13:44.777405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:13.002  [2024-12-13 19:13:44.781168] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.002  [2024-12-13 19:13:44.781395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.002  [2024-12-13 19:13:44.781432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:13.002  [2024-12-13 19:13:44.785190] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.002  [2024-12-13 19:13:44.785358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.002  [2024-12-13 19:13:44.785379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:13.002  [2024-12-13 19:13:44.789135] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.002  [2024-12-13 19:13:44.789313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.002  [2024-12-13 19:13:44.789334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:13.002  [2024-12-13 19:13:44.792934] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.002  [2024-12-13 19:13:44.793121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.002  [2024-12-13 19:13:44.793140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:13.002  [2024-12-13 19:13:44.796817] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.002  [2024-12-13 19:13:44.797009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.002  [2024-12-13 19:13:44.797028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:13.002  [2024-12-13 19:13:44.800715] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.002  [2024-12-13 19:13:44.800892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.002  [2024-12-13 19:13:44.800911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:13.002  [2024-12-13 19:13:44.804542] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.002  [2024-12-13 19:13:44.804721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.002  [2024-12-13 19:13:44.804740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:13.002  [2024-12-13 19:13:44.808402] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.002  [2024-12-13 19:13:44.808576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.002  [2024-12-13 19:13:44.808596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:13.002  [2024-12-13 19:13:44.812281] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.002  [2024-12-13 19:13:44.812415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.002  [2024-12-13 19:13:44.812434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:13.002  [2024-12-13 19:13:44.816162] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.002  [2024-12-13 19:13:44.816344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.002  [2024-12-13 19:13:44.816363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:13.002  [2024-12-13 19:13:44.820041] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.002  [2024-12-13 19:13:44.820173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.002  [2024-12-13 19:13:44.820192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:13.262  [2024-12-13 19:13:44.824090] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.262  [2024-12-13 19:13:44.824224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.262  [2024-12-13 19:13:44.824244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:13.262  [2024-12-13 19:13:44.827984] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.262  [2024-12-13 19:13:44.828105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.262  [2024-12-13 19:13:44.828125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:13.262  [2024-12-13 19:13:44.832062] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.262  [2024-12-13 19:13:44.832184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.262  [2024-12-13 19:13:44.832204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:13.262  [2024-12-13 19:13:44.836118] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.262  [2024-12-13 19:13:44.836244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.262  [2024-12-13 19:13:44.836293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:13.262  [2024-12-13 19:13:44.840051] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.262  [2024-12-13 19:13:44.840224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.262  [2024-12-13 19:13:44.840243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:13.262  [2024-12-13 19:13:44.844020] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.262  [2024-12-13 19:13:44.844141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.262  [2024-12-13 19:13:44.844161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:13.262  [2024-12-13 19:13:44.848003] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.262  [2024-12-13 19:13:44.848149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.262  [2024-12-13 19:13:44.848169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:13.262  [2024-12-13 19:13:44.851952] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.262  [2024-12-13 19:13:44.852102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.262  [2024-12-13 19:13:44.852122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:13.262  [2024-12-13 19:13:44.855937] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.262  [2024-12-13 19:13:44.856078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.262  [2024-12-13 19:13:44.856098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:13.262  [2024-12-13 19:13:44.859826] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.262  [2024-12-13 19:13:44.859975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.262  [2024-12-13 19:13:44.859995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:13.262  [2024-12-13 19:13:44.863809] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.262  [2024-12-13 19:13:44.863938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.262  [2024-12-13 19:13:44.863958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:13.262  [2024-12-13 19:13:44.867849] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.262  [2024-12-13 19:13:44.868033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.262  [2024-12-13 19:13:44.868053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:13.262  [2024-12-13 19:13:44.871843] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.263  [2024-12-13 19:13:44.871977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.263  [2024-12-13 19:13:44.871996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:13.263  [2024-12-13 19:13:44.875784] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.263  [2024-12-13 19:13:44.875921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.263  [2024-12-13 19:13:44.875940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:13.263  [2024-12-13 19:13:44.879694] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.263  [2024-12-13 19:13:44.879822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.263  [2024-12-13 19:13:44.879841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:13.263  [2024-12-13 19:13:44.883561] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.263  [2024-12-13 19:13:44.883699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.263  [2024-12-13 19:13:44.883718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:13.263  [2024-12-13 19:13:44.887492] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.263  [2024-12-13 19:13:44.887621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.263  [2024-12-13 19:13:44.887641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:13.263  [2024-12-13 19:13:44.891392] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.263  [2024-12-13 19:13:44.891528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.263  [2024-12-13 19:13:44.891547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:13.263  [2024-12-13 19:13:44.895330] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.263  [2024-12-13 19:13:44.895476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.263  [2024-12-13 19:13:44.895496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:13.263  [2024-12-13 19:13:44.899323] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.263  [2024-12-13 19:13:44.899461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.263  [2024-12-13 19:13:44.899481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:13.263  [2024-12-13 19:13:44.903194] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.263  [2024-12-13 19:13:44.903360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.263  [2024-12-13 19:13:44.903379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:13.263  [2024-12-13 19:13:44.907261] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.263  [2024-12-13 19:13:44.907393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.263  [2024-12-13 19:13:44.907413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:13.263  [2024-12-13 19:13:44.911089] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.263  [2024-12-13 19:13:44.911274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.263  [2024-12-13 19:13:44.911298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:13.263  [2024-12-13 19:13:44.914975] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.263  [2024-12-13 19:13:44.915152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.263  [2024-12-13 19:13:44.915171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:13.263  [2024-12-13 19:13:44.918917] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.263  [2024-12-13 19:13:44.919045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.263  [2024-12-13 19:13:44.919065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:13.263  [2024-12-13 19:13:44.923007] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.263  [2024-12-13 19:13:44.923186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.263  [2024-12-13 19:13:44.923206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:13.263  [2024-12-13 19:13:44.927023] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.263  [2024-12-13 19:13:44.927145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.263  [2024-12-13 19:13:44.927164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:13.263  [2024-12-13 19:13:44.930944] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.263  [2024-12-13 19:13:44.931088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.263  [2024-12-13 19:13:44.931107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:13.263  [2024-12-13 19:13:44.934970] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.263  [2024-12-13 19:13:44.935124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.263  [2024-12-13 19:13:44.935144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:13.263  [2024-12-13 19:13:44.939017] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.263  [2024-12-13 19:13:44.939168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.263  [2024-12-13 19:13:44.939188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:13.263  [2024-12-13 19:13:44.942961] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.263  [2024-12-13 19:13:44.943110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.263  [2024-12-13 19:13:44.943131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:13.263  [2024-12-13 19:13:44.946962] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.263  [2024-12-13 19:13:44.947108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.263  [2024-12-13 19:13:44.947128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:13.263  [2024-12-13 19:13:44.950997] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.263  [2024-12-13 19:13:44.951171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.263  [2024-12-13 19:13:44.951191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:13.263  [2024-12-13 19:13:44.954957] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.263  [2024-12-13 19:13:44.955138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.263  [2024-12-13 19:13:44.955158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:13.263  [2024-12-13 19:13:44.958920] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.263  [2024-12-13 19:13:44.959099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.263  [2024-12-13 19:13:44.959119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:13.263  [2024-12-13 19:13:44.962952] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.263  [2024-12-13 19:13:44.963123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.263  [2024-12-13 19:13:44.963142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:13.263  [2024-12-13 19:13:44.966923] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.263  [2024-12-13 19:13:44.967099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.263  [2024-12-13 19:13:44.967119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:13.263  [2024-12-13 19:13:44.970934] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.263  [2024-12-13 19:13:44.971121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.263  [2024-12-13 19:13:44.971141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:13.263  [2024-12-13 19:13:44.974901] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.263  [2024-12-13 19:13:44.975065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.263  [2024-12-13 19:13:44.975085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:13.263  [2024-12-13 19:13:44.978849] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.263  [2024-12-13 19:13:44.979023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.263  [2024-12-13 19:13:44.979043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:13.263  [2024-12-13 19:13:44.982906] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.264  [2024-12-13 19:13:44.983032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.264  [2024-12-13 19:13:44.983052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:13.264  [2024-12-13 19:13:44.986914] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.264  [2024-12-13 19:13:44.987056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.264  [2024-12-13 19:13:44.987075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:13.264  [2024-12-13 19:13:44.990924] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.264  [2024-12-13 19:13:44.991042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.264  [2024-12-13 19:13:44.991062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:13.264  [2024-12-13 19:13:44.994921] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.264  [2024-12-13 19:13:44.995098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.264  [2024-12-13 19:13:44.995118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:13.264  [2024-12-13 19:13:44.998889] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.264  [2024-12-13 19:13:44.999065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.264  [2024-12-13 19:13:44.999084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:13.264  [2024-12-13 19:13:45.002906] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.264  [2024-12-13 19:13:45.003087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.264  [2024-12-13 19:13:45.003106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:13.264  [2024-12-13 19:13:45.006850] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.264  [2024-12-13 19:13:45.007026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.264  [2024-12-13 19:13:45.007046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:13.264  [2024-12-13 19:13:45.010903] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.264  [2024-12-13 19:13:45.011085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.264  [2024-12-13 19:13:45.011104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:13.264  [2024-12-13 19:13:45.014946] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.264  [2024-12-13 19:13:45.015059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.264  [2024-12-13 19:13:45.015079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:13.264  [2024-12-13 19:13:45.018934] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.264  [2024-12-13 19:13:45.019110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.264  [2024-12-13 19:13:45.019130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:13.264  [2024-12-13 19:13:45.022959] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.264  [2024-12-13 19:13:45.023090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.264  [2024-12-13 19:13:45.023110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:13.264  [2024-12-13 19:13:45.026946] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.264  [2024-12-13 19:13:45.027125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.264  [2024-12-13 19:13:45.027145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:13.264  [2024-12-13 19:13:45.030964] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.264  [2024-12-13 19:13:45.031089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.264  [2024-12-13 19:13:45.031109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:13.264  [2024-12-13 19:13:45.034914] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.264  [2024-12-13 19:13:45.035043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.264  [2024-12-13 19:13:45.035063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:13.264  [2024-12-13 19:13:45.038875] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.264  [2024-12-13 19:13:45.039022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.264  [2024-12-13 19:13:45.039042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:13.264  [2024-12-13 19:13:45.042903] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.264  [2024-12-13 19:13:45.043074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.264  [2024-12-13 19:13:45.043094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:13.264  [2024-12-13 19:13:45.046934] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.264  [2024-12-13 19:13:45.047112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.264  [2024-12-13 19:13:45.047144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:13.264  [2024-12-13 19:13:45.050989] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.264  [2024-12-13 19:13:45.051174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.264  [2024-12-13 19:13:45.051194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:13.264  [2024-12-13 19:13:45.055073] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.264  [2024-12-13 19:13:45.055203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.264  [2024-12-13 19:13:45.055224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:13.264  [2024-12-13 19:13:45.059000] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.264  [2024-12-13 19:13:45.059186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.264  [2024-12-13 19:13:45.059206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:13.264  [2024-12-13 19:13:45.062989] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.264  [2024-12-13 19:13:45.063177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.264  [2024-12-13 19:13:45.063207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:13.264  [2024-12-13 19:13:45.066967] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.264  [2024-12-13 19:13:45.067115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.264  [2024-12-13 19:13:45.067135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:13.264  [2024-12-13 19:13:45.071014] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.264  [2024-12-13 19:13:45.071171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.264  [2024-12-13 19:13:45.071190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:13.264  [2024-12-13 19:13:45.075024] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.264  [2024-12-13 19:13:45.075216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.264  [2024-12-13 19:13:45.075237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:13.264  [2024-12-13 19:13:45.078945] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.264  [2024-12-13 19:13:45.079094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.264  [2024-12-13 19:13:45.079113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:13.264  [2024-12-13 19:13:45.082990] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.264  [2024-12-13 19:13:45.083122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.264  [2024-12-13 19:13:45.083142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:13.524  [2024-12-13 19:13:45.087038] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.524  [2024-12-13 19:13:45.087170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.524  [2024-12-13 19:13:45.087190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:13.524  [2024-12-13 19:13:45.091199] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.524  [2024-12-13 19:13:45.091370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.524  [2024-12-13 19:13:45.091391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:13.524  [2024-12-13 19:13:45.095309] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.524  [2024-12-13 19:13:45.095645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.524  [2024-12-13 19:13:45.095702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:13.524  [2024-12-13 19:13:45.099412] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.524  [2024-12-13 19:13:45.099783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.524  [2024-12-13 19:13:45.099832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:13.524  [2024-12-13 19:13:45.103535] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.524  [2024-12-13 19:13:45.103690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.524  [2024-12-13 19:13:45.103729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:13.524  [2024-12-13 19:13:45.107912] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.524  [2024-12-13 19:13:45.108089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.524  [2024-12-13 19:13:45.108136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:13.524  [2024-12-13 19:13:45.112198] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.524  [2024-12-13 19:13:45.112409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.524  [2024-12-13 19:13:45.112440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:13.524  [2024-12-13 19:13:45.116616] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.524  [2024-12-13 19:13:45.116807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.524  [2024-12-13 19:13:45.116850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:13.524  [2024-12-13 19:13:45.120935] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.524  [2024-12-13 19:13:45.121102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.524  [2024-12-13 19:13:45.121130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:13.524  [2024-12-13 19:13:45.125182] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.524  [2024-12-13 19:13:45.125382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.524  [2024-12-13 19:13:45.125410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:13.524  [2024-12-13 19:13:45.129427] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.524  [2024-12-13 19:13:45.129603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.524  [2024-12-13 19:13:45.129630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:13.524  [2024-12-13 19:13:45.133612] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.524  [2024-12-13 19:13:45.133830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.524  [2024-12-13 19:13:45.133859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:13.524  [2024-12-13 19:13:45.137830] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.524  [2024-12-13 19:13:45.138072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.524  [2024-12-13 19:13:45.138102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:13.524  [2024-12-13 19:13:45.142017] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.524  [2024-12-13 19:13:45.142282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.524  [2024-12-13 19:13:45.142320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:13.524  [2024-12-13 19:13:45.146143] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.524  [2024-12-13 19:13:45.146382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.524  [2024-12-13 19:13:45.146411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:13.524  [2024-12-13 19:13:45.150061] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.524  [2024-12-13 19:13:45.150243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.524  [2024-12-13 19:13:45.150282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:13.524  [2024-12-13 19:13:45.154129] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.524  [2024-12-13 19:13:45.154342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.524  [2024-12-13 19:13:45.154392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:13.524  [2024-12-13 19:13:45.158122] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.524  [2024-12-13 19:13:45.158265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.524  [2024-12-13 19:13:45.158297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:13.524  [2024-12-13 19:13:45.162121] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.524  [2024-12-13 19:13:45.162326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.524  [2024-12-13 19:13:45.162365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:13.524  [2024-12-13 19:13:45.166262] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.524  [2024-12-13 19:13:45.166463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.524  [2024-12-13 19:13:45.166491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:13.524  [2024-12-13 19:13:45.170264] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.524  [2024-12-13 19:13:45.170447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.524  [2024-12-13 19:13:45.170475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:13.524  [2024-12-13 19:13:45.174221] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.524  [2024-12-13 19:13:45.174416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.524  [2024-12-13 19:13:45.174442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:13.524  [2024-12-13 19:13:45.178183] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.524  [2024-12-13 19:13:45.178400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.524  [2024-12-13 19:13:45.178428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:13.524  [2024-12-13 19:13:45.182527] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.525  [2024-12-13 19:13:45.182677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.525  [2024-12-13 19:13:45.182728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:13.525  [2024-12-13 19:13:45.186704] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.525  [2024-12-13 19:13:45.186856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.525  [2024-12-13 19:13:45.186899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:13.525  [2024-12-13 19:13:45.190762] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.525  [2024-12-13 19:13:45.190949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.525  [2024-12-13 19:13:45.190977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:13.525  [2024-12-13 19:13:45.194872] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.525  [2024-12-13 19:13:45.195046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.525  [2024-12-13 19:13:45.195073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:13.525  [2024-12-13 19:13:45.198880] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.525  [2024-12-13 19:13:45.199022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.525  [2024-12-13 19:13:45.199064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:13.525  [2024-12-13 19:13:45.202934] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.525  [2024-12-13 19:13:45.203077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.525  [2024-12-13 19:13:45.203120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:13.525  [2024-12-13 19:13:45.206969] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.525  [2024-12-13 19:13:45.207099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.525  [2024-12-13 19:13:45.207156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:13.525  [2024-12-13 19:13:45.211002] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.525  [2024-12-13 19:13:45.211218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.525  [2024-12-13 19:13:45.211256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:13.525  [2024-12-13 19:13:45.215239] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.525  [2024-12-13 19:13:45.215476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.525  [2024-12-13 19:13:45.215502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:13.525  [2024-12-13 19:13:45.219253] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.525  [2024-12-13 19:13:45.219457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.525  [2024-12-13 19:13:45.219483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:13.525  [2024-12-13 19:13:45.223188] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.525  [2024-12-13 19:13:45.223345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.525  [2024-12-13 19:13:45.223396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:13.525  [2024-12-13 19:13:45.227201] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.525  [2024-12-13 19:13:45.227407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.525  [2024-12-13 19:13:45.227428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:13.525  [2024-12-13 19:13:45.231146] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.525  [2024-12-13 19:13:45.231357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.525  [2024-12-13 19:13:45.231379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:13.525  [2024-12-13 19:13:45.235527] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.525  [2024-12-13 19:13:45.235698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.525  [2024-12-13 19:13:45.235721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:13.525  [2024-12-13 19:13:45.239894] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.525  [2024-12-13 19:13:45.240063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.525  [2024-12-13 19:13:45.240087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:13.525  [2024-12-13 19:13:45.244340] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.525  [2024-12-13 19:13:45.244623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.525  [2024-12-13 19:13:45.244682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:13.525  [2024-12-13 19:13:45.248893] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.525  [2024-12-13 19:13:45.249025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.525  [2024-12-13 19:13:45.249051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:13.525  [2024-12-13 19:13:45.253452] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.525  [2024-12-13 19:13:45.253642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.525  [2024-12-13 19:13:45.253665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:13.525  [2024-12-13 19:13:45.257931] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.525  [2024-12-13 19:13:45.258077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.525  [2024-12-13 19:13:45.258099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:13.525  [2024-12-13 19:13:45.262384] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.525  [2024-12-13 19:13:45.262513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.525  [2024-12-13 19:13:45.262537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:13.525  [2024-12-13 19:13:45.266802] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.525  [2024-12-13 19:13:45.266970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.525  [2024-12-13 19:13:45.266992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:13.525  [2024-12-13 19:13:45.271114] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.525  [2024-12-13 19:13:45.271268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.525  [2024-12-13 19:13:45.271291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:13.525  [2024-12-13 19:13:45.275407] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.525  [2024-12-13 19:13:45.275576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.525  [2024-12-13 19:13:45.275613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:13.525  [2024-12-13 19:13:45.279737] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.525  [2024-12-13 19:13:45.279881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.525  [2024-12-13 19:13:45.279909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:13.525  [2024-12-13 19:13:45.285638] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.525  [2024-12-13 19:13:45.286223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.525  [2024-12-13 19:13:45.286347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:13.525  [2024-12-13 19:13:45.291533] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.525  [2024-12-13 19:13:45.291857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.525  [2024-12-13 19:13:45.291910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:13.525  [2024-12-13 19:13:45.297248] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.525  [2024-12-13 19:13:45.297574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.525  [2024-12-13 19:13:45.297626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:13.526  [2024-12-13 19:13:45.302020] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.526  [2024-12-13 19:13:45.302303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.526  [2024-12-13 19:13:45.302335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:13.526  [2024-12-13 19:13:45.306716] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.526  [2024-12-13 19:13:45.306886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.526  [2024-12-13 19:13:45.306915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:13.526  [2024-12-13 19:13:45.311143] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.526  [2024-12-13 19:13:45.311440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.526  [2024-12-13 19:13:45.311488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:13.526  [2024-12-13 19:13:45.315636] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.526  [2024-12-13 19:13:45.315884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.526  [2024-12-13 19:13:45.315937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:13.526  [2024-12-13 19:13:45.320114] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.526  [2024-12-13 19:13:45.320351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.526  [2024-12-13 19:13:45.320389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:13.526  [2024-12-13 19:13:45.324547] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.526  [2024-12-13 19:13:45.324842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.526  [2024-12-13 19:13:45.324881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:13.526  [2024-12-13 19:13:45.328959] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.526  [2024-12-13 19:13:45.329159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.526  [2024-12-13 19:13:45.329187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:13.526  [2024-12-13 19:13:45.333462] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.526  [2024-12-13 19:13:45.333581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.526  [2024-12-13 19:13:45.333611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:13.526  [2024-12-13 19:13:45.338076] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.526  [2024-12-13 19:13:45.338238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.526  [2024-12-13 19:13:45.338267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:13.526  [2024-12-13 19:13:45.342570] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.526  [2024-12-13 19:13:45.342850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.526  [2024-12-13 19:13:45.342888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:13.785  [2024-12-13 19:13:45.346979] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.785  [2024-12-13 19:13:45.347237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.785  [2024-12-13 19:13:45.347285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:13.785  [2024-12-13 19:13:45.351421] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.785  [2024-12-13 19:13:45.351707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.785  [2024-12-13 19:13:45.351761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:13.785  [2024-12-13 19:13:45.355876] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.785  [2024-12-13 19:13:45.356091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.785  [2024-12-13 19:13:45.356119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:13.785  [2024-12-13 19:13:45.360292] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.785  [2024-12-13 19:13:45.360607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.785  [2024-12-13 19:13:45.360647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:13.785  [2024-12-13 19:13:45.364560] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.785  [2024-12-13 19:13:45.364766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.785  [2024-12-13 19:13:45.364795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:13.785  [2024-12-13 19:13:45.369136] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.785  [2024-12-13 19:13:45.369358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.785  [2024-12-13 19:13:45.369387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:13.785  [2024-12-13 19:13:45.373488] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.785  [2024-12-13 19:13:45.373697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.785  [2024-12-13 19:13:45.373744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:13.785  [2024-12-13 19:13:45.377815] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.785  [2024-12-13 19:13:45.378026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.785  [2024-12-13 19:13:45.378076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:13.785  [2024-12-13 19:13:45.382177] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.785  [2024-12-13 19:13:45.382399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.785  [2024-12-13 19:13:45.382435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:13.785  [2024-12-13 19:13:45.386542] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.785  [2024-12-13 19:13:45.386798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.785  [2024-12-13 19:13:45.386848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:13.785  [2024-12-13 19:13:45.390978] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.785  [2024-12-13 19:13:45.391184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.785  [2024-12-13 19:13:45.391212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:13.785  [2024-12-13 19:13:45.395630] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.785  [2024-12-13 19:13:45.395881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.785  [2024-12-13 19:13:45.395919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:13.785  [2024-12-13 19:13:45.400078] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.785  [2024-12-13 19:13:45.400315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.785  [2024-12-13 19:13:45.400345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:13.785  [2024-12-13 19:13:45.404466] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.785  [2024-12-13 19:13:45.404792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.785  [2024-12-13 19:13:45.404831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:13.785  [2024-12-13 19:13:45.408854] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.786  [2024-12-13 19:13:45.409062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.786  [2024-12-13 19:13:45.409090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:13.786  [2024-12-13 19:13:45.413126] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.786  [2024-12-13 19:13:45.413393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.786  [2024-12-13 19:13:45.413436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:13.786  [2024-12-13 19:13:45.417575] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.786  [2024-12-13 19:13:45.417852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.786  [2024-12-13 19:13:45.417882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:13.786  [2024-12-13 19:13:45.422219] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.786  [2024-12-13 19:13:45.422498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.786  [2024-12-13 19:13:45.422536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:13.786  [2024-12-13 19:13:45.426788] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.786  [2024-12-13 19:13:45.426916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.786  [2024-12-13 19:13:45.426944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:13.786  [2024-12-13 19:13:45.431340] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.786  [2024-12-13 19:13:45.431520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.786  [2024-12-13 19:13:45.431549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:13.786  [2024-12-13 19:13:45.435681] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.786  [2024-12-13 19:13:45.435817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.786  [2024-12-13 19:13:45.435845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:13.786  [2024-12-13 19:13:45.440177] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.786  [2024-12-13 19:13:45.440299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.786  [2024-12-13 19:13:45.440327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:13.786  [2024-12-13 19:13:45.444556] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.786  [2024-12-13 19:13:45.444742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.786  [2024-12-13 19:13:45.444770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:13.786  [2024-12-13 19:13:45.448957] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.786  [2024-12-13 19:13:45.449252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.786  [2024-12-13 19:13:45.449299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:13.786  [2024-12-13 19:13:45.453329] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.786  [2024-12-13 19:13:45.453692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.786  [2024-12-13 19:13:45.453743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:13.786  [2024-12-13 19:13:45.457642] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.786  [2024-12-13 19:13:45.457886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.786  [2024-12-13 19:13:45.457917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:13.786  [2024-12-13 19:13:45.462067] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.786  [2024-12-13 19:13:45.462267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.786  [2024-12-13 19:13:45.462298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:13.786  [2024-12-13 19:13:45.466413] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.786  [2024-12-13 19:13:45.466751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.786  [2024-12-13 19:13:45.466794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:13.786  [2024-12-13 19:13:45.470740] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.786  [2024-12-13 19:13:45.471007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.786  [2024-12-13 19:13:45.471056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:13.786  [2024-12-13 19:13:45.475053] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.786  [2024-12-13 19:13:45.475174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.786  [2024-12-13 19:13:45.475203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:13.786  [2024-12-13 19:13:45.479314] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.786  [2024-12-13 19:13:45.479552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.786  [2024-12-13 19:13:45.479598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:13.786  [2024-12-13 19:13:45.483565] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.786  [2024-12-13 19:13:45.483957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.786  [2024-12-13 19:13:45.483995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:13.786  [2024-12-13 19:13:45.487907] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.786  [2024-12-13 19:13:45.488192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.786  [2024-12-13 19:13:45.488243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:13.786  [2024-12-13 19:13:45.492235] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.786  [2024-12-13 19:13:45.492538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.786  [2024-12-13 19:13:45.492576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:13.786  [2024-12-13 19:13:45.496516] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2289ad0) with pdu=0x200016eff3c8
00:29:13.786  [2024-12-13 19:13:45.496763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.786  [2024-12-13 19:13:45.496816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
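The repeated tcp.c:2241 data_crc32_calc_done errors above are the injected data-digest failures this nvmf_digest_error case expects; each one is surfaced to the host as a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion. A minimal sketch for tallying them from a saved copy of this output (the log file name here is hypothetical):

    # Count the injected digest errors and the matching transient transport errors.
    # nvmf_digest_error.log is a hypothetical capture of the lines above.
    grep -c 'Data digest error on tqpair=' nvmf_digest_error.log
    grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' nvmf_digest_error.log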
00:29:13.786       7502.50 IOPS,   937.81 MiB/s
00:29:13.786                                                                                                  Latency(us)
00:29:13.786  
[2024-12-13T19:13:45.610Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:29:13.786  Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:29:13.786  	 nvme0n1             :       2.00    7495.85     936.98       0.00     0.00    2129.27    1467.11    6791.91
00:29:13.786  
[2024-12-13T19:13:45.610Z]  ===================================================================================================================
00:29:13.786  
[2024-12-13T19:13:45.610Z]  Total                       :               7495.85     936.98       0.00     0.00    2129.27    1467.11    6791.91
00:29:13.786  {
00:29:13.786    "results": [
00:29:13.786      {
00:29:13.786        "job": "nvme0n1",
00:29:13.786        "core_mask": "0x2",
00:29:13.786        "workload": "randwrite",
00:29:13.786        "status": "finished",
00:29:13.786        "queue_depth": 16,
00:29:13.786        "io_size": 131072,
00:29:13.786        "runtime": 2.004576,
00:29:13.786        "iops": 7495.849496352346,
00:29:13.786        "mibps": 936.9811870440433,
00:29:13.786        "io_failed": 0,
00:29:13.786        "io_timeout": 0,
00:29:13.786        "avg_latency_us": 2129.2689955592127,
00:29:13.786        "min_latency_us": 1467.1127272727272,
00:29:13.786        "max_latency_us": 6791.912727272727
00:29:13.786      }
00:29:13.786    ],
00:29:13.786    "core_count": 1
00:29:13.786  }
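The JSON block above is the bperf result record for this 2-second randwrite run. A hedged sketch for pulling the headline numbers back out with jq (bperf_result.json is a hypothetical file holding that JSON):

    # Print job name, IOPS, MiB/s and average latency (us) from the result JSON above.
    jq -r '.results[] | [.job, .iops, .mibps, .avg_latency_us] | @tsv' bperf_result.json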
00:29:13.786    19:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:13.786    19:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:13.786    19:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:13.786    19:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:13.786  			| .driver_specific
00:29:13.786  			| .nvme_error
00:29:13.786  			| .status_code
00:29:13.786  			| .command_transient_transport_error'
00:29:14.045   19:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 485 > 0 ))
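get_transient_errcount above queries bdev iostat over the bperf RPC socket and filters out the transient-transport-error counter, which must be non-zero (485 in this run) for the digest-error case to pass. The same query written out as one standalone command, using the socket path and bdev name from this run:

    # Sketch of the iostat query driving the (( 485 > 0 )) check above.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'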
00:29:14.045   19:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 116530
00:29:14.045   19:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 116530 ']'
00:29:14.045   19:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 116530
00:29:14.045    19:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:29:14.045   19:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:14.045    19:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 116530
00:29:14.045   19:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:29:14.045   19:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:29:14.045  killing process with pid 116530
00:29:14.045   19:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 116530'
00:29:14.045   19:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 116530
00:29:14.045  Received shutdown signal, test time was about 2.000000 seconds
00:29:14.045  
00:29:14.045                                                                                                  Latency(us)
00:29:14.045  
[2024-12-13T19:13:45.869Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:29:14.045  
[2024-12-13T19:13:45.869Z]  ===================================================================================================================
00:29:14.045  
[2024-12-13T19:13:45.869Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:29:14.045   19:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 116530
00:29:14.304   19:13:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 116237
00:29:14.305   19:13:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 116237 ']'
00:29:14.305   19:13:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 116237
00:29:14.305    19:13:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:29:14.305   19:13:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:14.305    19:13:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 116237
00:29:14.305   19:13:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:29:14.305   19:13:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:29:14.305  killing process with pid 116237
00:29:14.305   19:13:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 116237'
00:29:14.305   19:13:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 116237
00:29:14.305   19:13:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 116237
00:29:14.563  
00:29:14.563  real	0m17.806s
00:29:14.563  user	0m33.481s
00:29:14.563  sys	0m4.872s
00:29:14.563   19:13:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable
00:29:14.563   19:13:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:14.563  ************************************
00:29:14.563  END TEST nvmf_digest_error
00:29:14.563  ************************************
00:29:14.563   19:13:46 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:29:14.563   19:13:46 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:29:14.563   19:13:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup
00:29:14.563   19:13:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync
00:29:14.822   19:13:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:29:14.822   19:13:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e
00:29:14.822   19:13:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20}
00:29:14.822   19:13:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:29:14.822  rmmod nvme_tcp
00:29:14.822  rmmod nvme_fabrics
00:29:14.822  rmmod nvme_keyring
00:29:14.822   19:13:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:29:14.822   19:13:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e
00:29:14.822   19:13:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0
00:29:14.822   19:13:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 116237 ']'
00:29:14.822   19:13:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 116237
00:29:14.822   19:13:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 116237 ']'
00:29:14.822   19:13:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 116237
00:29:14.822  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (116237) - No such process
00:29:14.822  Process with pid 116237 is not found
00:29:14.822   19:13:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 116237 is not found'
00:29:14.822   19:13:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:29:14.822   19:13:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:29:14.822   19:13:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:29:14.822   19:13:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr
00:29:14.822   19:13:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save
00:29:14.822   19:13:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:29:14.822   19:13:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore
00:29:14.822   19:13:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:29:14.822   19:13:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:29:14.822   19:13:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:29:14.822   19:13:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:29:14.822   19:13:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:29:14.822   19:13:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:29:14.822   19:13:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:29:14.822   19:13:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:29:14.822   19:13:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:29:14.822   19:13:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:29:14.822   19:13:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:29:14.822   19:13:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:29:14.822   19:13:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:29:14.822   19:13:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:29:15.081   19:13:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:29:15.081   19:13:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns
00:29:15.081   19:13:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:15.081   19:13:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:29:15.081    19:13:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:15.081   19:13:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0
00:29:15.081  
00:29:15.081  real	0m35.463s
00:29:15.081  user	1m5.448s
00:29:15.081  sys	0m9.881s
00:29:15.081   19:13:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable
00:29:15.081   19:13:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:29:15.081  ************************************
00:29:15.081  END TEST nvmf_digest
00:29:15.081  ************************************
00:29:15.081   19:13:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 1 -eq 1 ]]
00:29:15.081   19:13:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ tcp == \t\c\p ]]
00:29:15.081   19:13:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@38 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp
00:29:15.081   19:13:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:29:15.081   19:13:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:29:15.081   19:13:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:29:15.081  ************************************
00:29:15.081  START TEST nvmf_mdns_discovery
00:29:15.081  ************************************
00:29:15.081   19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp
00:29:15.081  * Looking for test storage...
00:29:15.081  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
00:29:15.081    19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:29:15.081     19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:29:15.081     19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1711 -- # lcov --version
00:29:15.340    19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:29:15.340    19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:29:15.340    19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l
00:29:15.340    19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l
00:29:15.340    19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@336 -- # IFS=.-:
00:29:15.340    19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@336 -- # read -ra ver1
00:29:15.340    19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@337 -- # IFS=.-:
00:29:15.340    19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@337 -- # read -ra ver2
00:29:15.340    19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@338 -- # local 'op=<'
00:29:15.340    19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@340 -- # ver1_l=2
00:29:15.340    19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@341 -- # ver2_l=1
00:29:15.340    19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:29:15.340    19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@344 -- # case "$op" in
00:29:15.340    19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@345 -- # : 1
00:29:15.340    19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@364 -- # (( v = 0 ))
00:29:15.340    19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:29:15.340     19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@365 -- # decimal 1
00:29:15.340     19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@353 -- # local d=1
00:29:15.340     19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:29:15.340     19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@355 -- # echo 1
00:29:15.340    19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@365 -- # ver1[v]=1
00:29:15.340     19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@366 -- # decimal 2
00:29:15.340     19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@353 -- # local d=2
00:29:15.340     19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:29:15.340     19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@355 -- # echo 2
00:29:15.340    19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@366 -- # ver2[v]=2
00:29:15.340    19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:29:15.340    19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:29:15.340    19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@368 -- # return 0
00:29:15.340    19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:29:15.340    19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:29:15.340  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:15.340  		--rc genhtml_branch_coverage=1
00:29:15.340  		--rc genhtml_function_coverage=1
00:29:15.340  		--rc genhtml_legend=1
00:29:15.340  		--rc geninfo_all_blocks=1
00:29:15.340  		--rc geninfo_unexecuted_blocks=1
00:29:15.340  		
00:29:15.340  		'
00:29:15.340    19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:29:15.340  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:15.340  		--rc genhtml_branch_coverage=1
00:29:15.340  		--rc genhtml_function_coverage=1
00:29:15.340  		--rc genhtml_legend=1
00:29:15.340  		--rc geninfo_all_blocks=1
00:29:15.340  		--rc geninfo_unexecuted_blocks=1
00:29:15.340  		
00:29:15.340  		'
00:29:15.340    19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:29:15.340  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:15.340  		--rc genhtml_branch_coverage=1
00:29:15.340  		--rc genhtml_function_coverage=1
00:29:15.340  		--rc genhtml_legend=1
00:29:15.340  		--rc geninfo_all_blocks=1
00:29:15.340  		--rc geninfo_unexecuted_blocks=1
00:29:15.340  		
00:29:15.340  		'
00:29:15.340    19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:29:15.340  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:15.340  		--rc genhtml_branch_coverage=1
00:29:15.340  		--rc genhtml_function_coverage=1
00:29:15.340  		--rc genhtml_legend=1
00:29:15.340  		--rc geninfo_all_blocks=1
00:29:15.340  		--rc geninfo_unexecuted_blocks=1
00:29:15.340  		
00:29:15.340  		'
00:29:15.340   19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:29:15.340     19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # uname -s
00:29:15.340    19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:29:15.340    19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:29:15.340    19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:29:15.340    19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:29:15.340    19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:29:15.340    19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:29:15.340    19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:29:15.340    19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:29:15.340    19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:29:15.340     19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:29:15.340    19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:29:15.340    19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:29:15.340    19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:29:15.340    19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:29:15.340    19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:29:15.340    19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:29:15.340    19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:29:15.340     19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@15 -- # shopt -s extglob
00:29:15.340     19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:29:15.340     19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:29:15.340     19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:29:15.340      19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:15.340      19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:15.340      19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:15.340      19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@5 -- # export PATH
00:29:15.340      19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:15.340    19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@51 -- # : 0
00:29:15.340    19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:29:15.340    19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:29:15.340    19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:29:15.340    19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:29:15.340    19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:29:15.340    19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:29:15.340  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:29:15.340    19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:29:15.340    19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:29:15.340    19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0
00:29:15.340   19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@13 -- # DISCOVERY_FILTER=address
00:29:15.340   19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@14 -- # DISCOVERY_PORT=8009
00:29:15.340   19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery
00:29:15.340   19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@18 -- # NQN=nqn.2016-06.io.spdk:cnode
00:29:15.340   19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@19 -- # NQN2=nqn.2016-06.io.spdk:cnode2
00:29:15.341   19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@21 -- # HOST_NQN=nqn.2021-12.io.spdk:test
00:29:15.341   19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@22 -- # HOST_SOCK=/tmp/host.sock
00:29:15.341   19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@24 -- # nvmftestinit
00:29:15.341   19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:29:15.341   19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:29:15.341   19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@476 -- # prepare_net_devs
00:29:15.341   19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no
00:29:15.341   19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns
00:29:15.341   19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:15.341   19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:29:15.341    19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:15.341   19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:29:15.341   19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:29:15.341   19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:29:15.341   19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:29:15.341   19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:29:15.341   19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init
00:29:15.341   19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:29:15.341   19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:29:15.341   19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:29:15.341   19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:29:15.341   19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:29:15.341   19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:29:15.341   19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:29:15.341   19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:29:15.341   19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:29:15.341   19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:29:15.341   19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:29:15.341   19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:29:15.341   19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:29:15.341   19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:29:15.341   19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:29:15.341   19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:29:15.341   19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:29:15.341  Cannot find device "nvmf_init_br"
00:29:15.341   19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # true
00:29:15.341   19:13:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:29:15.341  Cannot find device "nvmf_init_br2"
00:29:15.341   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # true
00:29:15.341   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:29:15.341  Cannot find device "nvmf_tgt_br"
00:29:15.341   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@164 -- # true
00:29:15.341   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:29:15.341  Cannot find device "nvmf_tgt_br2"
00:29:15.341   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@165 -- # true
00:29:15.341   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:29:15.341  Cannot find device "nvmf_init_br"
00:29:15.341   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # true
00:29:15.341   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:29:15.341  Cannot find device "nvmf_init_br2"
00:29:15.341   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@167 -- # true
00:29:15.341   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:29:15.341  Cannot find device "nvmf_tgt_br"
00:29:15.341   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@168 -- # true
00:29:15.341   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:29:15.341  Cannot find device "nvmf_tgt_br2"
00:29:15.341   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@169 -- # true
00:29:15.341   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:29:15.341  Cannot find device "nvmf_br"
00:29:15.341   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # true
00:29:15.341   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:29:15.341  Cannot find device "nvmf_init_if"
00:29:15.341   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@171 -- # true
00:29:15.341   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:29:15.341  Cannot find device "nvmf_init_if2"
00:29:15.341   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@172 -- # true
00:29:15.341   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:29:15.341  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:29:15.341   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@173 -- # true
00:29:15.341   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:29:15.341  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:29:15.341   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # true
00:29:15.341   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:29:15.341   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:29:15.341   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:29:15.341   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:29:15.599   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:29:15.599   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:29:15.599   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:29:15.599   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:29:15.599   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:29:15.599   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:29:15.599   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:29:15.599   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:29:15.599   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:29:15.599   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:29:15.599   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:29:15.599   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:29:15.599   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:29:15.599   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:29:15.599   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:29:15.599   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:29:15.599   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:29:15.599   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:29:15.599   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:29:15.599   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:29:15.599   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:29:15.599   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:29:15.599   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:29:15.599   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:29:15.599   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:29:15.599   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:29:15.599   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:29:15.599   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:29:15.599   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:29:15.599  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:29:15.599  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms
00:29:15.599  
00:29:15.599  --- 10.0.0.3 ping statistics ---
00:29:15.599  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:15.599  rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms
00:29:15.600   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:29:15.600  PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:29:15.600  64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms
00:29:15.600  
00:29:15.600  --- 10.0.0.4 ping statistics ---
00:29:15.600  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:15.600  rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms
00:29:15.600   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:29:15.600  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:29:15.600  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms
00:29:15.600  
00:29:15.600  --- 10.0.0.1 ping statistics ---
00:29:15.600  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:15.600  rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms
00:29:15.600   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:29:15.600  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:29:15.600  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms
00:29:15.600  
00:29:15.600  --- 10.0.0.2 ping statistics ---
00:29:15.600  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:15.600  rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms
00:29:15.600   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:29:15.600   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@461 -- # return 0
00:29:15.600   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:29:15.600   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:29:15.600   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:29:15.600   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:29:15.600   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:29:15.600   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:29:15.600   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:29:15.600   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@29 -- # nvmfappstart -m 0x2 --wait-for-rpc
00:29:15.600   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:29:15.600   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@726 -- # xtrace_disable
00:29:15.600   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:15.600   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@509 -- # nvmfpid=116882
00:29:15.600   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc
00:29:15.600   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@510 -- # waitforlisten 116882
00:29:15.600   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@835 -- # '[' -z 116882 ']'
00:29:15.600   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:15.600   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:15.600  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:15.600   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:15.600   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:15.600   19:13:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:15.858  [2024-12-13 19:13:47.471579] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:29:15.858  [2024-12-13 19:13:47.471697] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:15.858  [2024-12-13 19:13:47.627115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:15.858  [2024-12-13 19:13:47.671380] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:15.858  [2024-12-13 19:13:47.671456] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:15.858  [2024-12-13 19:13:47.671471] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:15.858  [2024-12-13 19:13:47.671482] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:15.858  [2024-12-13 19:13:47.671492] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:15.858  [2024-12-13 19:13:47.671974] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:29:16.792   19:13:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:16.792   19:13:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@868 -- # return 0
00:29:16.792   19:13:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:29:16.792   19:13:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@732 -- # xtrace_disable
00:29:16.792   19:13:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:16.792   19:13:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:16.792   19:13:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@31 -- # rpc_cmd nvmf_set_config --discovery-filter=address
00:29:16.792   19:13:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:16.792   19:13:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:16.792   19:13:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:16.792   19:13:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@32 -- # rpc_cmd framework_start_init
00:29:16.792   19:13:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:16.792   19:13:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:16.792   19:13:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:16.792   19:13:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:29:16.792   19:13:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:16.792   19:13:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:16.792  [2024-12-13 19:13:48.613543] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:29:17.050   19:13:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:17.050   19:13:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009
00:29:17.050   19:13:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:17.050   19:13:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:17.050  [2024-12-13 19:13:48.621699] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 ***
00:29:17.050   19:13:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:17.050   19:13:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null0 1000 512
00:29:17.050   19:13:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:17.050   19:13:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:17.050  null0
00:29:17.050   19:13:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:17.050   19:13:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null1 1000 512
00:29:17.050   19:13:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:17.050   19:13:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:17.050  null1
00:29:17.050   19:13:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:17.050   19:13:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null2 1000 512
00:29:17.050   19:13:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:17.050   19:13:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:17.050  null2
00:29:17.050   19:13:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:17.050   19:13:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_null_create null3 1000 512
00:29:17.050   19:13:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:17.050   19:13:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:17.050  null3
00:29:17.050   19:13:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:17.050   19:13:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@40 -- # rpc_cmd bdev_wait_for_examine
00:29:17.050   19:13:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:17.050   19:13:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:17.050   19:13:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:17.050   19:13:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@48 -- # hostpid=116932
00:29:17.050   19:13:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock
00:29:17.050   19:13:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@49 -- # waitforlisten 116932 /tmp/host.sock
00:29:17.050   19:13:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@835 -- # '[' -z 116932 ']'
00:29:17.050   19:13:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock
00:29:17.050   19:13:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:17.050  Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...
00:29:17.050   19:13:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...'
00:29:17.050   19:13:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:17.050   19:13:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:17.050  [2024-12-13 19:13:48.720702] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:29:17.050  [2024-12-13 19:13:48.720804] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116932 ]
00:29:17.050  [2024-12-13 19:13:48.869652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:17.309  [2024-12-13 19:13:48.908922] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:29:17.309   19:13:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:17.309   19:13:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@868 -- # return 0
00:29:17.309   19:13:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM
00:29:17.309   19:13:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@52 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahipid;' EXIT
00:29:17.309   19:13:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@56 -- # avahi-daemon --kill
00:29:17.567   19:13:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@58 -- # avahipid=116947
00:29:17.567   19:13:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@59 -- # sleep 1
00:29:17.567    19:13:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no'
00:29:17.567   19:13:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63
00:29:17.567  Process 1057 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid)
00:29:17.567  Found user 'avahi' (UID 70) and group 'avahi' (GID 70).
00:29:17.567  Successfully dropped root privileges.
00:29:17.567  avahi-daemon 0.8 starting up.
00:29:17.567  WARNING: No NSS support for mDNS detected, consider installing nss-mdns!
00:29:18.501  Successfully called chroot().
00:29:18.501  Successfully dropped remaining capabilities.
00:29:18.501  No service file found in /etc/avahi/services.
00:29:18.501  Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.4.
00:29:18.501  New relevant interface nvmf_tgt_if2.IPv4 for mDNS.
00:29:18.501  Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.3.
00:29:18.501  New relevant interface nvmf_tgt_if.IPv4 for mDNS.
00:29:18.501  Network interface enumeration completed.
00:29:18.501  Registering new address record for fe80::6084:d4ff:fe9b:2260 on nvmf_tgt_if2.*.
00:29:18.501  Registering new address record for 10.0.0.4 on nvmf_tgt_if2.IPv4.
00:29:18.501  Registering new address record for fe80::3c47:c4ff:feac:c7a5 on nvmf_tgt_if.*.
00:29:18.501  Registering new address record for 10.0.0.3 on nvmf_tgt_if.IPv4.
00:29:18.501  Server startup complete. Host name is fedora39-cloud-1721788873-2326.local. Local service cookie is 2195655431.
00:29:18.501   19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme
00:29:18.501   19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:18.501   19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:18.501   19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:18.501   19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@62 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
00:29:18.501   19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:18.501   19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:18.501   19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:18.501   19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@114 -- # notify_id=0
00:29:18.501    19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@120 -- # get_subsystem_names
00:29:18.501    19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:29:18.501    19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:18.501    19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name'
00:29:18.501    19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs
00:29:18.501    19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort
00:29:18.501    19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:18.501    19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:18.501   19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@120 -- # [[ '' == '' ]]
00:29:18.501    19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # get_bdev_list
00:29:18.501    19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:29:18.501    19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:18.501    19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort
00:29:18.501    19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:18.501    19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name'
00:29:18.501    19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs
00:29:18.501    19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:18.501   19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # [[ '' == '' ]]
00:29:18.501   19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@123 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
00:29:18.501   19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:18.501   19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:18.501   19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:18.501    19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # get_subsystem_names
00:29:18.501    19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name'
00:29:18.501    19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:29:18.501    19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs
00:29:18.501    19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort
00:29:18.501    19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:18.501    19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:18.501    19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:18.761   19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # [[ '' == '' ]]
00:29:18.761    19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # get_bdev_list
00:29:18.761    19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:29:18.761    19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort
00:29:18.761    19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:18.761    19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs
00:29:18.761    19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:18.761    19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name'
00:29:18.761    19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:18.761   19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # [[ '' == '' ]]
00:29:18.761   19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
00:29:18.761   19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:18.761   19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:18.761   19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:18.761    19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # get_subsystem_names
00:29:18.761    19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:29:18.761    19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name'
00:29:18.761    19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:18.761    19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:18.761    19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort
00:29:18.761    19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs
00:29:18.761    19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:18.761  [2024-12-13 19:13:50.465174] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED
00:29:18.761   19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # [[ '' == '' ]]
00:29:18.761    19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # get_bdev_list
00:29:18.761    19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:29:18.761    19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name'
00:29:18.761    19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:18.761    19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:18.761    19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort
00:29:18.761    19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs
00:29:18.761    19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:18.761   19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # [[ '' == '' ]]
00:29:18.761   19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@133 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
00:29:18.761   19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:18.761   19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:18.761  [2024-12-13 19:13:50.534122] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:29:18.761   19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:18.761   19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
00:29:18.761   19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:18.761   19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:18.761   19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:18.761   19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@140 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20
00:29:18.761   19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:18.761   19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:18.761   19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:18.761   19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2
00:29:18.761   19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:18.761   19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:18.761   19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:18.761   19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@145 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test
00:29:18.761   19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:18.761   19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:18.761   19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:18.761   19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_publish_mdns_prr
00:29:18.761   19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:18.761   19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:19.020   19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:19.020   19:13:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@149 -- # sleep 5
00:29:19.585  [2024-12-13 19:13:51.365173] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW
00:29:20.151  [2024-12-13 19:13:51.765184] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local'
00:29:20.151  [2024-12-13 19:13:51.765206] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: 	fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4)
00:29:20.151  	TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:29:20.151  	cookie is 0
00:29:20.151  	is_local: 1
00:29:20.151  	our_own: 0
00:29:20.151  	wide_area: 0
00:29:20.151  	multicast: 1
00:29:20.151  	cached: 1
00:29:20.152  [2024-12-13 19:13:51.865177] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local'
00:29:20.152  [2024-12-13 19:13:51.865197] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: 	fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3)
00:29:20.152  	TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:29:20.152  	cookie is 0
00:29:20.152  	is_local: 1
00:29:20.152  	our_own: 0
00:29:20.152  	wide_area: 0
00:29:20.152  	multicast: 1
00:29:20.152  	cached: 1
00:29:21.114  [2024-12-13 19:13:52.766057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.114  [2024-12-13 19:13:52.766114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc671f0 with addr=10.0.0.4, port=8009
00:29:21.114  [2024-12-13 19:13:52.766144] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:29:21.114  [2024-12-13 19:13:52.766159] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:29:21.114  [2024-12-13 19:13:52.766168] bdev_nvme.c:7584:discovery_poller: *ERROR*: Discovery[10.0.0.4:8009] could not start discovery connect
00:29:21.114  [2024-12-13 19:13:52.877945] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached
00:29:21.114  [2024-12-13 19:13:52.878129] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected
00:29:21.114  [2024-12-13 19:13:52.878160] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command
00:29:21.372  [2024-12-13 19:13:52.964048] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem mdns1_nvme0
00:29:21.372  [2024-12-13 19:13:53.018427] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420
00:29:21.372  [2024-12-13 19:13:53.019082] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xc9ef10:1 started.
00:29:21.372  [2024-12-13 19:13:53.020803] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns1_nvme0 done
00:29:21.372  [2024-12-13 19:13:53.020823] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again
00:29:21.372  [2024-12-13 19:13:53.026256] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xc9ef10 was disconnected and freed. delete nvme_qpair.
00:29:22.305  [2024-12-13 19:13:53.765941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.305  [2024-12-13 19:13:53.766161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc9ed10 with addr=10.0.0.4, port=8009
00:29:22.305  [2024-12-13 19:13:53.766310] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:29:22.305  [2024-12-13 19:13:53.766412] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:29:22.305  [2024-12-13 19:13:53.766452] bdev_nvme.c:7584:discovery_poller: *ERROR*: Discovery[10.0.0.4:8009] could not start discovery connect
00:29:23.240  [2024-12-13 19:13:54.765934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.240  [2024-12-13 19:13:54.766124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc87a20 with addr=10.0.0.4, port=8009
00:29:23.240  [2024-12-13 19:13:54.766273] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:29:23.240  [2024-12-13 19:13:54.766405] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:29:23.240  [2024-12-13 19:13:54.766445] bdev_nvme.c:7584:discovery_poller: *ERROR*: Discovery[10.0.0.4:8009] could not start discovery connect
00:29:23.805   19:13:55 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # check_mdns_request_exists spdk1 10.0.0.4 8009 'not found'
00:29:23.805   19:13:55 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1
00:29:23.805   19:13:55 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.4
00:29:23.805   19:13:55 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009
00:29:23.805   19:13:55 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local 'check_type=not found'
00:29:23.805   19:13:55 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output
00:29:23.805    19:13:55 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p
00:29:23.805   19:13:55 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk0;_nvme-disc._tcp;local
00:29:23.805  +;(null);IPv4;spdk0;_nvme-disc._tcp;local
00:29:23.805  =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:29:23.805  =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"'
00:29:23.805   19:13:55 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines
00:29:23.805   19:13:55 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:29:23.805   19:13:55 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]]
00:29:23.805   19:13:55 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:29:23.805   19:13:55 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]]
00:29:23.805   19:13:55 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:29:23.805   19:13:55 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]]
00:29:23.805   19:13:55 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:29:23.805   19:13:55 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]]
00:29:23.805   19:13:55 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # [[ not found == \f\o\u\n\d ]]
00:29:23.805   19:13:55 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@108 -- # return 0
00:29:23.805   19:13:55 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.4 -s 8009
00:29:23.805   19:13:55 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:23.806   19:13:55 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:23.806  [2024-12-13 19:13:55.624155] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 8009 ***
00:29:23.806  [2024-12-13 19:13:55.627300] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer
00:29:23.806  [2024-12-13 19:13:55.627486] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command
00:29:24.064   19:13:55 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:24.064   19:13:55 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@156 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4420
00:29:24.064   19:13:55 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:24.064   19:13:55 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:24.064  [2024-12-13 19:13:55.631914] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 ***
00:29:24.064  [2024-12-13 19:13:55.632301] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer
00:29:24.064   19:13:55 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:24.064   19:13:55 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@157 -- # sleep 1
00:29:24.064  [2024-12-13 19:13:55.764459] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again
00:29:24.064  [2024-12-13 19:13:55.764632] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command
00:29:24.064  [2024-12-13 19:13:55.770215] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr attached
00:29:24.064  [2024-12-13 19:13:55.770365] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr connected
00:29:24.064  [2024-12-13 19:13:55.770414] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command
00:29:24.064  [2024-12-13 19:13:55.850630] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again
00:29:24.064  [2024-12-13 19:13:55.858323] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 new subsystem mdns0_nvme0
00:29:24.323  [2024-12-13 19:13:55.918766] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr was created to 10.0.0.4:4420
00:29:24.323  [2024-12-13 19:13:55.919462] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Connecting qpair 0xc9be60:1 started.
00:29:24.323  [2024-12-13 19:13:55.920944] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.4:8009] attach mdns0_nvme0 done
00:29:24.323  [2024-12-13 19:13:55.921088] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 found again
00:29:24.323  [2024-12-13 19:13:55.928527] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpair 0xc9be60 was disconnected and freed. delete nvme_qpair.
00:29:24.890   19:13:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@160 -- # check_mdns_request_exists spdk1 10.0.0.4 8009 found
00:29:24.890   19:13:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1
00:29:24.890   19:13:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.4
00:29:24.890   19:13:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009
00:29:24.890   19:13:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local check_type=found
00:29:24.890   19:13:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output
00:29:24.890    19:13:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p
00:29:24.890   19:13:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk1;_nvme-disc._tcp;local
00:29:24.890  +;(null);IPv4;spdk0;_nvme-disc._tcp;local
00:29:24.890  +;(null);IPv4;spdk1;_nvme-disc._tcp;local
00:29:24.890  +;(null);IPv4;spdk0;_nvme-disc._tcp;local
00:29:24.890  =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:29:24.890  =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:29:24.890  =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:29:24.890  =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"'
00:29:24.890   19:13:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines
00:29:24.890   19:13:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:29:24.890   19:13:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]]
00:29:24.890   19:13:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\4* ]]
00:29:24.890   19:13:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:29:24.890   19:13:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]]
00:29:24.890   19:13:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:29:24.890   19:13:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]]
00:29:24.890   19:13:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\4* ]]
00:29:24.890   19:13:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:29:24.890   19:13:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]]
00:29:24.890   19:13:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:29:24.890   19:13:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]]
00:29:24.890   19:13:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\1\0\.\0\.\0\.\4* ]]
00:29:24.890   19:13:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\8\0\0\9* ]]
00:29:24.890   19:13:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # [[ found == \f\o\u\n\d ]]
00:29:24.890   19:13:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@98 -- # return 0
00:29:24.890    19:13:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # get_mdns_discovery_svcs
00:29:24.890    19:13:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info
00:29:24.890    19:13:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort
00:29:24.890    19:13:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:24.891    19:13:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name'
00:29:24.891    19:13:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:24.891    19:13:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs
00:29:24.891    19:13:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:25.150   19:13:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # [[ mdns == \m\d\n\s ]]
00:29:25.150    19:13:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@163 -- # get_discovery_ctrlrs
00:29:25.150    19:13:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name'
00:29:25.150    19:13:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:29:25.150    19:13:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort
00:29:25.150    19:13:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:25.150    19:13:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs
00:29:25.150    19:13:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:25.150    19:13:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:25.150  [2024-12-13 19:13:56.765194] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local'
00:29:25.150  [2024-12-13 19:13:56.765421] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: 	fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3)
00:29:25.150  	TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:29:25.150  	cookie is 0
00:29:25.150  	is_local: 1
00:29:25.150  	our_own: 0
00:29:25.150  	wide_area: 0
00:29:25.150  	multicast: 1
00:29:25.150  	cached: 1
00:29:25.150  [2024-12-13 19:13:56.765438] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.3 trid->trsvcid: 8009
00:29:25.150   19:13:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@163 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]]
00:29:25.150    19:13:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # get_subsystem_names
00:29:25.150    19:13:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:29:25.150    19:13:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name'
00:29:25.150    19:13:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs
00:29:25.150    19:13:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort
00:29:25.150    19:13:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:25.150    19:13:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:25.150    19:13:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:25.150   19:13:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]]
00:29:25.150    19:13:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # get_bdev_list
00:29:25.150    19:13:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:29:25.150    19:13:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:25.150    19:13:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort
00:29:25.150    19:13:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:25.150    19:13:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name'
00:29:25.150    19:13:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs
00:29:25.150    19:13:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:25.150   19:13:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]]
00:29:25.150    19:13:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0
00:29:25.150    19:13:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0
00:29:25.150    19:13:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n
00:29:25.150    19:13:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:29:25.150    19:13:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:25.150    19:13:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:25.150    19:13:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs
00:29:25.150    19:13:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:25.150   19:13:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # [[ 4420 == \4\4\2\0 ]]
00:29:25.150    19:13:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0
00:29:25.150    19:13:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0
00:29:25.150    19:13:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:25.150    19:13:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:29:25.150    19:13:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n
00:29:25.150    19:13:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:25.150    19:13:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs
00:29:25.409    19:13:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:25.409   19:13:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # [[ 4420 == \4\4\2\0 ]]
00:29:25.409   19:13:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@168 -- # get_notification_count
00:29:25.409    19:13:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:29:25.409    19:13:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. | length'
00:29:25.409    19:13:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:25.409    19:13:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:25.409    19:13:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:25.409  [2024-12-13 19:13:57.065194] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local'
00:29:25.409  [2024-12-13 19:13:57.065217] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: 	fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4)
00:29:25.409  	TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:29:25.409  	cookie is 0
00:29:25.409  	is_local: 1
00:29:25.409  	our_own: 0
00:29:25.409  	wide_area: 0
00:29:25.409  	multicast: 1
00:29:25.409  	cached: 1
00:29:25.409  [2024-12-13 19:13:57.065265] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.4 trid->trsvcid: 8009
00:29:25.409   19:13:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=2
00:29:25.409   19:13:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=2
00:29:25.409   19:13:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@169 -- # [[ 2 == 2 ]]
00:29:25.409   19:13:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@172 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1
00:29:25.409   19:13:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:25.409   19:13:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:25.410   19:13:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:25.410   19:13:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@173 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3
00:29:25.410   19:13:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:25.410   19:13:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:25.410  [2024-12-13 19:13:57.082960] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xc9e330:1 started.
00:29:25.410   19:13:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:25.410   19:13:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # sleep 1
00:29:25.410  [2024-12-13 19:13:57.089016] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xc9e330 was disconnected and freed. delete nvme_qpair.
00:29:25.410  [2024-12-13 19:13:57.090717] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Connecting qpair 0xc9b4b0:1 started.
00:29:25.410  [2024-12-13 19:13:57.098542] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpair 0xc9b4b0 was disconnected and freed. delete nvme_qpair.
00:29:26.345    19:13:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # get_bdev_list
00:29:26.345    19:13:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:29:26.345    19:13:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name'
00:29:26.345    19:13:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:26.345    19:13:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort
00:29:26.345    19:13:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:26.345    19:13:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs
00:29:26.346    19:13:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:26.346   19:13:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]]
00:29:26.346   19:13:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@177 -- # get_notification_count
00:29:26.346    19:13:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:29:26.346    19:13:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. | length'
00:29:26.346    19:13:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:26.346    19:13:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:26.346    19:13:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:26.604   19:13:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=2
00:29:26.604   19:13:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=4
00:29:26.604   19:13:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@178 -- # [[ 2 == 2 ]]
00:29:26.604   19:13:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@182 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421
00:29:26.604   19:13:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:26.604   19:13:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:26.604  [2024-12-13 19:13:58.206094] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 ***
00:29:26.604  [2024-12-13 19:13:58.207397] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer
00:29:26.604  [2024-12-13 19:13:58.207430] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command
00:29:26.604  [2024-12-13 19:13:58.207461] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer
00:29:26.604  [2024-12-13 19:13:58.207474] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command
00:29:26.604   19:13:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:26.604   19:13:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@183 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4421
00:29:26.604   19:13:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:26.604   19:13:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:26.604  [2024-12-13 19:13:58.213845] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4421 ***
00:29:26.604  [2024-12-13 19:13:58.214411] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer
00:29:26.604  [2024-12-13 19:13:58.214505] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer
00:29:26.604   19:13:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:26.605   19:13:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@184 -- # sleep 1
00:29:26.605  [2024-12-13 19:13:58.344527] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 new path for mdns0_nvme0
00:29:26.605  [2024-12-13 19:13:58.346523] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for mdns1_nvme0
00:29:26.605  [2024-12-13 19:13:58.403921] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 2] ctrlr was created to 10.0.0.4:4421
00:29:26.605  [2024-12-13 19:13:58.403974] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.4:8009] attach mdns0_nvme0 done
00:29:26.605  [2024-12-13 19:13:58.403983] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 found again
00:29:26.605  [2024-12-13 19:13:58.403988] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again
00:29:26.605  [2024-12-13 19:13:58.404003] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command
00:29:26.605  [2024-12-13 19:13:58.404791] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421
00:29:26.605  [2024-12-13 19:13:58.404819] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns1_nvme0 done
00:29:26.605  [2024-12-13 19:13:58.404826] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again
00:29:26.605  [2024-12-13 19:13:58.404831] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again
00:29:26.605  [2024-12-13 19:13:58.404845] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command
00:29:26.863  [2024-12-13 19:13:58.449593] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 found again
00:29:26.863  [2024-12-13 19:13:58.449610] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again
00:29:26.863  [2024-12-13 19:13:58.450600] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again
00:29:26.863  [2024-12-13 19:13:58.450609] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again
00:29:27.431    19:13:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # get_subsystem_names
00:29:27.432    19:13:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:29:27.432    19:13:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name'
00:29:27.432    19:13:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:27.432    19:13:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:27.432    19:13:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort
00:29:27.432    19:13:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs
00:29:27.432    19:13:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:27.690   19:13:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]]
00:29:27.690    19:13:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # get_bdev_list
00:29:27.690    19:13:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:29:27.690    19:13:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name'
00:29:27.690    19:13:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort
00:29:27.690    19:13:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs
00:29:27.690    19:13:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:27.690    19:13:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:27.690    19:13:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:27.690   19:13:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]]
00:29:27.690    19:13:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@188 -- # get_subsystem_paths mdns0_nvme0
00:29:27.690    19:13:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:29:27.690    19:13:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0
00:29:27.690    19:13:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:27.690    19:13:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:27.690    19:13:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n
00:29:27.690    19:13:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs
00:29:27.690    19:13:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:27.690   19:13:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@188 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]]
00:29:27.690    19:13:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@189 -- # get_subsystem_paths mdns1_nvme0
00:29:27.690    19:13:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0
00:29:27.690    19:13:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:27.690    19:13:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:27.690    19:13:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n
00:29:27.690    19:13:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:29:27.690    19:13:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs
00:29:27.690    19:13:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:27.690   19:13:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@189 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]]
00:29:27.690   19:13:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@190 -- # get_notification_count
00:29:27.690    19:13:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4
00:29:27.690    19:13:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. | length'
00:29:27.690    19:13:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:27.690    19:13:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:27.690    19:13:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:27.951   19:13:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=0
00:29:27.951   19:13:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=4
00:29:27.951   19:13:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # [[ 0 == 0 ]]
00:29:27.951   19:13:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@195 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
00:29:27.951   19:13:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:27.951   19:13:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:27.951  [2024-12-13 19:13:59.531092] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer
00:29:27.951  [2024-12-13 19:13:59.531121] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command
00:29:27.951  [2024-12-13 19:13:59.531151] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer
00:29:27.951  [2024-12-13 19:13:59.531164] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command
00:29:27.951  [2024-12-13 19:13:59.532184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:29:27.951  [2024-12-13 19:13:59.532227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:27.951  [2024-12-13 19:13:59.532240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:29:27.951  [2024-12-13 19:13:59.532264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:27.951  [2024-12-13 19:13:59.532274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:29:27.951  [2024-12-13 19:13:59.532283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:27.951  [2024-12-13 19:13:59.532292] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:29:27.951  [2024-12-13 19:13:59.532300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:27.951  [2024-12-13 19:13:59.532309] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6b160 is same with the state(6) to be set
00:29:27.951   19:13:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:27.951   19:13:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@196 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4420
00:29:27.951   19:13:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:27.951   19:13:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:27.951  [2024-12-13 19:13:59.539099] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer
00:29:27.951  [2024-12-13 19:13:59.539355] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer
00:29:27.952  [2024-12-13 19:13:59.539425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:29:27.952  [2024-12-13 19:13:59.539454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:27.952  [2024-12-13 19:13:59.539468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:29:27.952  [2024-12-13 19:13:59.539477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:27.952  [2024-12-13 19:13:59.539487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:29:27.952  [2024-12-13 19:13:59.539496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:27.952  [2024-12-13 19:13:59.539506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:29:27.952  [2024-12-13 19:13:59.539515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:27.952  [2024-12-13 19:13:59.539524] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca7030 is same with the state(6) to be set
00:29:27.952  [2024-12-13 19:13:59.542144] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6b160 (9): Bad file descriptor
00:29:27.952   19:13:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:27.952   19:13:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@197 -- # sleep 1
00:29:27.952  [2024-12-13 19:13:59.549385] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca7030 (9): Bad file descriptor
00:29:27.952  [2024-12-13 19:13:59.552161] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:29:27.952  [2024-12-13 19:13:59.552179] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:29:27.952  [2024-12-13 19:13:59.552184] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:29:27.952  [2024-12-13 19:13:59.552189] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:29:27.952  [2024-12-13 19:13:59.552254] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:29:27.952  [2024-12-13 19:13:59.552340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.952  [2024-12-13 19:13:59.552377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6b160 with addr=10.0.0.3, port=4420
00:29:27.952  [2024-12-13 19:13:59.552387] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6b160 is same with the state(6) to be set
00:29:27.952  [2024-12-13 19:13:59.552403] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6b160 (9): Bad file descriptor
00:29:27.952  [2024-12-13 19:13:59.552417] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:29:27.952  [2024-12-13 19:13:59.552425] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:29:27.952  [2024-12-13 19:13:59.552435] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:29:27.952  [2024-12-13 19:13:59.552443] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:29:27.952  [2024-12-13 19:13:59.552448] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:29:27.952  [2024-12-13 19:13:59.552453] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:29:27.952  [2024-12-13 19:13:59.559390] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset.
00:29:27.952  [2024-12-13 19:13:59.559408] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted.
00:29:27.952  [2024-12-13 19:13:59.559413] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr.
00:29:27.952  [2024-12-13 19:13:59.559417] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller
00:29:27.952  [2024-12-13 19:13:59.559443] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr.
00:29:27.952  [2024-12-13 19:13:59.559489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.952  [2024-12-13 19:13:59.559505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca7030 with addr=10.0.0.4, port=4420
00:29:27.952  [2024-12-13 19:13:59.559514] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca7030 is same with the state(6) to be set
00:29:27.952  [2024-12-13 19:13:59.559527] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca7030 (9): Bad file descriptor
00:29:27.952  [2024-12-13 19:13:59.559539] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state
00:29:27.952  [2024-12-13 19:13:59.559550] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed
00:29:27.952  [2024-12-13 19:13:59.559557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state.
00:29:27.952  [2024-12-13 19:13:59.559563] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected.
00:29:27.952  [2024-12-13 19:13:59.559568] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets.
00:29:27.952  [2024-12-13 19:13:59.559572] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed.
00:29:27.952  [2024-12-13 19:13:59.562224] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:29:27.952  [2024-12-13 19:13:59.562241] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:29:27.952  [2024-12-13 19:13:59.562246] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:29:27.952  [2024-12-13 19:13:59.562265] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:29:27.952  [2024-12-13 19:13:59.562284] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:29:27.952  [2024-12-13 19:13:59.562327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.952  [2024-12-13 19:13:59.562344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6b160 with addr=10.0.0.3, port=4420
00:29:27.952  [2024-12-13 19:13:59.562352] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6b160 is same with the state(6) to be set
00:29:27.952  [2024-12-13 19:13:59.562365] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6b160 (9): Bad file descriptor
00:29:27.952  [2024-12-13 19:13:59.562377] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:29:27.952  [2024-12-13 19:13:59.562384] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:29:27.952  [2024-12-13 19:13:59.562391] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:29:27.952  [2024-12-13 19:13:59.562397] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:29:27.952  [2024-12-13 19:13:59.562402] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:29:27.952  [2024-12-13 19:13:59.562406] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:29:27.952  [2024-12-13 19:13:59.569450] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset.
00:29:27.952  [2024-12-13 19:13:59.569467] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted.
00:29:27.952  [2024-12-13 19:13:59.569472] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr.
00:29:27.952  [2024-12-13 19:13:59.569476] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller
00:29:27.952  [2024-12-13 19:13:59.569515] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr.
00:29:27.952  [2024-12-13 19:13:59.569559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.952  [2024-12-13 19:13:59.569576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca7030 with addr=10.0.0.4, port=4420
00:29:27.952  [2024-12-13 19:13:59.569585] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca7030 is same with the state(6) to be set
00:29:27.952  [2024-12-13 19:13:59.569598] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca7030 (9): Bad file descriptor
00:29:27.952  [2024-12-13 19:13:59.569610] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state
00:29:27.952  [2024-12-13 19:13:59.569628] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed
00:29:27.952  [2024-12-13 19:13:59.569635] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state.
00:29:27.952  [2024-12-13 19:13:59.569641] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected.
00:29:27.952  [2024-12-13 19:13:59.569646] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets.
00:29:27.952  [2024-12-13 19:13:59.569649] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed.
00:29:27.952  [2024-12-13 19:13:59.572292] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:29:27.952  [2024-12-13 19:13:59.572308] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:29:27.952  [2024-12-13 19:13:59.572313] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:29:27.952  [2024-12-13 19:13:59.572317] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:29:27.952  [2024-12-13 19:13:59.572340] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:29:27.952  [2024-12-13 19:13:59.572382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.952  [2024-12-13 19:13:59.572397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6b160 with addr=10.0.0.3, port=4420
00:29:27.952  [2024-12-13 19:13:59.572405] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6b160 is same with the state(6) to be set
00:29:27.952  [2024-12-13 19:13:59.572418] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6b160 (9): Bad file descriptor
00:29:27.952  [2024-12-13 19:13:59.572429] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:29:27.952  [2024-12-13 19:13:59.572437] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:29:27.952  [2024-12-13 19:13:59.572444] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:29:27.952  [2024-12-13 19:13:59.572450] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:29:27.953  [2024-12-13 19:13:59.572454] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:29:27.953  [2024-12-13 19:13:59.572458] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:29:27.953  [2024-12-13 19:13:59.579522] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset.
00:29:27.953  [2024-12-13 19:13:59.579540] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted.
00:29:27.953  [2024-12-13 19:13:59.579545] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr.
00:29:27.953  [2024-12-13 19:13:59.579549] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller
00:29:27.953  [2024-12-13 19:13:59.579574] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr.
00:29:27.953  [2024-12-13 19:13:59.579615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.953  [2024-12-13 19:13:59.579631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca7030 with addr=10.0.0.4, port=4420
00:29:27.953  [2024-12-13 19:13:59.579639] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca7030 is same with the state(6) to be set
00:29:27.953  [2024-12-13 19:13:59.579652] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca7030 (9): Bad file descriptor
00:29:27.953  [2024-12-13 19:13:59.579663] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state
00:29:27.953  [2024-12-13 19:13:59.579670] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed
00:29:27.953  [2024-12-13 19:13:59.579678] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state.
00:29:27.953  [2024-12-13 19:13:59.579684] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected.
00:29:27.953  [2024-12-13 19:13:59.579688] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets.
00:29:27.953  [2024-12-13 19:13:59.579692] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed.
00:29:27.953  [2024-12-13 19:13:59.582348] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:29:27.953  [2024-12-13 19:13:59.582365] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:29:27.953  [2024-12-13 19:13:59.582370] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:29:27.953  [2024-12-13 19:13:59.582374] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:29:27.953  [2024-12-13 19:13:59.582398] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:29:27.953  [2024-12-13 19:13:59.582439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.953  [2024-12-13 19:13:59.582455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6b160 with addr=10.0.0.3, port=4420
00:29:27.953  [2024-12-13 19:13:59.582463] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6b160 is same with the state(6) to be set
00:29:27.953  [2024-12-13 19:13:59.582476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6b160 (9): Bad file descriptor
00:29:27.953  [2024-12-13 19:13:59.582487] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:29:27.953  [2024-12-13 19:13:59.582494] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:29:27.953  [2024-12-13 19:13:59.582501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:29:27.953  [2024-12-13 19:13:59.582508] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:29:27.953  [2024-12-13 19:13:59.582512] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:29:27.953  [2024-12-13 19:13:59.582516] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:29:27.953  [2024-12-13 19:13:59.589582] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset.
00:29:27.953  [2024-12-13 19:13:59.589744] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted.
00:29:27.953  [2024-12-13 19:13:59.589755] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr.
00:29:27.953  [2024-12-13 19:13:59.589761] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller
00:29:27.953  [2024-12-13 19:13:59.589807] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr.
00:29:27.953  [2024-12-13 19:13:59.589866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.953  [2024-12-13 19:13:59.589886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca7030 with addr=10.0.0.4, port=4420
00:29:27.953  [2024-12-13 19:13:59.589897] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca7030 is same with the state(6) to be set
00:29:27.953  [2024-12-13 19:13:59.589913] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca7030 (9): Bad file descriptor
00:29:27.953  [2024-12-13 19:13:59.589926] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state
00:29:27.953  [2024-12-13 19:13:59.589934] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed
00:29:27.953  [2024-12-13 19:13:59.589943] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state.
00:29:27.953  [2024-12-13 19:13:59.589950] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected.
00:29:27.953  [2024-12-13 19:13:59.589955] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets.
00:29:27.953  [2024-12-13 19:13:59.589966] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed.
00:29:27.953  [2024-12-13 19:13:59.592408] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:29:27.953  [2024-12-13 19:13:59.592425] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:29:27.953  [2024-12-13 19:13:59.592430] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:29:27.953  [2024-12-13 19:13:59.592434] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:29:27.953  [2024-12-13 19:13:59.592458] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:29:27.953  [2024-12-13 19:13:59.592501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.953  [2024-12-13 19:13:59.592517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6b160 with addr=10.0.0.3, port=4420
00:29:27.953  [2024-12-13 19:13:59.592526] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6b160 is same with the state(6) to be set
00:29:27.953  [2024-12-13 19:13:59.592538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6b160 (9): Bad file descriptor
00:29:27.953  [2024-12-13 19:13:59.592550] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:29:27.953  [2024-12-13 19:13:59.592557] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:29:27.953  [2024-12-13 19:13:59.592564] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:29:27.953  [2024-12-13 19:13:59.592571] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:29:27.953  [2024-12-13 19:13:59.592575] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:29:27.953  [2024-12-13 19:13:59.592579] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:29:27.953  [2024-12-13 19:13:59.599814] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset.
00:29:27.953  [2024-12-13 19:13:59.599951] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted.
00:29:27.953  [2024-12-13 19:13:59.599961] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr.
00:29:27.953  [2024-12-13 19:13:59.599966] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller
00:29:27.953  [2024-12-13 19:13:59.600014] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr.
00:29:27.953  [2024-12-13 19:13:59.600070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.953  [2024-12-13 19:13:59.600088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca7030 with addr=10.0.0.4, port=4420
00:29:27.953  [2024-12-13 19:13:59.600106] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca7030 is same with the state(6) to be set
00:29:27.953  [2024-12-13 19:13:59.600121] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca7030 (9): Bad file descriptor
00:29:27.953  [2024-12-13 19:13:59.600153] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state
00:29:27.953  [2024-12-13 19:13:59.600163] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed
00:29:27.953  [2024-12-13 19:13:59.600171] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state.
00:29:27.953  [2024-12-13 19:13:59.600178] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected.
00:29:27.953  [2024-12-13 19:13:59.600184] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets.
00:29:27.953  [2024-12-13 19:13:59.600188] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed.
00:29:27.953  [2024-12-13 19:13:59.602467] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:29:27.953  [2024-12-13 19:13:59.602485] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:29:27.953  [2024-12-13 19:13:59.602489] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:29:27.953  [2024-12-13 19:13:59.602493] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:29:27.953  [2024-12-13 19:13:59.602517] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:29:27.953  [2024-12-13 19:13:59.602560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.953  [2024-12-13 19:13:59.602576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6b160 with addr=10.0.0.3, port=4420
00:29:27.953  [2024-12-13 19:13:59.602585] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6b160 is same with the state(6) to be set
00:29:27.953  [2024-12-13 19:13:59.602597] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6b160 (9): Bad file descriptor
00:29:27.953  [2024-12-13 19:13:59.602608] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:29:27.953  [2024-12-13 19:13:59.602616] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:29:27.953  [2024-12-13 19:13:59.602623] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:29:27.953  [2024-12-13 19:13:59.602630] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:29:27.954  [2024-12-13 19:13:59.602634] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:29:27.954  [2024-12-13 19:13:59.602638] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:29:27.954  [2024-12-13 19:13:59.610024] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset.
00:29:27.954  [2024-12-13 19:13:59.610047] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted.
00:29:27.954  [2024-12-13 19:13:59.610052] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr.
00:29:27.954  [2024-12-13 19:13:59.610057] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller
00:29:27.954  [2024-12-13 19:13:59.610084] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr.
00:29:27.954  [2024-12-13 19:13:59.610128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.954  [2024-12-13 19:13:59.610145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca7030 with addr=10.0.0.4, port=4420
00:29:27.954  [2024-12-13 19:13:59.610154] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca7030 is same with the state(6) to be set
00:29:27.954  [2024-12-13 19:13:59.610167] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca7030 (9): Bad file descriptor
00:29:27.954  [2024-12-13 19:13:59.610191] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state
00:29:27.954  [2024-12-13 19:13:59.610200] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed
00:29:27.954  [2024-12-13 19:13:59.610207] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state.
00:29:27.954  [2024-12-13 19:13:59.610214] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected.
00:29:27.954  [2024-12-13 19:13:59.610228] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets.
00:29:27.954  [2024-12-13 19:13:59.610233] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed.
00:29:27.954  [2024-12-13 19:13:59.612525] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:29:27.954  [2024-12-13 19:13:59.612545] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:29:27.954  [2024-12-13 19:13:59.612550] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:29:27.954  [2024-12-13 19:13:59.612554] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:29:27.954  [2024-12-13 19:13:59.612571] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:29:27.954  [2024-12-13 19:13:59.612611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.954  [2024-12-13 19:13:59.612628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6b160 with addr=10.0.0.3, port=4420
00:29:27.954  [2024-12-13 19:13:59.612637] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6b160 is same with the state(6) to be set
00:29:27.954  [2024-12-13 19:13:59.612649] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6b160 (9): Bad file descriptor
00:29:27.954  [2024-12-13 19:13:59.612660] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:29:27.954  [2024-12-13 19:13:59.612667] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:29:27.954  [2024-12-13 19:13:59.612674] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:29:27.954  [2024-12-13 19:13:59.612680] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:29:27.954  [2024-12-13 19:13:59.612685] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:29:27.954  [2024-12-13 19:13:59.612689] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:29:27.954  [2024-12-13 19:13:59.620092] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset.
00:29:27.954  [2024-12-13 19:13:59.620114] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted.
00:29:27.954  [2024-12-13 19:13:59.620119] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr.
00:29:27.954  [2024-12-13 19:13:59.620123] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller
00:29:27.954  [2024-12-13 19:13:59.620141] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr.
00:29:27.954  [2024-12-13 19:13:59.620185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.954  [2024-12-13 19:13:59.620201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca7030 with addr=10.0.0.4, port=4420
00:29:27.954  [2024-12-13 19:13:59.620210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca7030 is same with the state(6) to be set
00:29:27.954  [2024-12-13 19:13:59.620235] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca7030 (9): Bad file descriptor
00:29:27.954  [2024-12-13 19:13:59.620279] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state
00:29:27.954  [2024-12-13 19:13:59.620289] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed
00:29:27.954  [2024-12-13 19:13:59.620297] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state.
00:29:27.954  [2024-12-13 19:13:59.620303] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected.
00:29:27.954  [2024-12-13 19:13:59.620308] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets.
00:29:27.954  [2024-12-13 19:13:59.620312] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed.
00:29:27.954  [2024-12-13 19:13:59.622577] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:29:27.954  [2024-12-13 19:13:59.622597] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:29:27.954  [2024-12-13 19:13:59.622602] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:29:27.954  [2024-12-13 19:13:59.622606] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:29:27.954  [2024-12-13 19:13:59.622623] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:29:27.954  [2024-12-13 19:13:59.622663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.954  [2024-12-13 19:13:59.622679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6b160 with addr=10.0.0.3, port=4420
00:29:27.954  [2024-12-13 19:13:59.622688] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6b160 is same with the state(6) to be set
00:29:27.954  [2024-12-13 19:13:59.622701] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6b160 (9): Bad file descriptor
00:29:27.954  [2024-12-13 19:13:59.622712] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:29:27.954  [2024-12-13 19:13:59.622719] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:29:27.954  [2024-12-13 19:13:59.622726] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:29:27.954  [2024-12-13 19:13:59.622733] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:29:27.954  [2024-12-13 19:13:59.622737] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:29:27.954  [2024-12-13 19:13:59.622741] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:29:27.954  [2024-12-13 19:13:59.630150] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset.
00:29:27.954  [2024-12-13 19:13:59.630176] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted.
00:29:27.954  [2024-12-13 19:13:59.630181] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr.
00:29:27.954  [2024-12-13 19:13:59.630185] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller
00:29:27.954  [2024-12-13 19:13:59.630203] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr.
00:29:27.954  [2024-12-13 19:13:59.630257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.954  [2024-12-13 19:13:59.630275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca7030 with addr=10.0.0.4, port=4420
00:29:27.954  [2024-12-13 19:13:59.630284] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca7030 is same with the state(6) to be set
00:29:27.954  [2024-12-13 19:13:59.630298] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca7030 (9): Bad file descriptor
00:29:27.954  [2024-12-13 19:13:59.630322] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state
00:29:27.954  [2024-12-13 19:13:59.630331] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed
00:29:27.954  [2024-12-13 19:13:59.630338] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state.
00:29:27.954  [2024-12-13 19:13:59.630345] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected.
00:29:27.954  [2024-12-13 19:13:59.630349] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets.
00:29:27.954  [2024-12-13 19:13:59.630353] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed.
00:29:27.954  [2024-12-13 19:13:59.632632] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:29:27.954  [2024-12-13 19:13:59.632652] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:29:27.954  [2024-12-13 19:13:59.632657] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:29:27.954  [2024-12-13 19:13:59.632661] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:29:27.954  [2024-12-13 19:13:59.632678] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:29:27.954  [2024-12-13 19:13:59.632718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.954  [2024-12-13 19:13:59.632734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6b160 with addr=10.0.0.3, port=4420
00:29:27.954  [2024-12-13 19:13:59.632742] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6b160 is same with the state(6) to be set
00:29:27.954  [2024-12-13 19:13:59.632755] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6b160 (9): Bad file descriptor
00:29:27.954  [2024-12-13 19:13:59.632766] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:29:27.955  [2024-12-13 19:13:59.632773] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:29:27.955  [2024-12-13 19:13:59.632781] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:29:27.955  [2024-12-13 19:13:59.632787] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:29:27.955  [2024-12-13 19:13:59.632791] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:29:27.955  [2024-12-13 19:13:59.632795] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:29:27.955  [2024-12-13 19:13:59.640212] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset.
00:29:27.955  [2024-12-13 19:13:59.640241] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted.
00:29:27.955  [2024-12-13 19:13:59.640247] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr.
00:29:27.955  [2024-12-13 19:13:59.640251] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller
00:29:27.955  [2024-12-13 19:13:59.640268] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr.
00:29:27.955  [2024-12-13 19:13:59.640309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.955  [2024-12-13 19:13:59.640325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca7030 with addr=10.0.0.4, port=4420
00:29:27.955  [2024-12-13 19:13:59.640334] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca7030 is same with the state(6) to be set
00:29:27.955  [2024-12-13 19:13:59.640347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca7030 (9): Bad file descriptor
00:29:27.955  [2024-12-13 19:13:59.640374] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state
00:29:27.955  [2024-12-13 19:13:59.640383] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed
00:29:27.955  [2024-12-13 19:13:59.640390] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state.
00:29:27.955  [2024-12-13 19:13:59.640396] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected.
00:29:27.955  [2024-12-13 19:13:59.640401] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets.
00:29:27.955  [2024-12-13 19:13:59.640405] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed.
00:29:27.955  [2024-12-13 19:13:59.642687] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:29:27.955  [2024-12-13 19:13:59.642708] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:29:27.955  [2024-12-13 19:13:59.642713] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:29:27.955  [2024-12-13 19:13:59.642717] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:29:27.955  [2024-12-13 19:13:59.642733] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:29:27.955  [2024-12-13 19:13:59.642773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.955  [2024-12-13 19:13:59.642788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6b160 with addr=10.0.0.3, port=4420
00:29:27.955  [2024-12-13 19:13:59.642796] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6b160 is same with the state(6) to be set
00:29:27.955  [2024-12-13 19:13:59.642809] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6b160 (9): Bad file descriptor
00:29:27.955  [2024-12-13 19:13:59.642820] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:29:27.955  [2024-12-13 19:13:59.642827] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:29:27.955  [2024-12-13 19:13:59.642834] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:29:27.955  [2024-12-13 19:13:59.642840] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:29:27.955  [2024-12-13 19:13:59.642844] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:29:27.955  [2024-12-13 19:13:59.642848] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:29:27.955  [2024-12-13 19:13:59.650277] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset.
00:29:27.955  [2024-12-13 19:13:59.650298] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted.
00:29:27.955  [2024-12-13 19:13:59.650303] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr.
00:29:27.955  [2024-12-13 19:13:59.650307] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller
00:29:27.955  [2024-12-13 19:13:59.650324] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr.
00:29:27.955  [2024-12-13 19:13:59.650364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.955  [2024-12-13 19:13:59.650380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca7030 with addr=10.0.0.4, port=4420
00:29:27.955  [2024-12-13 19:13:59.650388] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca7030 is same with the state(6) to be set
00:29:27.955  [2024-12-13 19:13:59.650401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca7030 (9): Bad file descriptor
00:29:27.955  [2024-12-13 19:13:59.650425] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state
00:29:27.955  [2024-12-13 19:13:59.650441] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed
00:29:27.955  [2024-12-13 19:13:59.650448] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state.
00:29:27.955  [2024-12-13 19:13:59.650454] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected.
00:29:27.955  [2024-12-13 19:13:59.650459] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets.
00:29:27.955  [2024-12-13 19:13:59.650463] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed.
00:29:27.955  [2024-12-13 19:13:59.652742] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:29:27.955  [2024-12-13 19:13:59.652754] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:29:27.955  [2024-12-13 19:13:59.652758] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:29:27.955  [2024-12-13 19:13:59.652762] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:29:27.955  [2024-12-13 19:13:59.652782] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:29:27.955  [2024-12-13 19:13:59.652822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.955  [2024-12-13 19:13:59.652838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6b160 with addr=10.0.0.3, port=4420
00:29:27.955  [2024-12-13 19:13:59.652846] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6b160 is same with the state(6) to be set
00:29:27.955  [2024-12-13 19:13:59.652858] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6b160 (9): Bad file descriptor
00:29:27.955  [2024-12-13 19:13:59.652869] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:29:27.955  [2024-12-13 19:13:59.652876] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:29:27.955  [2024-12-13 19:13:59.652883] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:29:27.955  [2024-12-13 19:13:59.652889] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:29:27.955  [2024-12-13 19:13:59.652893] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:29:27.955  [2024-12-13 19:13:59.652897] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:29:27.955  [2024-12-13 19:13:59.660332] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset.
00:29:27.955  [2024-12-13 19:13:59.660352] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted.
00:29:27.955  [2024-12-13 19:13:59.660358] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr.
00:29:27.955  [2024-12-13 19:13:59.660362] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller
00:29:27.955  [2024-12-13 19:13:59.660381] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr.
00:29:27.955  [2024-12-13 19:13:59.660422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.955  [2024-12-13 19:13:59.660437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca7030 with addr=10.0.0.4, port=4420
00:29:27.955  [2024-12-13 19:13:59.660445] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca7030 is same with the state(6) to be set
00:29:27.955  [2024-12-13 19:13:59.660458] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca7030 (9): Bad file descriptor
00:29:27.955  [2024-12-13 19:13:59.660483] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state
00:29:27.955  [2024-12-13 19:13:59.660491] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed
00:29:27.955  [2024-12-13 19:13:59.660498] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state.
00:29:27.955  [2024-12-13 19:13:59.660520] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected.
00:29:27.955  [2024-12-13 19:13:59.660524] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets.
00:29:27.955  [2024-12-13 19:13:59.660543] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed.
00:29:27.955  [2024-12-13 19:13:59.662790] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:29:27.955  [2024-12-13 19:13:59.662808] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:29:27.955  [2024-12-13 19:13:59.662813] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:29:27.955  [2024-12-13 19:13:59.662817] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:29:27.955  [2024-12-13 19:13:59.662834] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:29:27.955  [2024-12-13 19:13:59.662873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.955  [2024-12-13 19:13:59.662889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6b160 with addr=10.0.0.3, port=4420
00:29:27.955  [2024-12-13 19:13:59.662898] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6b160 is same with the state(6) to be set
00:29:27.956  [2024-12-13 19:13:59.662910] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6b160 (9): Bad file descriptor
00:29:27.956  [2024-12-13 19:13:59.662922] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:29:27.956  [2024-12-13 19:13:59.662928] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:29:27.956  [2024-12-13 19:13:59.662935] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:29:27.956  [2024-12-13 19:13:59.662942] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:29:27.956  [2024-12-13 19:13:59.662946] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:29:27.956  [2024-12-13 19:13:59.662950] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:29:27.956  [2024-12-13 19:13:59.669487] bdev_nvme.c:7303:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found
00:29:27.956  [2024-12-13 19:13:59.669512] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again
00:29:27.956  [2024-12-13 19:13:59.669529] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command
00:29:27.956  [2024-12-13 19:13:59.669558] bdev_nvme.c:7303:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 not found
00:29:27.956  [2024-12-13 19:13:59.669571] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again
00:29:27.956  [2024-12-13 19:13:59.669582] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command
00:29:27.956  [2024-12-13 19:13:59.755545] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again
00:29:27.956  [2024-12-13 19:13:59.755629] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again
00:29:28.889    19:14:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # get_subsystem_names
00:29:28.889    19:14:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:29:28.889    19:14:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name'
00:29:28.889    19:14:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:28.889    19:14:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:28.889    19:14:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort
00:29:28.889    19:14:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs
00:29:28.889    19:14:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:28.889   19:14:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]]
00:29:28.889    19:14:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@200 -- # get_bdev_list
00:29:28.889    19:14:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:29:28.889    19:14:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:28.889    19:14:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name'
00:29:28.889    19:14:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:28.889    19:14:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs
00:29:28.889    19:14:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort
00:29:28.889    19:14:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:28.889   19:14:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@200 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]]
00:29:28.889    19:14:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@201 -- # get_subsystem_paths mdns0_nvme0
00:29:28.889    19:14:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0
00:29:28.889    19:14:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:28.889    19:14:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:29:28.889    19:14:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:28.889    19:14:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs
00:29:28.889    19:14:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n
00:29:28.889    19:14:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:29.149   19:14:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@201 -- # [[ 4421 == \4\4\2\1 ]]
00:29:29.149    19:14:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # get_subsystem_paths mdns1_nvme0
00:29:29.149    19:14:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0
00:29:29.149    19:14:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:29.149    19:14:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:29:29.149    19:14:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:29.149    19:14:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n
00:29:29.149    19:14:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs
00:29:29.149    19:14:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:29.149   19:14:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # [[ 4421 == \4\4\2\1 ]]
00:29:29.149   19:14:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@203 -- # get_notification_count
00:29:29.149    19:14:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4
00:29:29.149    19:14:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:29.149    19:14:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:29.149    19:14:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. | length'
00:29:29.149    19:14:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:29.149   19:14:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=0
00:29:29.149   19:14:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=4
00:29:29.149   19:14:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@204 -- # [[ 0 == 0 ]]
00:29:29.149   19:14:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@206 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns
00:29:29.149   19:14:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:29.149   19:14:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:29.149   19:14:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:29.149   19:14:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@207 -- # sleep 1
00:29:29.149  [2024-12-13 19:14:00.865185] bdev_mdns_client.c: 425:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp
00:29:30.084    19:14:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@209 -- # get_mdns_discovery_svcs
00:29:30.084    19:14:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info
00:29:30.084    19:14:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:30.084    19:14:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:30.084    19:14:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name'
00:29:30.084    19:14:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort
00:29:30.084    19:14:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs
00:29:30.084    19:14:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:30.084   19:14:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@209 -- # [[ '' == '' ]]
00:29:30.084    19:14:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@210 -- # get_subsystem_names
00:29:30.084    19:14:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:29:30.084    19:14:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name'
00:29:30.084    19:14:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:30.084    19:14:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:30.084    19:14:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort
00:29:30.084    19:14:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs
00:29:30.343    19:14:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:30.343   19:14:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@210 -- # [[ '' == '' ]]
00:29:30.343    19:14:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@211 -- # get_bdev_list
00:29:30.343    19:14:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:29:30.343    19:14:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name'
00:29:30.343    19:14:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:30.343    19:14:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort
00:29:30.343    19:14:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:30.344    19:14:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs
00:29:30.344    19:14:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:30.344   19:14:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@211 -- # [[ '' == '' ]]
00:29:30.344   19:14:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@212 -- # get_notification_count
00:29:30.344    19:14:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4
00:29:30.344    19:14:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. | length'
00:29:30.344    19:14:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:30.344    19:14:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:30.344    19:14:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:30.344   19:14:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=4
00:29:30.344   19:14:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=8
00:29:30.344   19:14:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@213 -- # [[ 4 == 4 ]]
00:29:30.344   19:14:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@216 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
00:29:30.344   19:14:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:30.344   19:14:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:30.344   19:14:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:30.344   19:14:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@217 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test
00:29:30.344   19:14:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@652 -- # local es=0
00:29:30.344   19:14:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test
00:29:30.344   19:14:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:29:30.344   19:14:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:29:30.344    19:14:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:29:30.344   19:14:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:29:30.344   19:14:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test
00:29:30.344   19:14:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:30.344   19:14:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:30.344  [2024-12-13 19:14:02.082230] bdev_mdns_client.c: 471:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns
00:29:30.344  2024/12/13 19:14:02 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists
00:29:30.344  request:
00:29:30.344  {
00:29:30.344  "method": "bdev_nvme_start_mdns_discovery",
00:29:30.344  "params": {
00:29:30.344  "name": "mdns",
00:29:30.344  "svcname": "_nvme-disc._http",
00:29:30.344  "hostnqn": "nqn.2021-12.io.spdk:test"
00:29:30.344  }
00:29:30.344  }
00:29:30.344  Got JSON-RPC error response
00:29:30.344  GoRPCClient: error on JSON-RPC call
00:29:30.344   19:14:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:29:30.344   19:14:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@655 -- # es=1
00:29:30.344   19:14:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:29:30.344   19:14:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:29:30.344   19:14:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:29:30.344   19:14:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@218 -- # sleep 5
00:29:30.911  [2024-12-13 19:14:02.670798] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED
00:29:31.169  [2024-12-13 19:14:02.770795] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW
00:29:31.169  [2024-12-13 19:14:02.870801] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local'
00:29:31.169  [2024-12-13 19:14:02.870822] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: 	fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4)
00:29:31.169  	TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:29:31.169  	cookie is 0
00:29:31.169  	is_local: 1
00:29:31.169  	our_own: 0
00:29:31.169  	wide_area: 0
00:29:31.169  	multicast: 1
00:29:31.169  	cached: 1
00:29:31.169  [2024-12-13 19:14:02.970811] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local'
00:29:31.169  [2024-12-13 19:14:02.970833] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: 	fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4)
00:29:31.169  	TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:29:31.169  	cookie is 0
00:29:31.169  	is_local: 1
00:29:31.169  	our_own: 0
00:29:31.169  	wide_area: 0
00:29:31.169  	multicast: 1
00:29:31.169  	cached: 1
00:29:31.169  [2024-12-13 19:14:02.970842] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.4 trid->trsvcid: 8009
00:29:31.428  [2024-12-13 19:14:03.070803] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local'
00:29:31.428  [2024-12-13 19:14:03.070825] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: 	fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3)
00:29:31.428  	TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:29:31.428  	cookie is 0
00:29:31.428  	is_local: 1
00:29:31.428  	our_own: 0
00:29:31.428  	wide_area: 0
00:29:31.428  	multicast: 1
00:29:31.428  	cached: 1
00:29:31.428  [2024-12-13 19:14:03.170801] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local'
00:29:31.428  [2024-12-13 19:14:03.170822] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: 	fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3)
00:29:31.428  	TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:29:31.428  	cookie is 0
00:29:31.428  	is_local: 1
00:29:31.428  	our_own: 0
00:29:31.428  	wide_area: 0
00:29:31.428  	multicast: 1
00:29:31.428  	cached: 1
00:29:31.428  [2024-12-13 19:14:03.170847] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.3 trid->trsvcid: 8009
00:29:32.364  [2024-12-13 19:14:03.882033] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr attached
00:29:32.364  [2024-12-13 19:14:03.882058] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr connected
00:29:32.364  [2024-12-13 19:14:03.882074] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command
00:29:32.364  [2024-12-13 19:14:03.968114] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 new subsystem mdns0_nvme0
00:29:32.364  [2024-12-13 19:14:04.026389] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 3] ctrlr was created to 10.0.0.4:4421
00:29:32.364  [2024-12-13 19:14:04.026998] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode20, 3] Connecting qpair 0xc7cd30:1 started.
00:29:32.364  [2024-12-13 19:14:04.028659] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.4:8009] attach mdns0_nvme0 done
00:29:32.364  [2024-12-13 19:14:04.028699] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again
00:29:32.364  [2024-12-13 19:14:04.030636] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode20, 3] qpair 0xc7cd30 was disconnected and freed. delete nvme_qpair.
00:29:32.364  [2024-12-13 19:14:04.081706] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached
00:29:32.364  [2024-12-13 19:14:04.081745] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected
00:29:32.364  [2024-12-13 19:14:04.081776] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command
00:29:32.364  [2024-12-13 19:14:04.167794] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem mdns1_nvme0
00:29:32.623  [2024-12-13 19:14:04.226040] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421
00:29:32.623  [2024-12-13 19:14:04.226519] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0xca9d20:1 started.
00:29:32.623  [2024-12-13 19:14:04.227800] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns1_nvme0 done
00:29:32.623  [2024-12-13 19:14:04.227823] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again
00:29:32.623  [2024-12-13 19:14:04.230360] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0xca9d20 was disconnected and freed. delete nvme_qpair.
00:29:35.963    19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@220 -- # get_mdns_discovery_svcs
00:29:35.963    19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info
00:29:35.963    19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name'
00:29:35.963    19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:35.963    19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:35.963    19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort
00:29:35.963    19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs
00:29:35.963    19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:35.963   19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@220 -- # [[ mdns == \m\d\n\s ]]
00:29:35.963    19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@221 -- # get_discovery_ctrlrs
00:29:35.963    19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name'
00:29:35.963    19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:29:35.963    19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:35.963    19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort
00:29:35.963    19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:35.963    19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs
00:29:35.963    19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:35.963   19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@221 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]]
00:29:35.963    19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@222 -- # get_bdev_list
00:29:35.963    19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:29:35.963    19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:35.963    19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort
00:29:35.963    19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:35.963    19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name'
00:29:35.963    19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs
00:29:35.963    19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:35.963   19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@222 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]]
00:29:35.963   19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@225 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
00:29:35.963   19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@652 -- # local es=0
00:29:35.963   19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
00:29:35.963   19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:29:35.963   19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:29:35.963    19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:29:35.963   19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:29:35.963   19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
00:29:35.963   19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:35.963   19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:35.963  [2024-12-13 19:14:07.277418] bdev_mdns_client.c: 476:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp
00:29:35.963  2024/12/13 19:14:07 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists
00:29:35.963  request:
00:29:35.963  {
00:29:35.963  "method": "bdev_nvme_start_mdns_discovery",
00:29:35.963  "params": {
00:29:35.963  "name": "cdc",
00:29:35.963  "svcname": "_nvme-disc._tcp",
00:29:35.963  "hostnqn": "nqn.2021-12.io.spdk:test"
00:29:35.963  }
00:29:35.963  }
00:29:35.963  Got JSON-RPC error response
00:29:35.963  GoRPCClient: error on JSON-RPC call
00:29:35.963   19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:29:35.963   19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@655 -- # es=1
00:29:35.963   19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:29:35.963   19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:29:35.963   19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:29:35.963    19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@226 -- # get_discovery_ctrlrs
00:29:35.963    19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:29:35.963    19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name'
00:29:35.963    19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:35.963    19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:35.963    19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort
00:29:35.963    19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs
00:29:35.963    19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:35.963   19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@226 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]]
00:29:35.963    19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@227 -- # get_bdev_list
00:29:35.963    19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name'
00:29:35.963    19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:29:35.963    19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort
00:29:35.963    19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:35.963    19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs
00:29:35.963    19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:35.963    19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:35.963   19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@227 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]]
00:29:35.963   19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@228 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns
00:29:35.963   19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:35.963   19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:35.963   19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:35.963   19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@231 -- # check_mdns_request_exists spdk1 10.0.0.3 8009 found
00:29:35.963   19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1
00:29:35.963   19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.3
00:29:35.963   19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009
00:29:35.963   19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local check_type=found
00:29:35.963   19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output
00:29:35.963    19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p
00:29:35.963   19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk1;_nvme-disc._tcp;local
00:29:35.963  +;(null);IPv4;spdk0;_nvme-disc._tcp;local
00:29:35.963  +;(null);IPv4;spdk1;_nvme-disc._tcp;local
00:29:35.963  +;(null);IPv4;spdk0;_nvme-disc._tcp;local
00:29:35.963  =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:29:35.963  =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:29:35.963  =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:29:35.963  =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"'
00:29:35.963   19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines
00:29:35.963   19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:29:35.963   19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]]
00:29:35.963   19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\3* ]]
00:29:35.963   19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:29:35.963   19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]]
00:29:35.963   19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:29:35.963   19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]]
00:29:35.963   19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\3* ]]
00:29:35.963   19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:29:35.964   19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]]
00:29:35.964   19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:29:35.964   19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]]
00:29:35.964   19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\1\0\.\0\.\0\.\3* ]]
00:29:35.964   19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:29:35.964   19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]]
00:29:35.964   19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:29:35.964   19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]]
00:29:35.964   19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\1\0\.\0\.\0\.\3* ]]
00:29:35.964   19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\8\0\0\9* ]]
00:29:35.964   19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # [[ found == \f\o\u\n\d ]]
00:29:35.964   19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@98 -- # return 0
00:29:35.964   19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@232 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009
00:29:35.964   19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:35.964   19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:35.964   19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:35.964   19:14:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@234 -- # sleep 1
00:29:35.964  [2024-12-13 19:14:07.470800] bdev_mdns_client.c: 425:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp
00:29:36.900   19:14:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@236 -- # check_mdns_request_exists spdk1 10.0.0.3 8009 'not found'
00:29:36.900   19:14:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1
00:29:36.900   19:14:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.3
00:29:36.900   19:14:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009
00:29:36.900   19:14:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local 'check_type=not found'
00:29:36.900   19:14:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output
00:29:36.900    19:14:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p
00:29:36.900   19:14:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk0;_nvme-disc._tcp;local
00:29:36.900  +;(null);IPv4;spdk0;_nvme-disc._tcp;local
00:29:36.900  =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:29:36.900  =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"'
00:29:36.900   19:14:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines
00:29:36.900   19:14:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:29:36.900   19:14:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]]
00:29:36.900   19:14:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:29:36.900   19:14:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]]
00:29:36.900   19:14:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:29:36.900   19:14:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]]
00:29:36.900   19:14:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:29:36.900   19:14:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]]
00:29:36.900   19:14:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # [[ not found == \f\o\u\n\d ]]
00:29:36.900   19:14:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@108 -- # return 0
00:29:36.900   19:14:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@238 -- # rpc_cmd nvmf_stop_mdns_prr
00:29:36.900   19:14:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:36.900   19:14:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:36.900   19:14:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:36.900   19:14:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@240 -- # trap - SIGINT SIGTERM EXIT
00:29:36.900   19:14:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@242 -- # kill 116932
00:29:36.900   19:14:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@245 -- # wait 116932
00:29:36.900   19:14:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@246 -- # kill 116947
00:29:36.900  Got SIGTERM, quitting. 19:14:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@247 -- # nvmftestfini
00:29:36.900  
00:29:36.900   19:14:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@516 -- # nvmfcleanup
00:29:36.900   19:14:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@121 -- # sync
00:29:36.900  Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.4.
00:29:36.900  Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.3.
00:29:36.900  avahi-daemon 0.8 exiting.
00:29:36.900   19:14:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:29:36.900   19:14:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@124 -- # set +e
00:29:36.900   19:14:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@125 -- # for i in {1..20}
00:29:36.900   19:14:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:29:36.900  rmmod nvme_tcp
00:29:37.159  rmmod nvme_fabrics
00:29:37.159  rmmod nvme_keyring
00:29:37.159   19:14:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:29:37.159   19:14:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@128 -- # set -e
00:29:37.159   19:14:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@129 -- # return 0
00:29:37.159   19:14:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@517 -- # '[' -n 116882 ']'
00:29:37.159   19:14:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@518 -- # killprocess 116882
00:29:37.159   19:14:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@954 -- # '[' -z 116882 ']'
00:29:37.159   19:14:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@958 -- # kill -0 116882
00:29:37.159    19:14:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@959 -- # uname
00:29:37.159   19:14:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:37.159    19:14:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 116882
00:29:37.159  killing process with pid 116882
00:29:37.159   19:14:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:29:37.159   19:14:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:29:37.159   19:14:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 116882'
00:29:37.159   19:14:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@973 -- # kill 116882
00:29:37.159   19:14:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@978 -- # wait 116882
00:29:37.159   19:14:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:29:37.159   19:14:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:29:37.159   19:14:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:29:37.159   19:14:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@297 -- # iptr
00:29:37.159   19:14:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@791 -- # iptables-save
00:29:37.159   19:14:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:29:37.159   19:14:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@791 -- # iptables-restore
00:29:37.159   19:14:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:29:37.159   19:14:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:29:37.159   19:14:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:29:37.417   19:14:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:29:37.417   19:14:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:29:37.417   19:14:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:29:37.417   19:14:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:29:37.417   19:14:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:29:37.417   19:14:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:29:37.417   19:14:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:29:37.417   19:14:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:29:37.417   19:14:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:29:37.417   19:14:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:29:37.417   19:14:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:29:37.417   19:14:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:29:37.417   19:14:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns
00:29:37.417   19:14:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:37.417   19:14:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:29:37.417    19:14:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:37.417   19:14:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@300 -- # return 0
00:29:37.417  
00:29:37.417  real	0m22.427s
00:29:37.417  user	0m43.200s
00:29:37.417  sys	0m2.189s
00:29:37.417  ************************************
00:29:37.417  END TEST nvmf_mdns_discovery
00:29:37.417  ************************************
00:29:37.417   19:14:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable
00:29:37.417   19:14:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:37.677   19:14:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]]
00:29:37.677   19:14:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp
00:29:37.677   19:14:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:29:37.677   19:14:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:29:37.677   19:14:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:29:37.677  ************************************
00:29:37.677  START TEST nvmf_host_multipath
00:29:37.677  ************************************
00:29:37.677   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp
00:29:37.677  * Looking for test storage...
00:29:37.677  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
00:29:37.677    19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:29:37.677     19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # lcov --version
00:29:37.677     19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:29:37.677    19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:29:37.677    19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:29:37.677    19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l
00:29:37.677    19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l
00:29:37.677    19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-:
00:29:37.677    19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1
00:29:37.677    19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-:
00:29:37.677    19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2
00:29:37.677    19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<'
00:29:37.677    19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2
00:29:37.677    19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1
00:29:37.677    19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:29:37.677    19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in
00:29:37.677    19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1
00:29:37.677    19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 ))
00:29:37.677    19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:29:37.677     19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1
00:29:37.677     19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1
00:29:37.677     19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:29:37.677     19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1
00:29:37.677    19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1
00:29:37.677     19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2
00:29:37.677     19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2
00:29:37.677     19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:29:37.677     19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2
00:29:37.677    19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2
00:29:37.677    19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:29:37.677    19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:29:37.677    19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0
00:29:37.677    19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:29:37.677    19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:29:37.677  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:37.677  		--rc genhtml_branch_coverage=1
00:29:37.677  		--rc genhtml_function_coverage=1
00:29:37.677  		--rc genhtml_legend=1
00:29:37.677  		--rc geninfo_all_blocks=1
00:29:37.677  		--rc geninfo_unexecuted_blocks=1
00:29:37.677  		
00:29:37.677  		'
00:29:37.677    19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:29:37.677  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:37.677  		--rc genhtml_branch_coverage=1
00:29:37.677  		--rc genhtml_function_coverage=1
00:29:37.677  		--rc genhtml_legend=1
00:29:37.678  		--rc geninfo_all_blocks=1
00:29:37.678  		--rc geninfo_unexecuted_blocks=1
00:29:37.678  		
00:29:37.678  		'
00:29:37.678    19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:29:37.678  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:37.678  		--rc genhtml_branch_coverage=1
00:29:37.678  		--rc genhtml_function_coverage=1
00:29:37.678  		--rc genhtml_legend=1
00:29:37.678  		--rc geninfo_all_blocks=1
00:29:37.678  		--rc geninfo_unexecuted_blocks=1
00:29:37.678  		
00:29:37.678  		'
00:29:37.678    19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:29:37.678  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:37.678  		--rc genhtml_branch_coverage=1
00:29:37.678  		--rc genhtml_function_coverage=1
00:29:37.678  		--rc genhtml_legend=1
00:29:37.678  		--rc geninfo_all_blocks=1
00:29:37.678  		--rc geninfo_unexecuted_blocks=1
00:29:37.678  		
00:29:37.678  		'
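Editor's note: the traced lines above show scripts/common.sh comparing the installed lcov version against 2 before enabling branch/function coverage flags. A minimal standalone sketch of that comparison logic (an assumption-based reimplementation for illustration, not the actual SPDK helpers) looks like:

    # Sketch: "is version $1 older than $2" using dot/dash/colon-separated fields
    lt() {
        local IFS=.-:
        local -a v1 v2
        local i
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        done
        return 1    # equal versions are not "less than"
    }

    lcov_ver=$(lcov --version | awk '{print $NF}')   # e.g. "1.15", as in the trace
    if lt "$lcov_ver" 2; then
        LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi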
00:29:37.678   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:29:37.678     19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s
00:29:37.678    19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:29:37.678    19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:29:37.678    19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:29:37.678    19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:29:37.678    19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:29:37.678    19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:29:37.678    19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:29:37.678    19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:29:37.678    19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:29:37.678     19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:29:37.678    19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:29:37.678    19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:29:37.678    19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:29:37.678    19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:29:37.678    19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:29:37.678    19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:29:37.678    19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:29:37.678     19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob
00:29:37.678     19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:29:37.678     19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:29:37.678     19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:29:37.678      19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:37.678      19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:37.678      19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:37.678      19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH
00:29:37.678      19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
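Editor's note: the PATH values echoed above show the same toolchain directories (/opt/golangci, /opt/protoc, /opt/go) prepended many times, because paths/export.sh is sourced repeatedly and prepends unconditionally. This is harmless for the test, but a hedged sketch of how duplicates could be collapsed while keeping first-seen order is:

    # Sketch only; the logged paths/export.sh does not deduplicate.
    dedup_path() {
        local IFS=:
        local entry out=
        for entry in $PATH; do
            case ":$out:" in
                *":$entry:"*) ;;                 # already present, skip
                *) out=${out:+$out:}$entry ;;    # append in first-seen order
            esac
        done
        printf '%s\n' "$out"
    }
    PATH=$(dedup_path)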
00:29:37.678    19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0
00:29:37.678    19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:29:37.678    19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:29:37.678    19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:29:37.678    19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:29:37.678    19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:29:37.678    19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:29:37.678  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:29:37.678    19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:29:37.678    19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:29:37.678    19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0
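Editor's note: the "[: : integer expression expected" message above comes from nvmf/common.sh line 33 running '[' '' -eq 1 ']', i.e. a numeric test against an empty variable. A minimal sketch of the usual guard (the variable name below is hypothetical; the log does not show which flag is empty) would be:

    # Default the flag before the numeric comparison so an empty value
    # does not trip "integer expression expected".
    : "${SPDK_TEST_NVMF_NICS_FLAG:=0}"          # hypothetical name for illustration
    if [ "${SPDK_TEST_NVMF_NICS_FLAG:-0}" -eq 1 ]; then
        echo "hardware NICs requested"
    fi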
00:29:37.678   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64
00:29:37.678   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:29:37.678   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:29:37.678   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh
00:29:37.678   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:29:37.678   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1
00:29:37.678   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit
00:29:37.678   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:29:37.678   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:29:37.678   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # prepare_net_devs
00:29:37.678   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no
00:29:37.678   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns
00:29:37.678   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:37.678   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:29:37.678    19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:37.678   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:29:37.678   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:29:37.678   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:29:37.678   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:29:37.678   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:29:37.678   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init
00:29:37.678   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:29:37.678   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:29:37.678   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:29:37.678   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:29:37.678   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:29:37.678   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:29:37.678   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:29:37.678   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:29:37.678   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:29:37.678   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:29:37.678   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:29:37.678   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:29:37.678   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:29:37.678   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:29:37.678   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:29:37.678   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:29:37.678   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:29:37.937  Cannot find device "nvmf_init_br"
00:29:37.937   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true
00:29:37.937   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:29:37.937  Cannot find device "nvmf_init_br2"
00:29:37.937   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true
00:29:37.937   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:29:37.937  Cannot find device "nvmf_tgt_br"
00:29:37.937   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true
00:29:37.937   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:29:37.937  Cannot find device "nvmf_tgt_br2"
00:29:37.937   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true
00:29:37.937   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:29:37.937  Cannot find device "nvmf_init_br"
00:29:37.937   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true
00:29:37.937   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:29:37.937  Cannot find device "nvmf_init_br2"
00:29:37.937   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true
00:29:37.937   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:29:37.937  Cannot find device "nvmf_tgt_br"
00:29:37.937   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true
00:29:37.937   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:29:37.937  Cannot find device "nvmf_tgt_br2"
00:29:37.937   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true
00:29:37.937   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:29:37.937  Cannot find device "nvmf_br"
00:29:37.937   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true
00:29:37.937   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:29:37.937  Cannot find device "nvmf_init_if"
00:29:37.937   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true
00:29:37.937   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:29:37.937  Cannot find device "nvmf_init_if2"
00:29:37.937   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true
00:29:37.937   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:29:37.937  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:29:37.937   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true
00:29:37.937   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:29:37.937  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:29:37.937   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true
00:29:37.937   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:29:37.937   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:29:37.937   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:29:37.937   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:29:37.937   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:29:37.937   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:29:37.937   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:29:37.937   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:29:37.937   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:29:37.937   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:29:37.937   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:29:37.937   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:29:37.937   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:29:37.938   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:29:37.938   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:29:37.938   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:29:37.938   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:29:37.938   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:29:38.198   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:29:38.198   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:29:38.198   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:29:38.198   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:29:38.198   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:29:38.198   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:29:38.198   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:29:38.198   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:29:38.198   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:29:38.198   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:29:38.198   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:29:38.198   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:29:38.198   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:29:38.198   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
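Editor's note: the ipts calls above tag every rule with an "SPDK_NVMF:" comment so that the iptr cleanup seen at the end of the previous test can sweep them out in one pass. A simplified sketch of that tag-and-sweep pattern (condensed from the helpers traced in nvmf/common.sh):

    # Add a rule, tagged with the original arguments so it can be found later.
    ipts() {
        iptables "$@" -m comment --comment "SPDK_NVMF:$*"
    }
    # Drop every tagged rule by filtering the saved ruleset and restoring it.
    iptr() {
        iptables-save | grep -v SPDK_NVMF | iptables-restore
    }

    ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT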
00:29:38.198   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:29:38.199  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:29:38.199  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms
00:29:38.199  
00:29:38.199  --- 10.0.0.3 ping statistics ---
00:29:38.199  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:38.199  rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms
00:29:38.199   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:29:38.199  PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:29:38.199  64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms
00:29:38.199  
00:29:38.199  --- 10.0.0.4 ping statistics ---
00:29:38.199  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:38.199  rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms
00:29:38.199   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:29:38.199  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:29:38.199  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms
00:29:38.199  
00:29:38.199  --- 10.0.0.1 ping statistics ---
00:29:38.199  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:38.199  rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms
00:29:38.199   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:29:38.199  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:29:38.199  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms
00:29:38.199  
00:29:38.199  --- 10.0.0.2 ping statistics ---
00:29:38.199  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:38.199  rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms
00:29:38.199   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:29:38.199   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@461 -- # return 0
00:29:38.199   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:29:38.199   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:29:38.199   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:29:38.199   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:29:38.199   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:29:38.199   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:29:38.199   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp
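Editor's note: nvmf_veth_init above builds a small virtual topology: veth pairs whose target ends are moved into the nvmf_tgt_ns_spdk namespace, with the bridge-side ends enslaved to nvmf_br, then verified with pings. A condensed sketch with one initiator pair and one target pair (the log creates two of each, addresses 10.0.0.1-10.0.0.4/24):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    ping -c 1 10.0.0.3    # initiator -> target, as checked in the log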
00:29:38.199   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3
00:29:38.199   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:29:38.199   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable
00:29:38.199   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x
00:29:38.199   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # nvmfpid=117589
00:29:38.199   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # waitforlisten 117589
00:29:38.199   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:29:38.199   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 117589 ']'
00:29:38.200   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:38.200   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:38.200  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:38.200   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:38.200   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:38.200   19:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x
00:29:38.200  [2024-12-13 19:14:09.943877] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:29:38.200  [2024-12-13 19:14:09.943956] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:38.461  [2024-12-13 19:14:10.083010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:29:38.461  [2024-12-13 19:14:10.128073] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:38.461  [2024-12-13 19:14:10.128140] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:38.461  [2024-12-13 19:14:10.128151] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:38.461  [2024-12-13 19:14:10.128158] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:38.461  [2024-12-13 19:14:10.128164] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:38.461  [2024-12-13 19:14:10.129492] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:29:38.461  [2024-12-13 19:14:10.129507] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:29:38.461   19:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:38.461   19:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0
00:29:38.461   19:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:29:38.461   19:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@732 -- # xtrace_disable
00:29:38.461   19:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x
00:29:38.720   19:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:38.720   19:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=117589
00:29:38.720   19:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:29:38.979  [2024-12-13 19:14:10.619575] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:29:38.979   19:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:29:39.238  Malloc0
00:29:39.238   19:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
00:29:39.497   19:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:29:39.755   19:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:29:40.014  [2024-12-13 19:14:11.640794] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:29:40.014   19:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
00:29:40.273  [2024-12-13 19:14:11.852959] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 ***
00:29:40.273   19:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=117674
00:29:40.273   19:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90
00:29:40.273   19:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:29:40.273   19:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 117674 /var/tmp/bdevperf.sock
00:29:40.273   19:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 117674 ']'
00:29:40.273   19:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:29:40.273   19:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:40.273  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:29:40.273   19:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:29:40.273   19:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:40.273   19:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x
00:29:41.209   19:14:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:41.209   19:14:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0
00:29:41.209   19:14:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:29:41.467   19:14:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
00:29:41.725  Nvme0n1
00:29:41.984   19:14:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
00:29:42.243  Nvme0n1
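Editor's note: the two bdev_nvme_attach_controller calls above use the same -b Nvme0 name with -x multipath, so the second listener (port 4421) registers as an alternate path of the same Nvme0n1 bdev rather than a new controller. Condensed from the commands in the trace (rpc.py path shortened):

    rpc=/var/tmp/bdevperf.sock
    ./scripts/rpc.py -s "$rpc" bdev_nvme_set_options -r -1
    ./scripts/rpc.py -s "$rpc" bdev_nvme_attach_controller -b Nvme0 -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
    ./scripts/rpc.py -s "$rpc" bdev_nvme_attach_controller -b Nvme0 -t tcp \
        -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10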
00:29:42.243   19:14:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests
00:29:42.243   19:14:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1
00:29:43.178   19:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized
00:29:43.178   19:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
00:29:43.437   19:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized
00:29:43.695   19:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421
00:29:43.695   19:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=117761
00:29:43.695   19:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 117589 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt
00:29:43.695   19:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6
00:29:50.259    19:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1
00:29:50.259    19:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid'
00:29:50.259   19:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421
00:29:50.259   19:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:29:50.259  Attaching 4 probes...
00:29:50.259  @path[10.0.0.3, 4421]: 19178
00:29:50.259  @path[10.0.0.3, 4421]: 19235
00:29:50.259  @path[10.0.0.3, 4421]: 19173
00:29:50.259  @path[10.0.0.3, 4421]: 19059
00:29:50.259  @path[10.0.0.3, 4421]: 19461
00:29:50.259    19:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1
00:29:50.259    19:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}'
00:29:50.259    19:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p
00:29:50.259   19:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421
00:29:50.259   19:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]]
00:29:50.259   19:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]]
00:29:50.259   19:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 117761
00:29:50.259   19:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
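Editor's note: the confirm_io_on_port cycle above attaches a bpftrace probe to the target, lets I/O run for 6 seconds, then checks that both the listener reporting the expected ANA state and the port actually carrying I/O match the expected port. An assumed simplification of that flow (names and filters taken from the trace, bpftrace redirection inferred):

    state=$1 expected_port=$2
    # $nvmfapp_pid is the nvmf_tgt pid (117589 in this run)
    ./scripts/bpftrace.sh "$nvmfapp_pid" ./scripts/bpf/nvmf_path.bt &> trace.txt &
    dtrace_pid=$!
    sleep 6

    active_port=$(./scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
        | jq -r ".[] | select(.ana_states[0].ana_state==\"$state\") | .address.trsvcid")

    # trace.txt lines look like "@path[10.0.0.3, 4421]: 19178"
    port=$(cut -d ']' -f1 trace.txt \
        | awk '$1=="@path[10.0.0.3," {print $2}' \
        | sed -n 1p)

    [[ "$port" == "$expected_port" ]]
    [[ "$active_port" == "$expected_port" ]]
    kill "$dtrace_pid"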
00:29:50.259   19:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible
00:29:50.259   19:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
00:29:50.259   19:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible
00:29:50.518   19:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420
00:29:50.518   19:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=117893
00:29:50.518   19:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 117589 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt
00:29:50.518   19:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6
00:29:57.083    19:14:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1
00:29:57.083    19:14:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid'
00:29:57.083   19:14:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420
00:29:57.083   19:14:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:29:57.083  Attaching 4 probes...
00:29:57.083  @path[10.0.0.3, 4420]: 20640
00:29:57.083  @path[10.0.0.3, 4420]: 20604
00:29:57.083  @path[10.0.0.3, 4420]: 20618
00:29:57.083  @path[10.0.0.3, 4420]: 20333
00:29:57.083  @path[10.0.0.3, 4420]: 20767
00:29:57.083    19:14:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1
00:29:57.083    19:14:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p
00:29:57.083    19:14:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}'
00:29:57.083   19:14:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420
00:29:57.083   19:14:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]]
00:29:57.083   19:14:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]]
00:29:57.083   19:14:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 117893
00:29:57.083   19:14:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:29:57.083   19:14:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized
00:29:57.083   19:14:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible
00:29:57.083   19:14:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized
00:29:57.342   19:14:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421
00:29:57.342   19:14:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=118028
00:29:57.342   19:14:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 117589 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt
00:29:57.342   19:14:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6
00:30:03.907    19:14:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1
00:30:03.907    19:14:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid'
00:30:03.907   19:14:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421
00:30:03.907   19:14:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:30:03.907  Attaching 4 probes...
00:30:03.907  @path[10.0.0.3, 4421]: 15595
00:30:03.907  @path[10.0.0.3, 4421]: 19172
00:30:03.907  @path[10.0.0.3, 4421]: 18988
00:30:03.907  @path[10.0.0.3, 4421]: 19085
00:30:03.907  @path[10.0.0.3, 4421]: 18955
00:30:03.907    19:14:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1
00:30:03.907    19:14:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p
00:30:03.907    19:14:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}'
00:30:03.907   19:14:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421
00:30:03.907   19:14:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]]
00:30:03.907   19:14:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]]
00:30:03.907   19:14:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 118028
00:30:03.907   19:14:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:30:03.907   19:14:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible
00:30:03.907   19:14:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible
00:30:03.907   19:14:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible
00:30:04.166   19:14:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' ''
00:30:04.166   19:14:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 117589 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt
00:30:04.166   19:14:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=118154
00:30:04.166   19:14:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6
00:30:10.728    19:14:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1
00:30:10.728    19:14:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid'
00:30:10.728   19:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=
00:30:10.728   19:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:30:10.728  Attaching 4 probes...
00:30:10.728  
00:30:10.728  
00:30:10.728  
00:30:10.728  
00:30:10.728  
00:30:10.728    19:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}'
00:30:10.728    19:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1
00:30:10.728    19:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p
00:30:10.728   19:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=
00:30:10.728   19:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]]
00:30:10.728   19:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]]
00:30:10.728   19:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 118154
00:30:10.728   19:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:30:10.728   19:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized
00:30:10.728   19:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
00:30:10.728   19:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized
00:30:10.728   19:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421
00:30:10.728   19:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 117589 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt
00:30:10.728   19:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=118285
00:30:10.728   19:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6
00:30:17.293    19:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1
00:30:17.293    19:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid'
00:30:17.293   19:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421
00:30:17.293   19:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:30:17.293  Attaching 4 probes...
00:30:17.293  @path[10.0.0.3, 4421]: 18486
00:30:17.293  @path[10.0.0.3, 4421]: 18818
00:30:17.293  @path[10.0.0.3, 4421]: 18679
00:30:17.293  @path[10.0.0.3, 4421]: 18934
00:30:17.293  @path[10.0.0.3, 4421]: 18805
00:30:17.293    19:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1
00:30:17.293    19:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p
00:30:17.293    19:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}'
00:30:17.293   19:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421
00:30:17.293   19:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]]
00:30:17.293   19:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]]
00:30:17.293   19:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 118285
00:30:17.293   19:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:30:17.293   19:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
00:30:17.293  [2024-12-13 19:14:49.075683] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23640b0 is same with the state(6) to be set
00:30:17.293  [2024-12-13 19:14:49.075735] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23640b0 is same with the state(6) to be set
00:30:17.293  [2024-12-13 19:14:49.075765] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23640b0 is same with the state(6) to be set
00:30:17.293  [2024-12-13 19:14:49.075775] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23640b0 is same with the state(6) to be set
00:30:17.293  [2024-12-13 19:14:49.075783] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23640b0 is same with the state(6) to be set
00:30:17.293  [2024-12-13 19:14:49.075791] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23640b0 is same with the state(6) to be set
00:30:17.293  [2024-12-13 19:14:49.075800] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23640b0 is same with the state(6) to be set
00:30:17.293  [2024-12-13 19:14:49.075808] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23640b0 is same with the state(6) to be set
00:30:17.293  [2024-12-13 19:14:49.075816] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23640b0 is same with the state(6) to be set
[... the recv-state error above for tqpair=0x23640b0 repeats 49 more times with identical text, timestamps 19:14:49.075824 through 19:14:49.076215 ...]
00:30:17.294   19:14:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1
00:30:18.670   19:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420
00:30:18.670   19:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=118415
00:30:18.671   19:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 117589 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt
00:30:18.671   19:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6
00:30:25.233    19:14:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1
00:30:25.233    19:14:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid'
00:30:25.233   19:14:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420
00:30:25.233   19:14:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:30:25.233  Attaching 4 probes...
00:30:25.233  @path[10.0.0.3, 4420]: 19672
00:30:25.233  @path[10.0.0.3, 4420]: 20104
00:30:25.233  @path[10.0.0.3, 4420]: 20298
00:30:25.233  @path[10.0.0.3, 4420]: 19885
00:30:25.233  @path[10.0.0.3, 4420]: 19788
00:30:25.233    19:14:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1
00:30:25.233    19:14:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}'
00:30:25.233    19:14:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p
00:30:25.233   19:14:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420
00:30:25.233   19:14:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]]
00:30:25.233   19:14:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]]
00:30:25.233   19:14:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 118415
00:30:25.233   19:14:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
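[editor's note] The multipath.sh@69 step above pulls the active port out of the bpftrace @path histogram in trace.txt. A minimal bash sketch of that extraction, fed with sample @path lines copied from this log (the temp file name and the exact ordering of the pipeline stages are assumptions; the script may compose the same cut/awk/sed commands differently):

    # Reproduce the port extraction from a trace.txt-style bpftrace dump.
    printf '%s\n' 'Attaching 4 probes...' \
        '@path[10.0.0.3, 4420]: 19672' \
        '@path[10.0.0.3, 4420]: 20104' > /tmp/trace_sample.txt

    port=$(cut -d ']' -f1 /tmp/trace_sample.txt \
        | awk '$1=="@path[10.0.0.3," {print $2}' \
        | sed -n 1p)                                   # -> 4420 (first sample only)
    [[ $port == 4420 ]] && echo "I/O confirmed on port $port"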
00:30:25.233   19:14:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
00:30:25.233  [2024-12-13 19:14:56.687039] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 ***
00:30:25.233   19:14:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized
00:30:25.233   19:14:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6
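[editor's note] The two RPCs above bring up a second listener on port 4421 and mark it optimized, so I/O should migrate away from the non_optimized 4420 path. A sketch of that failover step as standalone commands (the rpc.py path is the workspace-specific one from this run; the listener JSON fields referenced are only those implied by the jq filter the test itself uses):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    # 1) expose a second TCP listener for the same subsystem
    "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.3 -s 4421
    # 2) advertise it as the optimized ANA path
    "$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.3 -s 4421 -n optimized
    # 3) ask which listener currently reports the requested ANA state
    "$rpc" nvmf_subsystem_get_listeners "$nqn" \
        | jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid'   # -> 4421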
00:30:31.798   19:15:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421
00:30:31.798   19:15:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=118608
00:30:31.798   19:15:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 117589 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt
00:30:31.798   19:15:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6
00:30:38.371    19:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1
00:30:38.372    19:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid'
00:30:38.372   19:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421
00:30:38.372   19:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:30:38.372  Attaching 4 probes...
00:30:38.372  @path[10.0.0.3, 4421]: 17750
00:30:38.372  @path[10.0.0.3, 4421]: 17953
00:30:38.372  @path[10.0.0.3, 4421]: 18167
00:30:38.372  @path[10.0.0.3, 4421]: 18030
00:30:38.372  @path[10.0.0.3, 4421]: 17987
00:30:38.372    19:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1
00:30:38.372    19:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p
00:30:38.372    19:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}'
00:30:38.372   19:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421
00:30:38.372   19:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]]
00:30:38.372   19:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]]
00:30:38.372   19:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 118608
00:30:38.372   19:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
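[editor's note] Putting the pieces together, the confirm_io_on_port flow traced twice above (multipath.sh@64-73) amounts to: start the bpftrace I/O-path probe against the traced SPDK process (pid 117589 in this run), wait, read back which listener holds the expected ANA state, extract the port the I/O actually hit from trace.txt, compare, then tear the probe down. A simplified sketch of that flow (variable names, the trace redirection, and the use of $! for the probe pid are assumptions here; in the real script the pid is reported by the bpftrace.sh wrapper itself):

    confirm_io_on_port() {                       # sketch only, not the SPDK helper
        local ana_state=$1 expected_port=$2
        "$rootdir/scripts/bpftrace.sh" "$traced_pid" \
            "$rootdir/scripts/bpf/nvmf_path.bt" > trace.txt &
        local dtrace_pid=$!
        sleep 6
        local active_port port
        active_port=$("$rootdir/scripts/rpc.py" nvmf_subsystem_get_listeners "$nqn" \
            | jq -r ".[] | select (.ana_states[0].ana_state==\"$ana_state\") | .address.trsvcid")
        port=$(cut -d ']' -f1 trace.txt | awk '$1=="@path[10.0.0.3," {print $2}' | sed -n 1p)
        kill "$dtrace_pid"
        rm -f trace.txt
        [[ $port == "$expected_port" ]] && [[ $port == "$active_port" ]]
    }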
00:30:38.372   19:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 117674
00:30:38.372   19:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 117674 ']'
00:30:38.372   19:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 117674
00:30:38.372    19:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname
00:30:38.372   19:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:30:38.372    19:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 117674
00:30:38.372  killing process with pid 117674
00:30:38.372   19:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:30:38.372   19:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:30:38.372   19:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 117674'
00:30:38.372   19:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 117674
00:30:38.372   19:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 117674
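[editor's note] The killprocess call traced above (autotest_common.sh@954-978) guards the kill: it requires a non-empty pid, confirms the process is still alive with kill -0, resolves the command name on Linux via ps, refuses to signal a sudo wrapper directly, and only then kills and waits. A condensed sketch of that guard (the real helper handles the sudo case differently rather than simply bailing out):

    killprocess() {                                 # sketch, mirrors the trace above
        local pid=$1
        [[ -n $pid ]] || return 1                   # the '[' -z ... ']' check
        kill -0 "$pid" || return 1                  # target must still be running
        local process_name=unknown
        [[ $(uname) == Linux ]] && process_name=$(ps --no-headers -o comm= "$pid")
        [[ $process_name == sudo ]] && return 1     # never signal a sudo wrapper directly
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }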
00:30:38.372  {
00:30:38.372    "results": [
00:30:38.372      {
00:30:38.372        "job": "Nvme0n1",
00:30:38.372        "core_mask": "0x4",
00:30:38.372        "workload": "verify",
00:30:38.372        "status": "terminated",
00:30:38.372        "verify_range": {
00:30:38.372          "start": 0,
00:30:38.372          "length": 16384
00:30:38.372        },
00:30:38.372        "queue_depth": 128,
00:30:38.372        "io_size": 4096,
00:30:38.372        "runtime": 55.341148,
00:30:38.372        "iops": 8179.140049642628,
00:30:38.372        "mibps": 31.949765818916514,
00:30:38.372        "io_failed": 0,
00:30:38.372        "io_timeout": 0,
00:30:38.372        "avg_latency_us": 15624.095620369493,
00:30:38.372        "min_latency_us": 1705.4254545454546,
00:30:38.372        "max_latency_us": 7046430.72
00:30:38.372      }
00:30:38.372    ],
00:30:38.372    "core_count": 1
00:30:38.372  }
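[editor's note] The terminated bdevperf summary above is internally consistent: throughput in MiB/s is just iops * io_size / 2^20. A quick check with the reported numbers:

    awk 'BEGIN {
        iops = 8179.140049642628; io_size = 4096                 # from the results block
        printf "%.6f MiB/s\n", iops * io_size / (1024 * 1024)    # ~31.949766, as reported
    }'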
00:30:38.372   19:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 117674
00:30:38.372   19:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:30:38.372  [2024-12-13 19:14:11.923198] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:30:38.372  [2024-12-13 19:14:11.923310] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117674 ]
00:30:38.372  [2024-12-13 19:14:12.060629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:38.372  [2024-12-13 19:14:12.105194] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
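[editor's note] The startup lines tie together: bdevperf was launched with -c 0x4, the results block reports "core_mask": "0x4", and the reactor log says core 2, because bit 2 is the only bit set in 0x4. A one-liner to confirm which core a single-bit mask selects:

    mask=0x4 core=0
    while (( (mask & 1) == 0 )); do mask=$(( mask >> 1 )); core=$(( core + 1 )); done
    echo "core_mask 0x4 -> reactor core $core"      # prints core 2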
00:30:38.372  Running I/O for 90 seconds...
00:30:38.372      10694.00 IOPS,    41.77 MiB/s
[2024-12-13T19:15:10.196Z]     10240.00 IOPS,    40.00 MiB/s
[2024-12-13T19:15:10.196Z]     10101.67 IOPS,    39.46 MiB/s
[2024-12-13T19:15:10.196Z]      9967.50 IOPS,    38.94 MiB/s
[2024-12-13T19:15:10.196Z]      9902.00 IOPS,    38.68 MiB/s
[2024-12-13T19:15:10.196Z]      9835.33 IOPS,    38.42 MiB/s
[2024-12-13T19:15:10.196Z]      9824.71 IOPS,    38.38 MiB/s
[2024-12-13T19:15:10.196Z]      9772.25 IOPS,    38.17 MiB/s
[2024-12-13T19:15:10.196Z] [2024-12-13 19:14:22.154687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:111872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.372  [2024-12-13 19:14:22.154738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:30:38.372  [2024-12-13 19:14:22.154803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:111880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.372  [2024-12-13 19:14:22.154822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:30:38.372  [2024-12-13 19:14:22.154841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:111888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.372  [2024-12-13 19:14:22.154855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:30:38.372  [2024-12-13 19:14:22.154874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:111896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.372  [2024-12-13 19:14:22.154887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:30:38.372  [2024-12-13 19:14:22.154905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:111904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.372  [2024-12-13 19:14:22.154918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:30:38.372  [2024-12-13 19:14:22.154937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:111912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.372  [2024-12-13 19:14:22.154950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:30:38.372  [2024-12-13 19:14:22.154968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:111920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.372  [2024-12-13 19:14:22.154981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:30:38.372  [2024-12-13 19:14:22.154999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:111928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.372  [2024-12-13 19:14:22.155012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:30:38.372  [2024-12-13 19:14:22.155030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:111936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.372  [2024-12-13 19:14:22.155043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:30:38.372  [2024-12-13 19:14:22.155061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:111944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.372  [2024-12-13 19:14:22.155074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:38.372  [2024-12-13 19:14:22.155116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:111952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.372  [2024-12-13 19:14:22.155131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.372  [2024-12-13 19:14:22.155150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:111960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.372  [2024-12-13 19:14:22.155163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:38.372  [2024-12-13 19:14:22.155181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:111968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.372  [2024-12-13 19:14:22.155194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:30:38.372  [2024-12-13 19:14:22.155211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:111976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.372  [2024-12-13 19:14:22.155240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:30:38.372  [2024-12-13 19:14:22.155291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:111984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.372  [2024-12-13 19:14:22.155308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:30:38.372  [2024-12-13 19:14:22.155330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:111992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.372  [2024-12-13 19:14:22.155345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:30:38.372  [2024-12-13 19:14:22.156065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:112000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.372  [2024-12-13 19:14:22.156091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:30:38.372  [2024-12-13 19:14:22.156115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:112008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.372  [2024-12-13 19:14:22.156140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:30:38.372  [2024-12-13 19:14:22.156158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:112016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.372  [2024-12-13 19:14:22.156171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:30:38.372  [2024-12-13 19:14:22.156190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:112024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.372  [2024-12-13 19:14:22.156204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:30:38.372  [2024-12-13 19:14:22.156222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:112032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.372  [2024-12-13 19:14:22.156267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:30:38.372  [2024-12-13 19:14:22.156288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:112040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.372  [2024-12-13 19:14:22.156303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:30:38.373  [2024-12-13 19:14:22.156355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:112048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.373  [2024-12-13 19:14:22.156372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:30:38.373  [2024-12-13 19:14:22.156393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.373  [2024-12-13 19:14:22.156408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:30:38.373  [2024-12-13 19:14:22.156430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:112064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.373  [2024-12-13 19:14:22.156445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:30:38.373  [2024-12-13 19:14:22.156466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:112072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.373  [2024-12-13 19:14:22.156481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:30:38.373  [2024-12-13 19:14:22.156502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:112080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.373  [2024-12-13 19:14:22.156517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:30:38.373  [2024-12-13 19:14:22.156539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:112088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.373  [2024-12-13 19:14:22.156553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:30:38.373  [2024-12-13 19:14:22.156605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:112096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.373  [2024-12-13 19:14:22.156634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:30:38.373  [2024-12-13 19:14:22.156667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:112104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.373  [2024-12-13 19:14:22.156681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:30:38.373  [2024-12-13 19:14:22.156714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:112112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.373  [2024-12-13 19:14:22.156728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:30:38.373  [2024-12-13 19:14:22.156747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:112120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.373  [2024-12-13 19:14:22.156777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:30:38.373  [2024-12-13 19:14:22.157066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:112128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.373  [2024-12-13 19:14:22.157092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:30:38.373  [2024-12-13 19:14:22.157117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:112136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.373  [2024-12-13 19:14:22.157132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:30:38.373  [2024-12-13 19:14:22.157175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:112144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.373  [2024-12-13 19:14:22.157191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:30:38.373  [2024-12-13 19:14:22.157212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:112152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.373  [2024-12-13 19:14:22.157242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:30:38.373  [2024-12-13 19:14:22.157279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:112160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.373  [2024-12-13 19:14:22.157294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:30:38.373  [2024-12-13 19:14:22.157315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:112168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.373  [2024-12-13 19:14:22.157330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:30:38.373  [2024-12-13 19:14:22.157351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:112176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.373  [2024-12-13 19:14:22.157366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:30:38.373  [2024-12-13 19:14:22.157405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:112184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.373  [2024-12-13 19:14:22.157423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:30:38.373  [2024-12-13 19:14:22.157446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:111416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.373  [2024-12-13 19:14:22.157462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:30:38.373  [2024-12-13 19:14:22.157483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:111424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.373  [2024-12-13 19:14:22.157499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:30:38.373  [2024-12-13 19:14:22.157520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:111432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.373  [2024-12-13 19:14:22.157535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:30:38.373  [2024-12-13 19:14:22.157557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:111440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.373  [2024-12-13 19:14:22.157572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:38.373  [2024-12-13 19:14:22.157605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:111448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.373  [2024-12-13 19:14:22.157631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:30:38.373  [2024-12-13 19:14:22.157651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:111456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.373  [2024-12-13 19:14:22.157665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:30:38.373  [2024-12-13 19:14:22.157685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:111464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.373  [2024-12-13 19:14:22.157707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:30:38.373  [2024-12-13 19:14:22.157755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:111472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.373  [2024-12-13 19:14:22.157772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:30:38.373  [2024-12-13 19:14:22.157793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:111480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.373  [2024-12-13 19:14:22.157808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:30:38.373  [2024-12-13 19:14:22.157829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:111488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.373  [2024-12-13 19:14:22.157844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:30:38.373  [2024-12-13 19:14:22.157865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:111496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.373  [2024-12-13 19:14:22.157880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:30:38.373  [2024-12-13 19:14:22.157902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:111504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.373  [2024-12-13 19:14:22.157917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:30:38.373  [2024-12-13 19:14:22.157939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:111512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.373  [2024-12-13 19:14:22.157954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:30:38.373  [2024-12-13 19:14:22.157975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:111520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.373  [2024-12-13 19:14:22.157990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:30:38.373  [2024-12-13 19:14:22.158011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:111528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.373  [2024-12-13 19:14:22.158049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:30:38.373  [2024-12-13 19:14:22.158068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:111536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.373  [2024-12-13 19:14:22.158082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:30:38.373  [2024-12-13 19:14:22.158102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:111544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.373  [2024-12-13 19:14:22.158116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:30:38.373  [2024-12-13 19:14:22.158135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:111552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.373  [2024-12-13 19:14:22.158149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:30:38.373  [2024-12-13 19:14:22.158168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:111560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.373  [2024-12-13 19:14:22.158189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:30:38.373  [2024-12-13 19:14:22.158209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:111568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.373  [2024-12-13 19:14:22.158223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:30:38.373  [2024-12-13 19:14:22.158266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:111576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.373  [2024-12-13 19:14:22.158297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:30:38.373  [2024-12-13 19:14:22.158320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:111584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.374  [2024-12-13 19:14:22.158336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:30:38.374  [2024-12-13 19:14:22.158357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:111592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.374  [2024-12-13 19:14:22.158372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:30:38.374  [2024-12-13 19:14:22.158394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:111600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.374  [2024-12-13 19:14:22.158409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:30:38.374  [2024-12-13 19:14:22.158430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:111608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.374  [2024-12-13 19:14:22.158446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:30:38.374  [2024-12-13 19:14:22.158467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:111616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.374  [2024-12-13 19:14:22.158482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:30:38.374  [2024-12-13 19:14:22.158503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:111624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.374  [2024-12-13 19:14:22.158518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:30:38.374  [2024-12-13 19:14:22.158540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:111632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.374  [2024-12-13 19:14:22.158555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:30:38.374  [2024-12-13 19:14:22.158590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:111640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.374  [2024-12-13 19:14:22.158616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:30:38.374  [2024-12-13 19:14:22.158636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:111648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.374  [2024-12-13 19:14:22.158650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:30:38.374  [2024-12-13 19:14:22.158670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:111656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.374  [2024-12-13 19:14:22.158685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:30:38.374  [2024-12-13 19:14:22.158727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:111664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.374  [2024-12-13 19:14:22.158742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:30:38.374  [2024-12-13 19:14:22.158762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:111672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.374  [2024-12-13 19:14:22.158775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:30:38.374  [2024-12-13 19:14:22.158795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:111680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.374  [2024-12-13 19:14:22.158808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:30:38.374  [2024-12-13 19:14:22.158828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:111688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.374  [2024-12-13 19:14:22.158842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:30:38.374  [2024-12-13 19:14:22.158861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:111696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.374  [2024-12-13 19:14:22.158875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:30:38.374  [2024-12-13 19:14:22.158894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:111704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.374  [2024-12-13 19:14:22.158909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:30:38.374  [2024-12-13 19:14:22.158929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:111712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.374  [2024-12-13 19:14:22.158942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:30:38.374  [2024-12-13 19:14:22.158962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:111720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.374  [2024-12-13 19:14:22.158976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:30:38.374  [2024-12-13 19:14:22.158995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:111728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.374  [2024-12-13 19:14:22.159009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:30:38.374  [2024-12-13 19:14:22.159028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:111736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.374  [2024-12-13 19:14:22.159042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:30:38.374  [2024-12-13 19:14:22.160332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:112192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.374  [2024-12-13 19:14:22.160361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:30:38.374  [2024-12-13 19:14:22.160388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:112200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.374  [2024-12-13 19:14:22.160404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:30:38.374  [2024-12-13 19:14:22.160438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:112208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.374  [2024-12-13 19:14:22.160455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:30:38.374  [2024-12-13 19:14:22.160476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:112216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.374  [2024-12-13 19:14:22.160491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:30:38.374  [2024-12-13 19:14:22.160512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:112224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.374  [2024-12-13 19:14:22.160526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:30:38.374  [2024-12-13 19:14:22.160547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:112232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.374  [2024-12-13 19:14:22.160561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:30:38.374  [2024-12-13 19:14:22.160581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:112240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.374  [2024-12-13 19:14:22.160612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:30:38.374  [2024-12-13 19:14:22.160632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:112248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.374  [2024-12-13 19:14:22.160646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:30:38.374  [2024-12-13 19:14:22.160666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:112256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.374  [2024-12-13 19:14:22.160680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:30:38.374  [2024-12-13 19:14:22.160701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:112264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.374  [2024-12-13 19:14:22.160715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:30:38.374  [2024-12-13 19:14:22.160749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:112272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.374  [2024-12-13 19:14:22.160769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:30:38.374  [2024-12-13 19:14:22.160790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:112280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.374  [2024-12-13 19:14:22.160804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:30:38.374  [2024-12-13 19:14:22.160823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:112288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.374  [2024-12-13 19:14:22.160837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:30:38.374  [2024-12-13 19:14:22.160856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:112296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.374  [2024-12-13 19:14:22.160870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:30:38.374  [2024-12-13 19:14:22.160896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:112304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.374  [2024-12-13 19:14:22.160911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:30:38.374  [2024-12-13 19:14:22.160930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:112312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.374  [2024-12-13 19:14:22.160944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:30:38.374  [2024-12-13 19:14:22.160963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:112320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.374  [2024-12-13 19:14:22.160977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:30:38.374  [2024-12-13 19:14:22.160996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:112328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.374  [2024-12-13 19:14:22.161011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:30:38.374  [2024-12-13 19:14:22.161031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:112336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.374  [2024-12-13 19:14:22.161045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:30:38.374  [2024-12-13 19:14:22.161064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:112344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.374  [2024-12-13 19:14:22.161078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:30:38.375  [2024-12-13 19:14:22.161097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:112352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.375  [2024-12-13 19:14:22.161111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:30:38.375  [2024-12-13 19:14:22.161130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:112360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.375  [2024-12-13 19:14:22.161144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:30:38.375  [2024-12-13 19:14:22.161164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:112368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.375  [2024-12-13 19:14:22.161177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:30:38.375  [2024-12-13 19:14:22.161196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:112376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.375  [2024-12-13 19:14:22.161210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:30:38.375  [2024-12-13 19:14:22.161230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:111744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.375  [2024-12-13 19:14:22.161273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:30:38.375  [2024-12-13 19:14:22.161298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:111752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.375  [2024-12-13 19:14:22.161313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:30:38.375  [2024-12-13 19:14:22.161335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:111760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.375  [2024-12-13 19:14:22.161363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:30:38.375  [2024-12-13 19:14:22.161386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:111768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.375  [2024-12-13 19:14:22.161402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:30:38.375  [2024-12-13 19:14:22.161423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:111776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.375  [2024-12-13 19:14:22.161438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:30:38.375  [2024-12-13 19:14:22.161459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:111784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.375  [2024-12-13 19:14:22.161474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:30:38.375  [2024-12-13 19:14:22.161495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:111792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.375  [2024-12-13 19:14:22.161510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:30:38.375  [2024-12-13 19:14:22.161531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:111800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.375  [2024-12-13 19:14:22.161546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:30:38.375  [2024-12-13 19:14:22.161567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:111808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.375  [2024-12-13 19:14:22.161582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:30:38.375  [2024-12-13 19:14:22.161615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:111816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.375  [2024-12-13 19:14:22.161641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:30:38.375  [2024-12-13 19:14:22.161661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:111824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.375  [2024-12-13 19:14:22.161687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:30:38.375  [2024-12-13 19:14:22.161706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:111832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.375  [2024-12-13 19:14:22.161742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:30:38.375  [2024-12-13 19:14:22.161775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:111840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.375  [2024-12-13 19:14:22.161790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:30:38.375  [2024-12-13 19:14:22.161812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:111848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.375  [2024-12-13 19:14:22.161827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:30:38.375  [2024-12-13 19:14:22.161848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:111856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.375  [2024-12-13 19:14:22.161870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:30:38.375  [2024-12-13 19:14:22.161892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:111864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.375  [2024-12-13 19:14:22.161908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:30:38.375       9764.11 IOPS,    38.14 MiB/s
[2024-12-13T19:15:10.199Z]      9809.40 IOPS,    38.32 MiB/s
[2024-12-13T19:15:10.199Z]      9865.82 IOPS,    38.54 MiB/s
[2024-12-13T19:15:10.199Z]      9905.92 IOPS,    38.69 MiB/s
[2024-12-13T19:15:10.199Z]      9930.00 IOPS,    38.79 MiB/s
[2024-12-13T19:15:10.199Z]      9955.00 IOPS,    38.89 MiB/s
[2024-12-13T19:15:10.199Z] [2024-12-13 19:14:28.702130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:122344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.375  [2024-12-13 19:14:28.702185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:30:38.375  [2024-12-13 19:14:28.702287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:122432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.375  [2024-12-13 19:14:28.702311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:30:38.375  [2024-12-13 19:14:28.702335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:122440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.375  [2024-12-13 19:14:28.702352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:30:38.375  [2024-12-13 19:14:28.702374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:122448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.375  [2024-12-13 19:14:28.702389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:30:38.375  [2024-12-13 19:14:28.702410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:122456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.375  [2024-12-13 19:14:28.702426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:30:38.375  [2024-12-13 19:14:28.702448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:122464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.375  [2024-12-13 19:14:28.702463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:30:38.375  [2024-12-13 19:14:28.702484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:122472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.375  [2024-12-13 19:14:28.702499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:30:38.375  [2024-12-13 19:14:28.702521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:122480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.375  [2024-12-13 19:14:28.702536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:30:38.375  [2024-12-13 19:14:28.702572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:122488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.375  [2024-12-13 19:14:28.702617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:30:38.375  [2024-12-13 19:14:28.702636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:122496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.375  [2024-12-13 19:14:28.702649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:30:38.375  [2024-12-13 19:14:28.702689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:122504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.375  [2024-12-13 19:14:28.702704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:30:38.375  [2024-12-13 19:14:28.702722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:122512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.375  [2024-12-13 19:14:28.702735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:30:38.375  [2024-12-13 19:14:28.702753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:122520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.375  [2024-12-13 19:14:28.702766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:30:38.375  [2024-12-13 19:14:28.702784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:122528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.375  [2024-12-13 19:14:28.702797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:30:38.375  [2024-12-13 19:14:28.702815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:122536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.375  [2024-12-13 19:14:28.702828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:38.375  [2024-12-13 19:14:28.702848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:122544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.375  [2024-12-13 19:14:28.702862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:30:38.375  [2024-12-13 19:14:28.702880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:122552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.375  [2024-12-13 19:14:28.702893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:30:38.375  [2024-12-13 19:14:28.702912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:122560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.375  [2024-12-13 19:14:28.702925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:30:38.376  [2024-12-13 19:14:28.702944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:122568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.376  [2024-12-13 19:14:28.702957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:30:38.376  [2024-12-13 19:14:28.702976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:122576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.376  [2024-12-13 19:14:28.702989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:30:38.376  [2024-12-13 19:14:28.703008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:122584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.376  [2024-12-13 19:14:28.703021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:30:38.376  [2024-12-13 19:14:28.703039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:122592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.376  [2024-12-13 19:14:28.703053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:30:38.376  [2024-12-13 19:14:28.703071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:122600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.376  [2024-12-13 19:14:28.703092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:30:38.376  [2024-12-13 19:14:28.703112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:122608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.376  [2024-12-13 19:14:28.703125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:30:38.376  [2024-12-13 19:14:28.703144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:122616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.376  [2024-12-13 19:14:28.703157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:30:38.376  [2024-12-13 19:14:28.703175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:122624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.376  [2024-12-13 19:14:28.703189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:30:38.376  [2024-12-13 19:14:28.703207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:122632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.376  [2024-12-13 19:14:28.703221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:30:38.376  [2024-12-13 19:14:28.703274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:122640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.376  [2024-12-13 19:14:28.703289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:30:38.376  [2024-12-13 19:14:28.703326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:122648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.376  [2024-12-13 19:14:28.703342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:30:38.376  [2024-12-13 19:14:28.703363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:122656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.376  [2024-12-13 19:14:28.703378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:30:38.376  [2024-12-13 19:14:28.703399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:122664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.376  [2024-12-13 19:14:28.703415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:30:38.376  [2024-12-13 19:14:28.703437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:122672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.376  [2024-12-13 19:14:28.703453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:30:38.376  [2024-12-13 19:14:28.703474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:122680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.376  [2024-12-13 19:14:28.703489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:30:38.376  [2024-12-13 19:14:28.703510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:122688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.376  [2024-12-13 19:14:28.703526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:30:38.376  [2024-12-13 19:14:28.703713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:122696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.376  [2024-12-13 19:14:28.703744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:30:38.376  [2024-12-13 19:14:28.703770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:122704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.376  [2024-12-13 19:14:28.703785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:30:38.376  [2024-12-13 19:14:28.703806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:122712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.376  [2024-12-13 19:14:28.703819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:30:38.376  [2024-12-13 19:14:28.703839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:122720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.376  [2024-12-13 19:14:28.703853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:30:38.376  [2024-12-13 19:14:28.703873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:122728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.376  [2024-12-13 19:14:28.703886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:30:38.376  [2024-12-13 19:14:28.703907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:122736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.376  [2024-12-13 19:14:28.703920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:30:38.376  [2024-12-13 19:14:28.703940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:122744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.376  [2024-12-13 19:14:28.703953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:30:38.376  [2024-12-13 19:14:28.703975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:122752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.376  [2024-12-13 19:14:28.703988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:30:38.376  [2024-12-13 19:14:28.704010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:122760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.376  [2024-12-13 19:14:28.704023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:30:38.376  [2024-12-13 19:14:28.704043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:122768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.376  [2024-12-13 19:14:28.704056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:30:38.376  [2024-12-13 19:14:28.704077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:122776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.376  [2024-12-13 19:14:28.704090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:30:38.376  [2024-12-13 19:14:28.704110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:122784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.376  [2024-12-13 19:14:28.704123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:30:38.376  [2024-12-13 19:14:28.704143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:122792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.376  [2024-12-13 19:14:28.704156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:30:38.376  [2024-12-13 19:14:28.704184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:122800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.376  [2024-12-13 19:14:28.704198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:30:38.376  [2024-12-13 19:14:28.704218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:122808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.376  [2024-12-13 19:14:28.704248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:30:38.376  [2024-12-13 19:14:28.704272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:122816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.376  [2024-12-13 19:14:28.704287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:30:38.376  [2024-12-13 19:14:28.704325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:122824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.376  [2024-12-13 19:14:28.704342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:30:38.377  [2024-12-13 19:14:28.704365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:122832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.377  [2024-12-13 19:14:28.704381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:30:38.377  [2024-12-13 19:14:28.704404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:122840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.377  [2024-12-13 19:14:28.704419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:30:38.377  [2024-12-13 19:14:28.704442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:122848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.377  [2024-12-13 19:14:28.704457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:30:38.377  [2024-12-13 19:14:28.704481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:122856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.377  [2024-12-13 19:14:28.704496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:30:38.377  [2024-12-13 19:14:28.704520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:122864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.377  [2024-12-13 19:14:28.704535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:30:38.377  [2024-12-13 19:14:28.704573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:122872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.377  [2024-12-13 19:14:28.704602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:30:38.377  [2024-12-13 19:14:28.704653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:122880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.377  [2024-12-13 19:14:28.704666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:30:38.377  [2024-12-13 19:14:28.704686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:122888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.377  [2024-12-13 19:14:28.704700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:30:38.377  [2024-12-13 19:14:28.704730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:122896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.377  [2024-12-13 19:14:28.704745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:30:38.377  [2024-12-13 19:14:28.704765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:122904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.377  [2024-12-13 19:14:28.704779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:30:38.377  [2024-12-13 19:14:28.704799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:122912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.377  [2024-12-13 19:14:28.704812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:30:38.377  [2024-12-13 19:14:28.704832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:122920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.377  [2024-12-13 19:14:28.704845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:30:38.377  [2024-12-13 19:14:28.704866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:122928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.377  [2024-12-13 19:14:28.704880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:30:38.377  [2024-12-13 19:14:28.704900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:122352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.377  [2024-12-13 19:14:28.704913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:30:38.377  [2024-12-13 19:14:28.706856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:122360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.377  [2024-12-13 19:14:28.706879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:30:38.377  [2024-12-13 19:14:28.706905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:122368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.377  [2024-12-13 19:14:28.706920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:30:38.377  [2024-12-13 19:14:28.706944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:122376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.377  [2024-12-13 19:14:28.706959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:30:38.377  [2024-12-13 19:14:28.706983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:122384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.377  [2024-12-13 19:14:28.706997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:30:38.377  [2024-12-13 19:14:28.707022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:122392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.377  [2024-12-13 19:14:28.707036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:30:38.377  [2024-12-13 19:14:28.707060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:122400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.377  [2024-12-13 19:14:28.707074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:30:38.377  [2024-12-13 19:14:28.707099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:122408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.377  [2024-12-13 19:14:28.707121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:30:38.377  [2024-12-13 19:14:28.707147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:122416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.377  [2024-12-13 19:14:28.707162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:30:38.377  [2024-12-13 19:14:28.707385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:122424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.377  [2024-12-13 19:14:28.707412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:30:38.377       9757.67 IOPS,    38.12 MiB/s
[2024-12-13T19:15:10.201Z]      9327.94 IOPS,    36.44 MiB/s
[2024-12-13T19:15:10.201Z]      9338.18 IOPS,    36.48 MiB/s
[2024-12-13T19:15:10.201Z]      9348.56 IOPS,    36.52 MiB/s
[2024-12-13T19:15:10.201Z]      9355.89 IOPS,    36.55 MiB/s
[2024-12-13T19:15:10.201Z]      9373.25 IOPS,    36.61 MiB/s
[2024-12-13T19:15:10.201Z]      9381.67 IOPS,    36.65 MiB/s
[2024-12-13T19:15:10.201Z] [2024-12-13 19:14:35.738924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:55168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.377  [2024-12-13 19:14:35.738979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:30:38.377  [2024-12-13 19:14:35.739045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:55176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.377  [2024-12-13 19:14:35.739063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:30:38.377  [2024-12-13 19:14:35.739083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:55184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.377  [2024-12-13 19:14:35.739096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:30:38.377  [2024-12-13 19:14:35.739115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:55192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.377  [2024-12-13 19:14:35.739129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:30:38.377  [2024-12-13 19:14:35.739147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:55200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.377  [2024-12-13 19:14:35.739161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:38.377  [2024-12-13 19:14:35.739179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:55208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.377  [2024-12-13 19:14:35.739192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.377  [2024-12-13 19:14:35.739210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:55216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.377  [2024-12-13 19:14:35.739223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:38.377  [2024-12-13 19:14:35.739290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:55224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.377  [2024-12-13 19:14:35.739307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:30:38.377  [2024-12-13 19:14:35.739386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:55232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.377  [2024-12-13 19:14:35.739410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:30:38.377  [2024-12-13 19:14:35.739458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:55240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.377  [2024-12-13 19:14:35.739475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:30:38.377  [2024-12-13 19:14:35.739496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:55248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.377  [2024-12-13 19:14:35.739510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:30:38.377  [2024-12-13 19:14:35.739531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:55256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.377  [2024-12-13 19:14:35.739545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:30:38.377  [2024-12-13 19:14:35.739566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:55264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.377  [2024-12-13 19:14:35.739595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:30:38.377  [2024-12-13 19:14:35.739615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:55272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.377  [2024-12-13 19:14:35.739658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:30:38.377  [2024-12-13 19:14:35.739677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:55280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.377  [2024-12-13 19:14:35.739690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:30:38.378  [2024-12-13 19:14:35.739709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:55288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.378  [2024-12-13 19:14:35.739722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:30:38.378  [2024-12-13 19:14:35.739769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:55296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.378  [2024-12-13 19:14:35.739787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:30:38.378  [2024-12-13 19:14:35.739809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:55304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.378  [2024-12-13 19:14:35.739823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:30:38.378  [2024-12-13 19:14:35.739858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:55312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.378  [2024-12-13 19:14:35.739872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:30:38.378  [2024-12-13 19:14:35.739892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:55320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.378  [2024-12-13 19:14:35.739905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:30:38.378  [2024-12-13 19:14:35.739925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:55328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.378  [2024-12-13 19:14:35.739939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:30:38.378  [2024-12-13 19:14:35.739968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:55336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.378  [2024-12-13 19:14:35.739983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:30:38.378  [2024-12-13 19:14:35.740003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:55344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.378  [2024-12-13 19:14:35.740016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:30:38.378  [2024-12-13 19:14:35.740037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:55352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.378  [2024-12-13 19:14:35.740051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:30:38.378  [2024-12-13 19:14:35.740109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:55360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.378  [2024-12-13 19:14:35.740128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:30:38.378  [2024-12-13 19:14:35.740150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:55368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.378  [2024-12-13 19:14:35.740179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:30:38.378  [2024-12-13 19:14:35.740199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.378  [2024-12-13 19:14:35.740212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:30:38.378  [2024-12-13 19:14:35.740248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:55384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.378  [2024-12-13 19:14:35.740280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:30:38.378  [2024-12-13 19:14:35.740302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:55392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.378  [2024-12-13 19:14:35.740317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:30:38.378  [2024-12-13 19:14:35.740354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:55400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.378  [2024-12-13 19:14:35.740371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:30:38.378  [2024-12-13 19:14:35.740393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:55408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.378  [2024-12-13 19:14:35.740409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:30:38.378  [2024-12-13 19:14:35.740431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:55416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.378  [2024-12-13 19:14:35.740446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:30:38.378  [2024-12-13 19:14:35.741399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:55424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.378  [2024-12-13 19:14:35.741423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:30:38.378  [2024-12-13 19:14:35.741449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:55432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.378  [2024-12-13 19:14:35.741488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:30:38.378  [2024-12-13 19:14:35.741513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:55440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.378  [2024-12-13 19:14:35.741529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:30:38.378  [2024-12-13 19:14:35.741551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:55448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.378  [2024-12-13 19:14:35.741566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:30:38.378  [2024-12-13 19:14:35.741619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:55456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.378  [2024-12-13 19:14:35.741656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:30:38.378  [2024-12-13 19:14:35.741692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:55464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.378  [2024-12-13 19:14:35.741706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:30:38.378  [2024-12-13 19:14:35.741753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:55472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.378  [2024-12-13 19:14:35.741772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:38.378  [2024-12-13 19:14:35.741796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:54536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.378  [2024-12-13 19:14:35.741811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:30:38.378  [2024-12-13 19:14:35.741835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:54544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.378  [2024-12-13 19:14:35.741850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:30:38.378  [2024-12-13 19:14:35.741873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:54552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.378  [2024-12-13 19:14:35.741888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:30:38.378  [2024-12-13 19:14:35.741911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:54560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.378  [2024-12-13 19:14:35.741926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:30:38.378  [2024-12-13 19:14:35.741949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:54568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.378  [2024-12-13 19:14:35.741964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:30:38.378  [2024-12-13 19:14:35.741988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:54576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.378  [2024-12-13 19:14:35.742003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:30:38.378  [2024-12-13 19:14:35.742042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:54584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.378  [2024-12-13 19:14:35.742078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:30:38.378  [2024-12-13 19:14:35.742099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:54592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.378  [2024-12-13 19:14:35.742113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:30:38.378  [2024-12-13 19:14:35.742133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:54600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.378  [2024-12-13 19:14:35.742146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:30:38.378  [2024-12-13 19:14:35.742167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:54608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.378  [2024-12-13 19:14:35.742191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:30:38.378  [2024-12-13 19:14:35.742212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:54616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.378  [2024-12-13 19:14:35.742237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:30:38.378  [2024-12-13 19:14:35.742274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:54624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.378  [2024-12-13 19:14:35.742289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:30:38.378  [2024-12-13 19:14:35.742333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:54632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.378  [2024-12-13 19:14:35.742350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:30:38.378  [2024-12-13 19:14:35.742373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:54640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.378  [2024-12-13 19:14:35.742388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:30:38.378  [2024-12-13 19:14:35.742411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:54648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.379  [2024-12-13 19:14:35.742426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:30:38.379  [2024-12-13 19:14:35.742448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:54656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.379  [2024-12-13 19:14:35.742463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:30:38.379  [2024-12-13 19:14:35.742486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:54664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.379  [2024-12-13 19:14:35.742501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:30:38.379  [2024-12-13 19:14:35.742523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:54672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.379  [2024-12-13 19:14:35.742538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:30:38.379  [2024-12-13 19:14:35.742560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:54680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.379  [2024-12-13 19:14:35.742575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:30:38.379  [2024-12-13 19:14:35.742634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:54688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.379  [2024-12-13 19:14:35.742649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:30:38.379  [2024-12-13 19:14:35.742670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:54696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.379  [2024-12-13 19:14:35.742684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:30:38.379  [2024-12-13 19:14:35.742706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:54704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.379  [2024-12-13 19:14:35.742720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:30:38.379  [2024-12-13 19:14:35.742741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:54712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.379  [2024-12-13 19:14:35.742755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:30:38.379  [2024-12-13 19:14:35.742776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:54720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.379  [2024-12-13 19:14:35.742789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:30:38.379  [2024-12-13 19:14:35.742810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:54728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.379  [2024-12-13 19:14:35.742825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:30:38.379  [2024-12-13 19:14:35.742845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:54736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.379  [2024-12-13 19:14:35.742859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:30:38.379  [2024-12-13 19:14:35.742882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:54744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.379  [2024-12-13 19:14:35.742896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:30:38.379  [2024-12-13 19:14:35.743008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:54752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.379  [2024-12-13 19:14:35.743029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:30:38.379  [2024-12-13 19:14:35.743055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:54760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.379  [2024-12-13 19:14:35.743070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:30:38.379  [2024-12-13 19:14:35.743093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:54768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.379  [2024-12-13 19:14:35.743108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:30:38.379  [2024-12-13 19:14:35.743132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:54776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.379  [2024-12-13 19:14:35.743146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:30:38.379  [2024-12-13 19:14:35.743177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:55480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.379  [2024-12-13 19:14:35.743192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:30:38.379  [2024-12-13 19:14:35.743215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:54784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.379  [2024-12-13 19:14:35.743246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:30:38.379  [2024-12-13 19:14:35.743281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:54792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.379  [2024-12-13 19:14:35.743299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:30:38.379  [2024-12-13 19:14:35.743323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:54800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.379  [2024-12-13 19:14:35.743337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:30:38.379  [2024-12-13 19:14:35.743361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:54808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.379  [2024-12-13 19:14:35.743376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:30:38.379  [2024-12-13 19:14:35.743399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:54816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.379  [2024-12-13 19:14:35.743413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:30:38.379  [2024-12-13 19:14:35.743437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:54824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.379  [2024-12-13 19:14:35.743452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:30:38.379  [2024-12-13 19:14:35.743475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:54832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.379  [2024-12-13 19:14:35.743489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:30:38.379  [2024-12-13 19:14:35.743513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:54840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.379  [2024-12-13 19:14:35.743527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:30:38.379  [2024-12-13 19:14:35.743551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:54848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.379  [2024-12-13 19:14:35.743580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:30:38.379  [2024-12-13 19:14:35.743603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:54856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.379  [2024-12-13 19:14:35.743617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:30:38.379  [2024-12-13 19:14:35.743640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:54864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.379  [2024-12-13 19:14:35.743655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:30:38.379  [2024-12-13 19:14:35.743685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:54872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.379  [2024-12-13 19:14:35.743700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:30:38.379  [2024-12-13 19:14:35.743723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:54880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.379  [2024-12-13 19:14:35.743737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:30:38.379  [2024-12-13 19:14:35.743759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:54888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.379  [2024-12-13 19:14:35.743774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:30:38.379  [2024-12-13 19:14:35.743798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:54896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.379  [2024-12-13 19:14:35.743812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:30:38.379  [2024-12-13 19:14:35.743835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:54904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.379  [2024-12-13 19:14:35.743849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:30:38.379  [2024-12-13 19:14:35.743872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:55488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.379  [2024-12-13 19:14:35.743886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:30:38.379  [2024-12-13 19:14:35.743909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:55496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.379  [2024-12-13 19:14:35.743923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:30:38.379  [2024-12-13 19:14:35.743945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:55504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.379  [2024-12-13 19:14:35.743959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:30:38.379  [2024-12-13 19:14:35.743982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:55512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.379  [2024-12-13 19:14:35.743996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:30:38.380  [2024-12-13 19:14:35.744019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:55520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.380  [2024-12-13 19:14:35.744032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:30:38.380  [2024-12-13 19:14:35.744056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:55528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.380  [2024-12-13 19:14:35.744070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:30:38.380  [2024-12-13 19:14:35.744093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:55536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.380  [2024-12-13 19:14:35.744107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:30:38.380  [2024-12-13 19:14:35.744130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:55544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.380  [2024-12-13 19:14:35.744149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:30:38.380  [2024-12-13 19:14:35.744174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:55552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.380  [2024-12-13 19:14:35.744188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:30:38.380  [2024-12-13 19:14:35.744211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:54912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.380  [2024-12-13 19:14:35.744226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:30:38.380  [2024-12-13 19:14:35.744277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:54920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.380  [2024-12-13 19:14:35.744293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:30:38.380  [2024-12-13 19:14:35.744333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:54928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.380  [2024-12-13 19:14:35.744347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:30:38.380  [2024-12-13 19:14:35.744372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:54936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.380  [2024-12-13 19:14:35.744387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:30:38.380  [2024-12-13 19:14:35.744412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:54944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.380  [2024-12-13 19:14:35.744427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:30:38.380  [2024-12-13 19:14:35.744451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:54952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.380  [2024-12-13 19:14:35.744467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:30:38.380  [2024-12-13 19:14:35.744492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:54960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.380  [2024-12-13 19:14:35.744506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:30:38.380  [2024-12-13 19:14:35.744531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:54968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.380  [2024-12-13 19:14:35.744546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:30:38.380  [2024-12-13 19:14:35.744585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:54976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.380  [2024-12-13 19:14:35.744599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:30:38.380  [2024-12-13 19:14:35.744623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:54984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.380  [2024-12-13 19:14:35.744638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:30:38.380  [2024-12-13 19:14:35.744661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:54992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.380  [2024-12-13 19:14:35.744682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:30:38.380  [2024-12-13 19:14:35.744707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:55000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.380  [2024-12-13 19:14:35.744722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:30:38.380  [2024-12-13 19:14:35.744745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:55008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.380  [2024-12-13 19:14:35.744760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:30:38.380  [2024-12-13 19:14:35.744783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:55016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.380  [2024-12-13 19:14:35.744797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:30:38.380  [2024-12-13 19:14:35.744845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:55024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.380  [2024-12-13 19:14:35.744860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:30:38.380  [2024-12-13 19:14:35.744884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:55032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.380  [2024-12-13 19:14:35.744898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:30:38.380  [2024-12-13 19:14:35.744921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:55040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.380  [2024-12-13 19:14:35.744935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:30:38.380  [2024-12-13 19:14:35.744958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:55048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.380  [2024-12-13 19:14:35.744972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:30:38.380  [2024-12-13 19:14:35.744995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:55056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.380  [2024-12-13 19:14:35.745009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:30:38.380  [2024-12-13 19:14:35.745033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:55064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.380  [2024-12-13 19:14:35.745047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:30:38.380  [2024-12-13 19:14:35.745070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:55072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.380  [2024-12-13 19:14:35.745084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:30:38.380  [2024-12-13 19:14:35.745106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:55080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.380  [2024-12-13 19:14:35.745120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:30:38.380  [2024-12-13 19:14:35.745143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:55088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.380  [2024-12-13 19:14:35.745157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:30:38.380  [2024-12-13 19:14:35.745186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:55096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.380  [2024-12-13 19:14:35.745201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:30:38.380  [2024-12-13 19:14:35.745224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:55104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.380  [2024-12-13 19:14:35.745254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:30:38.380  [2024-12-13 19:14:35.745289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:55112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.380  [2024-12-13 19:14:35.745307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:30:38.380  [2024-12-13 19:14:35.745337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:55120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.380  [2024-12-13 19:14:35.745352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:30:38.380  [2024-12-13 19:14:35.745375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:55128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.380  [2024-12-13 19:14:35.745389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:30:38.380  [2024-12-13 19:14:35.745414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:55136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.380  [2024-12-13 19:14:35.745428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:30:38.380  [2024-12-13 19:14:35.745452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:55144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.381  [2024-12-13 19:14:35.745466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:30:38.381  [2024-12-13 19:14:35.745495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:55152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.381  [2024-12-13 19:14:35.745511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:30:38.381  [2024-12-13 19:14:35.745540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:55160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.381  [2024-12-13 19:14:35.745555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:30:38.381       9246.59 IOPS,    36.12 MiB/s
[2024-12-13T19:15:10.205Z]      8844.57 IOPS,    34.55 MiB/s
[2024-12-13T19:15:10.205Z]      8476.04 IOPS,    33.11 MiB/s
[2024-12-13T19:15:10.205Z]      8137.00 IOPS,    31.79 MiB/s
[2024-12-13T19:15:10.205Z]      7824.04 IOPS,    30.56 MiB/s
[2024-12-13T19:15:10.205Z]      7534.26 IOPS,    29.43 MiB/s
[2024-12-13T19:15:10.205Z]      7265.18 IOPS,    28.38 MiB/s
[2024-12-13T19:15:10.205Z]      7104.83 IOPS,    27.75 MiB/s
[2024-12-13T19:15:10.205Z]      7178.63 IOPS,    28.04 MiB/s
[2024-12-13T19:15:10.205Z]      7248.81 IOPS,    28.32 MiB/s
[2024-12-13T19:15:10.205Z]      7315.12 IOPS,    28.57 MiB/s
[2024-12-13T19:15:10.205Z]      7381.52 IOPS,    28.83 MiB/s
[2024-12-13T19:15:10.205Z]      7440.06 IOPS,    29.06 MiB/s
[2024-12-13T19:15:10.205Z]      7489.71 IOPS,    29.26 MiB/s
[2024-12-13T19:15:10.205Z] [2024-12-13 19:14:49.077607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:4232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.381  [2024-12-13 19:14:49.077684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:30:38.381  [2024-12-13 19:14:49.077761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:4240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.381  [2024-12-13 19:14:49.077802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:30:38.381  [2024-12-13 19:14:49.077827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:4248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.381  [2024-12-13 19:14:49.077843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:30:38.381  [2024-12-13 19:14:49.077863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.381  [2024-12-13 19:14:49.077878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:30:38.381  [2024-12-13 19:14:49.077899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:4264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.381  [2024-12-13 19:14:49.077914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:30:38.381  [2024-12-13 19:14:49.077934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:4272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.381  [2024-12-13 19:14:49.077948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:30:38.381  [2024-12-13 19:14:49.077969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:4280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.381  [2024-12-13 19:14:49.077983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:30:38.381  [2024-12-13 19:14:49.078003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:4288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.381  [2024-12-13 19:14:49.078018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:30:38.381  [2024-12-13 19:14:49.078050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:4040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.381  [2024-12-13 19:14:49.078064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:30:38.381  [2024-12-13 19:14:49.078337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:4296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.381  [2024-12-13 19:14:49.078360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.381  [2024-12-13 19:14:49.078376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:4304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.381  [2024-12-13 19:14:49.078390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.381  [2024-12-13 19:14:49.078405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:4312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.381  [2024-12-13 19:14:49.078418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.381  [2024-12-13 19:14:49.078432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:4320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.381  [2024-12-13 19:14:49.078445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.381  [2024-12-13 19:14:49.078460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.381  [2024-12-13 19:14:49.078473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.381  [2024-12-13 19:14:49.078499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:4336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.381  [2024-12-13 19:14:49.078513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.381  [2024-12-13 19:14:49.078528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:4344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.381  [2024-12-13 19:14:49.078542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.381  [2024-12-13 19:14:49.078556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:4352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.381  [2024-12-13 19:14:49.078571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.381  [2024-12-13 19:14:49.078615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:4360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.381  [2024-12-13 19:14:49.078642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.381  [2024-12-13 19:14:49.078656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:4368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.381  [2024-12-13 19:14:49.078683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.381  [2024-12-13 19:14:49.078696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:4376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.381  [2024-12-13 19:14:49.078708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.381  [2024-12-13 19:14:49.078721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:4384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.381  [2024-12-13 19:14:49.078733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.381  [2024-12-13 19:14:49.078747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:4392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.381  [2024-12-13 19:14:49.078758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.381  [2024-12-13 19:14:49.078772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:4400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.381  [2024-12-13 19:14:49.078783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.381  [2024-12-13 19:14:49.078812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:4408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.381  [2024-12-13 19:14:49.078824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.381  [2024-12-13 19:14:49.078838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:4416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.381  [2024-12-13 19:14:49.078850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.381  [2024-12-13 19:14:49.078863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:4424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.381  [2024-12-13 19:14:49.078876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.381  [2024-12-13 19:14:49.078889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:4432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.381  [2024-12-13 19:14:49.078918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.381  [2024-12-13 19:14:49.078938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:4440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.381  [2024-12-13 19:14:49.078951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.381  [2024-12-13 19:14:49.078966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:4448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.381  [2024-12-13 19:14:49.078979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.381  [2024-12-13 19:14:49.078993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:4456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.381  [2024-12-13 19:14:49.079005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.381  [2024-12-13 19:14:49.079019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:4464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.381  [2024-12-13 19:14:49.079032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.381  [2024-12-13 19:14:49.079046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:4472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.381  [2024-12-13 19:14:49.079059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.381  [2024-12-13 19:14:49.079073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:4480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.381  [2024-12-13 19:14:49.079086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.381  [2024-12-13 19:14:49.079101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:4488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.382  [2024-12-13 19:14:49.079114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.382  [2024-12-13 19:14:49.079128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:4496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.382  [2024-12-13 19:14:49.079141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.382  [2024-12-13 19:14:49.079155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:4504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.382  [2024-12-13 19:14:49.079167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.382  [2024-12-13 19:14:49.079181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.382  [2024-12-13 19:14:49.079194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.382  [2024-12-13 19:14:49.079208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:4520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.382  [2024-12-13 19:14:49.079220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.382  [2024-12-13 19:14:49.079250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:4528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.382  [2024-12-13 19:14:49.079264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.382  [2024-12-13 19:14:49.079279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:4536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.382  [2024-12-13 19:14:49.079301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.382  [2024-12-13 19:14:49.079317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:4544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.382  [2024-12-13 19:14:49.079345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.382  [2024-12-13 19:14:49.079361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:4552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.382  [2024-12-13 19:14:49.079375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.382  [2024-12-13 19:14:49.079390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:4560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.382  [2024-12-13 19:14:49.079404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.382  [2024-12-13 19:14:49.079419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:4568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.382  [2024-12-13 19:14:49.079433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.382  [2024-12-13 19:14:49.079448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:4576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.382  [2024-12-13 19:14:49.079462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.382  [2024-12-13 19:14:49.079477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:4584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.382  [2024-12-13 19:14:49.079490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.382  [2024-12-13 19:14:49.079505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:4592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.382  [2024-12-13 19:14:49.079518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.382  [2024-12-13 19:14:49.079534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:4600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.382  [2024-12-13 19:14:49.079547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.382  [2024-12-13 19:14:49.079563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:4608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.382  [2024-12-13 19:14:49.079576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.382  [2024-12-13 19:14:49.079592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:4616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.382  [2024-12-13 19:14:49.079620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.382  [2024-12-13 19:14:49.079649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.382  [2024-12-13 19:14:49.079677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.382  [2024-12-13 19:14:49.079691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.382  [2024-12-13 19:14:49.079703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.382  [2024-12-13 19:14:49.079723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:4640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.382  [2024-12-13 19:14:49.079736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.382  [2024-12-13 19:14:49.079751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:4648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.382  [2024-12-13 19:14:49.079764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.382  [2024-12-13 19:14:49.079777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.382  [2024-12-13 19:14:49.079789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.382  [2024-12-13 19:14:49.079803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.382  [2024-12-13 19:14:49.079815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.382  [2024-12-13 19:14:49.079828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:4672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.382  [2024-12-13 19:14:49.079841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.382  [2024-12-13 19:14:49.079854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.382  [2024-12-13 19:14:49.079866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.382  [2024-12-13 19:14:49.079880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:4688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.382  [2024-12-13 19:14:49.079892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.382  [2024-12-13 19:14:49.079906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:4696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.382  [2024-12-13 19:14:49.079918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.382  [2024-12-13 19:14:49.079931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:4704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.382  [2024-12-13 19:14:49.079944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.382  [2024-12-13 19:14:49.079957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.382  [2024-12-13 19:14:49.079969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.382  [2024-12-13 19:14:49.079983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:4720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.382  [2024-12-13 19:14:49.079995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.382  [2024-12-13 19:14:49.080009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:4728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.382  [2024-12-13 19:14:49.080021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.382  [2024-12-13 19:14:49.080034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:4736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.382  [2024-12-13 19:14:49.080047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.382  [2024-12-13 19:14:49.080067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:4744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.382  [2024-12-13 19:14:49.080080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.382  [2024-12-13 19:14:49.080094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:4752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.382  [2024-12-13 19:14:49.080106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.382  [2024-12-13 19:14:49.080120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.382  [2024-12-13 19:14:49.080132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.382  [2024-12-13 19:14:49.080146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:4768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.382  [2024-12-13 19:14:49.080158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.382  [2024-12-13 19:14:49.080172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:4776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.382  [2024-12-13 19:14:49.080184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.382  [2024-12-13 19:14:49.080197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:4784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.382  [2024-12-13 19:14:49.080209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.382  [2024-12-13 19:14:49.080223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.382  [2024-12-13 19:14:49.080267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.382  [2024-12-13 19:14:49.080283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:4056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.382  [2024-12-13 19:14:49.080307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.382  [2024-12-13 19:14:49.080325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:4064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.382  [2024-12-13 19:14:49.080338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.382  [2024-12-13 19:14:49.080353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:4072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.383  [2024-12-13 19:14:49.080367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.383  [2024-12-13 19:14:49.080382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.383  [2024-12-13 19:14:49.080395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.383  [2024-12-13 19:14:49.080410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:4088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.383  [2024-12-13 19:14:49.080423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.383  [2024-12-13 19:14:49.080439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:4096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.383  [2024-12-13 19:14:49.080458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.383  [2024-12-13 19:14:49.080475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:4104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.383  [2024-12-13 19:14:49.080488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.383  [2024-12-13 19:14:49.080503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:4112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.383  [2024-12-13 19:14:49.080516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.383  [2024-12-13 19:14:49.080532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:4120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.383  [2024-12-13 19:14:49.080546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.383  [2024-12-13 19:14:49.080561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:4128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.383  [2024-12-13 19:14:49.080574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.383  [2024-12-13 19:14:49.080590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.383  [2024-12-13 19:14:49.080604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.383  [2024-12-13 19:14:49.080619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:4144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.383  [2024-12-13 19:14:49.080661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.383  [2024-12-13 19:14:49.080675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:4152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.383  [2024-12-13 19:14:49.080687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.383  [2024-12-13 19:14:49.080701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:4160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.383  [2024-12-13 19:14:49.080713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.383  [2024-12-13 19:14:49.080726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:4792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.383  [2024-12-13 19:14:49.080738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.383  [2024-12-13 19:14:49.080752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.383  [2024-12-13 19:14:49.080764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.383  [2024-12-13 19:14:49.080778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:4808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.383  [2024-12-13 19:14:49.080790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.383  [2024-12-13 19:14:49.080803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:4816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.383  [2024-12-13 19:14:49.080815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.383  [2024-12-13 19:14:49.080834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:4824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.383  [2024-12-13 19:14:49.080847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.383  [2024-12-13 19:14:49.080861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:4832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.383  [2024-12-13 19:14:49.080873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.383  [2024-12-13 19:14:49.080887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.383  [2024-12-13 19:14:49.080899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.383  [2024-12-13 19:14:49.080913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:4848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.383  [2024-12-13 19:14:49.080925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.383  [2024-12-13 19:14:49.080939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.383  [2024-12-13 19:14:49.080951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.383  [2024-12-13 19:14:49.080971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:4864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.383  [2024-12-13 19:14:49.080984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.383  [2024-12-13 19:14:49.080998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:4872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.383  [2024-12-13 19:14:49.081010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.383  [2024-12-13 19:14:49.081024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:4880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.383  [2024-12-13 19:14:49.081036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.383  [2024-12-13 19:14:49.081050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:4888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.383  [2024-12-13 19:14:49.081062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.383  [2024-12-13 19:14:49.081076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.383  [2024-12-13 19:14:49.081088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.383  [2024-12-13 19:14:49.081102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:4904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.383  [2024-12-13 19:14:49.081114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.383  [2024-12-13 19:14:49.081128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:4912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.383  [2024-12-13 19:14:49.081141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.383  [2024-12-13 19:14:49.081154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:4920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.383  [2024-12-13 19:14:49.081166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.383  [2024-12-13 19:14:49.081185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:4928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:38.383  [2024-12-13 19:14:49.081198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.383  [2024-12-13 19:14:49.081226] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.383  [2024-12-13 19:14:49.081255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4936 len:8 PRP1 0x0 PRP2 0x0
00:30:38.383  [2024-12-13 19:14:49.081296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.383  [2024-12-13 19:14:49.081392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:30:38.383  [2024-12-13 19:14:49.081416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.383  [2024-12-13 19:14:49.081432] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:30:38.383  [2024-12-13 19:14:49.081445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.383  [2024-12-13 19:14:49.081459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:30:38.383  [2024-12-13 19:14:49.081471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.383  [2024-12-13 19:14:49.081485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:30:38.383  [2024-12-13 19:14:49.081498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.383  [2024-12-13 19:14:49.081512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.383  [2024-12-13 19:14:49.081526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.383  [2024-12-13 19:14:49.081550] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e29c0 is same with the state(6) to be set
00:30:38.383  [2024-12-13 19:14:49.081812] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.383  [2024-12-13 19:14:49.081835] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.383  [2024-12-13 19:14:49.081847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4944 len:8 PRP1 0x0 PRP2 0x0
00:30:38.383  [2024-12-13 19:14:49.081860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.383  [2024-12-13 19:14:49.081878] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.383  [2024-12-13 19:14:49.081889] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.383  [2024-12-13 19:14:49.081900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4952 len:8 PRP1 0x0 PRP2 0x0
00:30:38.383  [2024-12-13 19:14:49.081913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.383  [2024-12-13 19:14:49.081926] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.383  [2024-12-13 19:14:49.081936] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.383  [2024-12-13 19:14:49.081946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4960 len:8 PRP1 0x0 PRP2 0x0
00:30:38.383  [2024-12-13 19:14:49.081970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.384  [2024-12-13 19:14:49.081985] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.384  [2024-12-13 19:14:49.081995] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.384  [2024-12-13 19:14:49.082013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4968 len:8 PRP1 0x0 PRP2 0x0
00:30:38.384  [2024-12-13 19:14:49.082027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.384  [2024-12-13 19:14:49.082063] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.384  [2024-12-13 19:14:49.082087] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.384  [2024-12-13 19:14:49.082097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4976 len:8 PRP1 0x0 PRP2 0x0
00:30:38.384  [2024-12-13 19:14:49.082108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.384  [2024-12-13 19:14:49.082120] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.384  [2024-12-13 19:14:49.082129] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.384  [2024-12-13 19:14:49.082139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4984 len:8 PRP1 0x0 PRP2 0x0
00:30:38.384  [2024-12-13 19:14:49.082151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.384  [2024-12-13 19:14:49.082163] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.384  [2024-12-13 19:14:49.082172] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.384  [2024-12-13 19:14:49.082181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:8 PRP1 0x0 PRP2 0x0
00:30:38.384  [2024-12-13 19:14:49.082193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.384  [2024-12-13 19:14:49.082205] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.384  [2024-12-13 19:14:49.082213] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.384  [2024-12-13 19:14:49.082223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5000 len:8 PRP1 0x0 PRP2 0x0
00:30:38.384  [2024-12-13 19:14:49.082257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.384  [2024-12-13 19:14:49.082270] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.384  [2024-12-13 19:14:49.082285] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.384  [2024-12-13 19:14:49.082309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5008 len:8 PRP1 0x0 PRP2 0x0
00:30:38.384  [2024-12-13 19:14:49.082323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.384  [2024-12-13 19:14:49.082337] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.384  [2024-12-13 19:14:49.082346] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.384  [2024-12-13 19:14:49.082356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5016 len:8 PRP1 0x0 PRP2 0x0
00:30:38.384  [2024-12-13 19:14:49.082368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.384  [2024-12-13 19:14:49.082381] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.384  [2024-12-13 19:14:49.082390] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.384  [2024-12-13 19:14:49.082406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:8 PRP1 0x0 PRP2 0x0
00:30:38.384  [2024-12-13 19:14:49.082419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.384  [2024-12-13 19:14:49.082432] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.384  [2024-12-13 19:14:49.082441] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.384  [2024-12-13 19:14:49.082456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5032 len:8 PRP1 0x0 PRP2 0x0
00:30:38.384  [2024-12-13 19:14:49.082469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.384  [2024-12-13 19:14:49.082482] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.384  [2024-12-13 19:14:49.082491] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.384  [2024-12-13 19:14:49.082500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5040 len:8 PRP1 0x0 PRP2 0x0
00:30:38.384  [2024-12-13 19:14:49.082513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.384  [2024-12-13 19:14:49.082525] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.384  [2024-12-13 19:14:49.082534] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.384  [2024-12-13 19:14:49.082544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5048 len:8 PRP1 0x0 PRP2 0x0
00:30:38.384  [2024-12-13 19:14:49.082556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.384  [2024-12-13 19:14:49.082569] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.384  [2024-12-13 19:14:49.082593] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.384  [2024-12-13 19:14:49.082602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:8 PRP1 0x0 PRP2 0x0
00:30:38.384  [2024-12-13 19:14:49.082613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.384  [2024-12-13 19:14:49.082625] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.384  [2024-12-13 19:14:49.082634] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.384  [2024-12-13 19:14:49.082643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4168 len:8 PRP1 0x0 PRP2 0x0
00:30:38.384  [2024-12-13 19:14:49.082655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.384  [2024-12-13 19:14:49.082667] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.384  [2024-12-13 19:14:49.082680] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.384  [2024-12-13 19:14:49.096029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4176 len:8 PRP1 0x0 PRP2 0x0
00:30:38.384  [2024-12-13 19:14:49.096074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.384  [2024-12-13 19:14:49.096100] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.384  [2024-12-13 19:14:49.096115] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.384  [2024-12-13 19:14:49.096130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4184 len:8 PRP1 0x0 PRP2 0x0
00:30:38.384  [2024-12-13 19:14:49.096147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.384  [2024-12-13 19:14:49.096164] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.384  [2024-12-13 19:14:49.096194] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.384  [2024-12-13 19:14:49.096210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4192 len:8 PRP1 0x0 PRP2 0x0
00:30:38.384  [2024-12-13 19:14:49.096265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.384  [2024-12-13 19:14:49.096296] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.384  [2024-12-13 19:14:49.096306] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.384  [2024-12-13 19:14:49.096317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4200 len:8 PRP1 0x0 PRP2 0x0
00:30:38.384  [2024-12-13 19:14:49.096330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.384  [2024-12-13 19:14:49.096343] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.384  [2024-12-13 19:14:49.096353] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.384  [2024-12-13 19:14:49.096363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4208 len:8 PRP1 0x0 PRP2 0x0
00:30:38.384  [2024-12-13 19:14:49.096376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.384  [2024-12-13 19:14:49.096389] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.384  [2024-12-13 19:14:49.096398] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.384  [2024-12-13 19:14:49.096409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4216 len:8 PRP1 0x0 PRP2 0x0
00:30:38.384  [2024-12-13 19:14:49.096421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.384  [2024-12-13 19:14:49.096434] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.384  [2024-12-13 19:14:49.096444] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.384  [2024-12-13 19:14:49.096454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4224 len:8 PRP1 0x0 PRP2 0x0
00:30:38.384  [2024-12-13 19:14:49.096467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.384  [2024-12-13 19:14:49.096480] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.384  [2024-12-13 19:14:49.096490] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.384  [2024-12-13 19:14:49.096500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4232 len:8 PRP1 0x0 PRP2 0x0
00:30:38.384  [2024-12-13 19:14:49.096513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.384  [2024-12-13 19:14:49.096527] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.384  [2024-12-13 19:14:49.096537] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.384  [2024-12-13 19:14:49.096547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4240 len:8 PRP1 0x0 PRP2 0x0
00:30:38.384  [2024-12-13 19:14:49.096560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.384  [2024-12-13 19:14:49.096573] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.384  [2024-12-13 19:14:49.096582] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.384  [2024-12-13 19:14:49.096592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4248 len:8 PRP1 0x0 PRP2 0x0
00:30:38.384  [2024-12-13 19:14:49.096605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.384  [2024-12-13 19:14:49.096655] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.385  [2024-12-13 19:14:49.096665] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.385  [2024-12-13 19:14:49.096674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4256 len:8 PRP1 0x0 PRP2 0x0
00:30:38.385  [2024-12-13 19:14:49.096686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.385  [2024-12-13 19:14:49.096699] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.385  [2024-12-13 19:14:49.096707] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.385  [2024-12-13 19:14:49.096717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4264 len:8 PRP1 0x0 PRP2 0x0
00:30:38.385  [2024-12-13 19:14:49.096729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.385  [2024-12-13 19:14:49.096741] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.385  [2024-12-13 19:14:49.096750] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.385  [2024-12-13 19:14:49.096759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4272 len:8 PRP1 0x0 PRP2 0x0
00:30:38.385  [2024-12-13 19:14:49.096772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.385  [2024-12-13 19:14:49.096783] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.385  [2024-12-13 19:14:49.096792] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.385  [2024-12-13 19:14:49.096802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4280 len:8 PRP1 0x0 PRP2 0x0
00:30:38.385  [2024-12-13 19:14:49.096814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.385  [2024-12-13 19:14:49.096826] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.385  [2024-12-13 19:14:49.096835] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.385  [2024-12-13 19:14:49.096844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4288 len:8 PRP1 0x0 PRP2 0x0
00:30:38.385  [2024-12-13 19:14:49.096872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.385  [2024-12-13 19:14:49.096884] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.385  [2024-12-13 19:14:49.096894] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.385  [2024-12-13 19:14:49.096903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4040 len:8 PRP1 0x0 PRP2 0x0
00:30:38.385  [2024-12-13 19:14:49.096916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.385  [2024-12-13 19:14:49.096928] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.385  [2024-12-13 19:14:49.096938] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.385  [2024-12-13 19:14:49.096948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4296 len:8 PRP1 0x0 PRP2 0x0
00:30:38.385  [2024-12-13 19:14:49.096961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.385  [2024-12-13 19:14:49.096974] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.385  [2024-12-13 19:14:49.096983] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.385  [2024-12-13 19:14:49.096993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4304 len:8 PRP1 0x0 PRP2 0x0
00:30:38.385  [2024-12-13 19:14:49.097011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.385  [2024-12-13 19:14:49.097025] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.385  [2024-12-13 19:14:49.097034] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.385  [2024-12-13 19:14:49.097044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4312 len:8 PRP1 0x0 PRP2 0x0
00:30:38.385  [2024-12-13 19:14:49.097057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.385  [2024-12-13 19:14:49.097069] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.385  [2024-12-13 19:14:49.097105] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.385  [2024-12-13 19:14:49.097114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4320 len:8 PRP1 0x0 PRP2 0x0
00:30:38.385  [2024-12-13 19:14:49.097126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.385  [2024-12-13 19:14:49.097138] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.385  [2024-12-13 19:14:49.097147] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.385  [2024-12-13 19:14:49.097156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4328 len:8 PRP1 0x0 PRP2 0x0
00:30:38.385  [2024-12-13 19:14:49.097168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.385  [2024-12-13 19:14:49.097180] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.385  [2024-12-13 19:14:49.097189] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.385  [2024-12-13 19:14:49.097198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4336 len:8 PRP1 0x0 PRP2 0x0
00:30:38.385  [2024-12-13 19:14:49.097210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.385  [2024-12-13 19:14:49.097222] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.385  [2024-12-13 19:14:49.097247] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.385  [2024-12-13 19:14:49.097257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4344 len:8 PRP1 0x0 PRP2 0x0
00:30:38.385  [2024-12-13 19:14:49.097270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.385  [2024-12-13 19:14:49.097283] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.385  [2024-12-13 19:14:49.097293] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.385  [2024-12-13 19:14:49.097303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4352 len:8 PRP1 0x0 PRP2 0x0
00:30:38.385  [2024-12-13 19:14:49.097330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.385  [2024-12-13 19:14:49.097345] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.385  [2024-12-13 19:14:49.097354] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.385  [2024-12-13 19:14:49.097365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4360 len:8 PRP1 0x0 PRP2 0x0
00:30:38.385  [2024-12-13 19:14:49.097378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.385  [2024-12-13 19:14:49.097391] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.385  [2024-12-13 19:14:49.097400] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.385  [2024-12-13 19:14:49.097416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4368 len:8 PRP1 0x0 PRP2 0x0
00:30:38.385  [2024-12-13 19:14:49.097430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.385  [2024-12-13 19:14:49.097443] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.385  [2024-12-13 19:14:49.097452] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.385  [2024-12-13 19:14:49.097462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4376 len:8 PRP1 0x0 PRP2 0x0
00:30:38.385  [2024-12-13 19:14:49.097476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.385  [2024-12-13 19:14:49.097489] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.385  [2024-12-13 19:14:49.097498] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.385  [2024-12-13 19:14:49.097508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:8 PRP1 0x0 PRP2 0x0
00:30:38.385  [2024-12-13 19:14:49.097521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.385  [2024-12-13 19:14:49.097534] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.385  [2024-12-13 19:14:49.097543] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.385  [2024-12-13 19:14:49.097553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4392 len:8 PRP1 0x0 PRP2 0x0
00:30:38.385  [2024-12-13 19:14:49.097595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.385  [2024-12-13 19:14:49.097621] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.385  [2024-12-13 19:14:49.097641] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.385  [2024-12-13 19:14:49.097650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4400 len:8 PRP1 0x0 PRP2 0x0
00:30:38.385  [2024-12-13 19:14:49.097662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.385  [2024-12-13 19:14:49.097673] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.385  [2024-12-13 19:14:49.097681] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.385  [2024-12-13 19:14:49.097690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4408 len:8 PRP1 0x0 PRP2 0x0
00:30:38.385  [2024-12-13 19:14:49.097701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.386  [2024-12-13 19:14:49.097713] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.386  [2024-12-13 19:14:49.097748] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.386  [2024-12-13 19:14:49.097759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:8 PRP1 0x0 PRP2 0x0
00:30:38.386  [2024-12-13 19:14:49.097772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.386  [2024-12-13 19:14:49.097785] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.386  [2024-12-13 19:14:49.097794] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.386  [2024-12-13 19:14:49.097804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4424 len:8 PRP1 0x0 PRP2 0x0
00:30:38.386  [2024-12-13 19:14:49.097817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.386  [2024-12-13 19:14:49.097830] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.386  [2024-12-13 19:14:49.097846] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.386  [2024-12-13 19:14:49.097856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4432 len:8 PRP1 0x0 PRP2 0x0
00:30:38.386  [2024-12-13 19:14:49.097869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.386  [2024-12-13 19:14:49.097883] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.386  [2024-12-13 19:14:49.097892] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.386  [2024-12-13 19:14:49.097902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4440 len:8 PRP1 0x0 PRP2 0x0
00:30:38.386  [2024-12-13 19:14:49.097915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.386  [2024-12-13 19:14:49.097928] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.386  [2024-12-13 19:14:49.097937] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.386  [2024-12-13 19:14:49.097947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:8 PRP1 0x0 PRP2 0x0
00:30:38.386  [2024-12-13 19:14:49.097960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.386  [2024-12-13 19:14:49.097973] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.386  [2024-12-13 19:14:49.097982] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.386  [2024-12-13 19:14:49.097992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4456 len:8 PRP1 0x0 PRP2 0x0
00:30:38.386  [2024-12-13 19:14:49.098005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.386  [2024-12-13 19:14:49.098017] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.386  [2024-12-13 19:14:49.098027] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.386  [2024-12-13 19:14:49.098047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4464 len:8 PRP1 0x0 PRP2 0x0
00:30:38.386  [2024-12-13 19:14:49.098071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.386  [2024-12-13 19:14:49.098083] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.386  [2024-12-13 19:14:49.098091] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.386  [2024-12-13 19:14:49.098099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4472 len:8 PRP1 0x0 PRP2 0x0
00:30:38.386  [2024-12-13 19:14:49.098111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.386  [2024-12-13 19:14:49.098122] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.386  [2024-12-13 19:14:49.098131] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.386  [2024-12-13 19:14:49.098140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:8 PRP1 0x0 PRP2 0x0
00:30:38.386  [2024-12-13 19:14:49.098152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.386  [2024-12-13 19:14:49.098163] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.386  [2024-12-13 19:14:49.098172] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.386  [2024-12-13 19:14:49.098180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4488 len:8 PRP1 0x0 PRP2 0x0
00:30:38.386  [2024-12-13 19:14:49.098192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.386  [2024-12-13 19:14:49.098208] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.386  [2024-12-13 19:14:49.098217] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.386  [2024-12-13 19:14:49.098226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4496 len:8 PRP1 0x0 PRP2 0x0
00:30:38.386  [2024-12-13 19:14:49.098263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.386  [2024-12-13 19:14:49.098276] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.386  [2024-12-13 19:14:49.098297] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.386  [2024-12-13 19:14:49.098307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4504 len:8 PRP1 0x0 PRP2 0x0
00:30:38.386  [2024-12-13 19:14:49.098320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.386  [2024-12-13 19:14:49.098334] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.386  [2024-12-13 19:14:49.098343] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.386  [2024-12-13 19:14:49.098353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4512 len:8 PRP1 0x0 PRP2 0x0
00:30:38.386  [2024-12-13 19:14:49.098366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.386  [2024-12-13 19:14:49.098379] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.386  [2024-12-13 19:14:49.098388] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.386  [2024-12-13 19:14:49.098398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4520 len:8 PRP1 0x0 PRP2 0x0
00:30:38.386  [2024-12-13 19:14:49.098411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.386  [2024-12-13 19:14:49.098424] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.386  [2024-12-13 19:14:49.098433] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.386  [2024-12-13 19:14:49.098443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4528 len:8 PRP1 0x0 PRP2 0x0
00:30:38.386  [2024-12-13 19:14:49.098456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.386  [2024-12-13 19:14:49.098469] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.386  [2024-12-13 19:14:49.098479] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.386  [2024-12-13 19:14:49.098489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4536 len:8 PRP1 0x0 PRP2 0x0
00:30:38.386  [2024-12-13 19:14:49.098502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.386  [2024-12-13 19:14:49.098515] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.386  [2024-12-13 19:14:49.098525] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.386  [2024-12-13 19:14:49.098540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4544 len:8 PRP1 0x0 PRP2 0x0
00:30:38.386  [2024-12-13 19:14:49.098554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.386  [2024-12-13 19:14:49.098568] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.386  [2024-12-13 19:14:49.098591] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.386  [2024-12-13 19:14:49.098616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4552 len:8 PRP1 0x0 PRP2 0x0
00:30:38.386  [2024-12-13 19:14:49.098647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.386  [2024-12-13 19:14:49.098660] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.386  [2024-12-13 19:14:49.098668] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.386  [2024-12-13 19:14:49.098677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4560 len:8 PRP1 0x0 PRP2 0x0
00:30:38.386  [2024-12-13 19:14:49.098689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.386  [2024-12-13 19:14:49.098700] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.386  [2024-12-13 19:14:49.098709] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.386  [2024-12-13 19:14:49.098718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4568 len:8 PRP1 0x0 PRP2 0x0
00:30:38.386  [2024-12-13 19:14:49.098729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.386  [2024-12-13 19:14:49.098741] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.386  [2024-12-13 19:14:49.098749] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.386  [2024-12-13 19:14:49.098758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:8 PRP1 0x0 PRP2 0x0
00:30:38.386  [2024-12-13 19:14:49.098770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.386  [2024-12-13 19:14:49.098782] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.386  [2024-12-13 19:14:49.098791] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.386  [2024-12-13 19:14:49.098800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4584 len:8 PRP1 0x0 PRP2 0x0
00:30:38.386  [2024-12-13 19:14:49.098811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.386  [2024-12-13 19:14:49.098822] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.386  [2024-12-13 19:14:49.098831] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.386  [2024-12-13 19:14:49.098840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4592 len:8 PRP1 0x0 PRP2 0x0
00:30:38.386  [2024-12-13 19:14:49.098851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.386  [2024-12-13 19:14:49.098863] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.386  [2024-12-13 19:14:49.098871] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.386  [2024-12-13 19:14:49.098880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4600 len:8 PRP1 0x0 PRP2 0x0
00:30:38.387  [2024-12-13 19:14:49.098892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.387  [2024-12-13 19:14:49.098904] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.387  [2024-12-13 19:14:49.098912] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.387  [2024-12-13 19:14:49.098922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:8 PRP1 0x0 PRP2 0x0
00:30:38.387  [2024-12-13 19:14:49.098933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.387  [2024-12-13 19:14:49.098944] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.387  [2024-12-13 19:14:49.098953] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.387  [2024-12-13 19:14:49.098973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4616 len:8 PRP1 0x0 PRP2 0x0
00:30:38.387  [2024-12-13 19:14:49.098986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.387  [2024-12-13 19:14:49.098998] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.387  [2024-12-13 19:14:49.099006] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.387  [2024-12-13 19:14:49.099016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4624 len:8 PRP1 0x0 PRP2 0x0
00:30:38.387  [2024-12-13 19:14:49.099027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.387  [2024-12-13 19:14:49.099038] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.387  [2024-12-13 19:14:49.099046] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.387  [2024-12-13 19:14:49.099055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4632 len:8 PRP1 0x0 PRP2 0x0
00:30:38.387  [2024-12-13 19:14:49.099066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.387  [2024-12-13 19:14:49.099078] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.387  [2024-12-13 19:14:49.099086] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.387  [2024-12-13 19:14:49.099095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:8 PRP1 0x0 PRP2 0x0
00:30:38.387  [2024-12-13 19:14:49.099106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.387  [2024-12-13 19:14:49.099118] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.387  [2024-12-13 19:14:49.099126] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.387  [2024-12-13 19:14:49.099135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4648 len:8 PRP1 0x0 PRP2 0x0
00:30:38.387  [2024-12-13 19:14:49.099146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.387  [2024-12-13 19:14:49.099158] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.387  [2024-12-13 19:14:49.099167] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.387  [2024-12-13 19:14:49.099176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4656 len:8 PRP1 0x0 PRP2 0x0
00:30:38.387  [2024-12-13 19:14:49.099187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.387  [2024-12-13 19:14:49.099198] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.387  [2024-12-13 19:14:49.099206] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.387  [2024-12-13 19:14:49.099215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4664 len:8 PRP1 0x0 PRP2 0x0
00:30:38.387  [2024-12-13 19:14:49.099242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.387  [2024-12-13 19:14:49.099271] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.387  [2024-12-13 19:14:49.099280] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.387  [2024-12-13 19:14:49.099299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4672 len:8 PRP1 0x0 PRP2 0x0
00:30:38.387  [2024-12-13 19:14:49.099315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.387  [2024-12-13 19:14:49.099335] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.387  [2024-12-13 19:14:49.099345] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.387  [2024-12-13 19:14:49.099360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4680 len:8 PRP1 0x0 PRP2 0x0
00:30:38.387  [2024-12-13 19:14:49.099373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.387  [2024-12-13 19:14:49.099386] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.387  [2024-12-13 19:14:49.099396] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.387  [2024-12-13 19:14:49.099406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4688 len:8 PRP1 0x0 PRP2 0x0
00:30:38.387  [2024-12-13 19:14:49.099418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.387  [2024-12-13 19:14:49.099432] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.387  [2024-12-13 19:14:49.099441] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.387  [2024-12-13 19:14:49.099451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4696 len:8 PRP1 0x0 PRP2 0x0
00:30:38.387  [2024-12-13 19:14:49.099464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.387  [2024-12-13 19:14:49.099477] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.387  [2024-12-13 19:14:49.099486] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.387  [2024-12-13 19:14:49.099496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4704 len:8 PRP1 0x0 PRP2 0x0
00:30:38.387  [2024-12-13 19:14:49.099509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.387  [2024-12-13 19:14:49.099522] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.387  [2024-12-13 19:14:49.099531] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.387  [2024-12-13 19:14:49.099542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4712 len:8 PRP1 0x0 PRP2 0x0
00:30:38.387  [2024-12-13 19:14:49.099554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.387  [2024-12-13 19:14:49.099568] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.387  [2024-12-13 19:14:49.099577] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.387  [2024-12-13 19:14:49.099587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4720 len:8 PRP1 0x0 PRP2 0x0
00:30:38.387  [2024-12-13 19:14:49.099629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.387  [2024-12-13 19:14:49.099655] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.387  [2024-12-13 19:14:49.099664] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.387  [2024-12-13 19:14:49.099673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4728 len:8 PRP1 0x0 PRP2 0x0
00:30:38.387  [2024-12-13 19:14:49.099684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.387  [2024-12-13 19:14:49.099710] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.387  [2024-12-13 19:14:49.099718] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.387  [2024-12-13 19:14:49.099727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:8 PRP1 0x0 PRP2 0x0
00:30:38.387  [2024-12-13 19:14:49.099743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.387  [2024-12-13 19:14:49.099754] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.387  [2024-12-13 19:14:49.099762] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.387  [2024-12-13 19:14:49.099776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4744 len:8 PRP1 0x0 PRP2 0x0
00:30:38.387  [2024-12-13 19:14:49.099787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.387  [2024-12-13 19:14:49.099799] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.387  [2024-12-13 19:14:49.099807] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.387  [2024-12-13 19:14:49.099815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4752 len:8 PRP1 0x0 PRP2 0x0
00:30:38.387  [2024-12-13 19:14:49.099826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.387  [2024-12-13 19:14:49.099837] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.387  [2024-12-13 19:14:49.099846] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.387  [2024-12-13 19:14:49.099854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4760 len:8 PRP1 0x0 PRP2 0x0
00:30:38.387  [2024-12-13 19:14:49.099866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.387  [2024-12-13 19:14:49.099877] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.387  [2024-12-13 19:14:49.099885] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.387  [2024-12-13 19:14:49.099894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4768 len:8 PRP1 0x0 PRP2 0x0
00:30:38.387  [2024-12-13 19:14:49.099905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.387  [2024-12-13 19:14:49.099916] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.387  [2024-12-13 19:14:49.099924] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.387  [2024-12-13 19:14:49.099932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4776 len:8 PRP1 0x0 PRP2 0x0
00:30:38.387  [2024-12-13 19:14:49.099944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.387  [2024-12-13 19:14:49.099955] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.387  [2024-12-13 19:14:49.099963] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.387  [2024-12-13 19:14:49.099972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4784 len:8 PRP1 0x0 PRP2 0x0
00:30:38.388  [2024-12-13 19:14:49.099983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.388  [2024-12-13 19:14:49.099994] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.388  [2024-12-13 19:14:49.100002] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.388  [2024-12-13 19:14:49.100011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4048 len:8 PRP1 0x0 PRP2 0x0
00:30:38.388  [2024-12-13 19:14:49.100022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.388  [2024-12-13 19:14:49.107641] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.388  [2024-12-13 19:14:49.107681] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.388  [2024-12-13 19:14:49.107715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4056 len:8 PRP1 0x0 PRP2 0x0
00:30:38.388  [2024-12-13 19:14:49.107737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.388  [2024-12-13 19:14:49.107756] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.388  [2024-12-13 19:14:49.107769] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.388  [2024-12-13 19:14:49.107784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4064 len:8 PRP1 0x0 PRP2 0x0
00:30:38.388  [2024-12-13 19:14:49.107801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.388  [2024-12-13 19:14:49.107819] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.388  [2024-12-13 19:14:49.107831] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.388  [2024-12-13 19:14:49.107845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4072 len:8 PRP1 0x0 PRP2 0x0
00:30:38.388  [2024-12-13 19:14:49.107861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.388  [2024-12-13 19:14:49.107879] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.388  [2024-12-13 19:14:49.107892] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.388  [2024-12-13 19:14:49.107905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4080 len:8 PRP1 0x0 PRP2 0x0
00:30:38.388  [2024-12-13 19:14:49.107922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.388  [2024-12-13 19:14:49.107940] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.388  [2024-12-13 19:14:49.107952] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.388  [2024-12-13 19:14:49.107966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4088 len:8 PRP1 0x0 PRP2 0x0
00:30:38.388  [2024-12-13 19:14:49.107982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.388  [2024-12-13 19:14:49.108000] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.388  [2024-12-13 19:14:49.108012] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.388  [2024-12-13 19:14:49.108026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4096 len:8 PRP1 0x0 PRP2 0x0
00:30:38.388  [2024-12-13 19:14:49.108043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.388  [2024-12-13 19:14:49.108060] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.388  [2024-12-13 19:14:49.108073] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.388  [2024-12-13 19:14:49.108086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4104 len:8 PRP1 0x0 PRP2 0x0
00:30:38.388  [2024-12-13 19:14:49.108103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.388  [2024-12-13 19:14:49.108120] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.388  [2024-12-13 19:14:49.108133] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.388  [2024-12-13 19:14:49.108147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4112 len:8 PRP1 0x0 PRP2 0x0
00:30:38.388  [2024-12-13 19:14:49.108164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.388  [2024-12-13 19:14:49.108181] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.388  [2024-12-13 19:14:49.108201] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.388  [2024-12-13 19:14:49.108216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4120 len:8 PRP1 0x0 PRP2 0x0
00:30:38.388  [2024-12-13 19:14:49.108266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.388  [2024-12-13 19:14:49.108284] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.388  [2024-12-13 19:14:49.108297] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.388  [2024-12-13 19:14:49.108311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4128 len:8 PRP1 0x0 PRP2 0x0
00:30:38.388  [2024-12-13 19:14:49.108327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.388  [2024-12-13 19:14:49.108344] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.388  [2024-12-13 19:14:49.108356] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.388  [2024-12-13 19:14:49.108370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4136 len:8 PRP1 0x0 PRP2 0x0
00:30:38.388  [2024-12-13 19:14:49.108386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.388  [2024-12-13 19:14:49.108404] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.388  [2024-12-13 19:14:49.108416] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.388  [2024-12-13 19:14:49.108429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4144 len:8 PRP1 0x0 PRP2 0x0
00:30:38.388  [2024-12-13 19:14:49.108447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.388  [2024-12-13 19:14:49.108468] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.388  [2024-12-13 19:14:49.108480] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.388  [2024-12-13 19:14:49.108493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4152 len:8 PRP1 0x0 PRP2 0x0
00:30:38.388  [2024-12-13 19:14:49.108510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.388  [2024-12-13 19:14:49.108527] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.388  [2024-12-13 19:14:49.108539] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.388  [2024-12-13 19:14:49.108552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4160 len:8 PRP1 0x0 PRP2 0x0
00:30:38.388  [2024-12-13 19:14:49.108569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.388  [2024-12-13 19:14:49.108586] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.388  [2024-12-13 19:14:49.108598] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.388  [2024-12-13 19:14:49.108611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4792 len:8 PRP1 0x0 PRP2 0x0
00:30:38.388  [2024-12-13 19:14:49.108628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.388  [2024-12-13 19:14:49.108645] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.388  [2024-12-13 19:14:49.108657] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.388  [2024-12-13 19:14:49.108670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4800 len:8 PRP1 0x0 PRP2 0x0
00:30:38.388  [2024-12-13 19:14:49.108687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.388  [2024-12-13 19:14:49.108713] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.388  [2024-12-13 19:14:49.108726] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.388  [2024-12-13 19:14:49.108739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4808 len:8 PRP1 0x0 PRP2 0x0
00:30:38.388  [2024-12-13 19:14:49.108756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.388  [2024-12-13 19:14:49.108773] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.388  [2024-12-13 19:14:49.108785] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.388  [2024-12-13 19:14:49.108799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4816 len:8 PRP1 0x0 PRP2 0x0
00:30:38.388  [2024-12-13 19:14:49.108815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.388  [2024-12-13 19:14:49.108832] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.388  [2024-12-13 19:14:49.108845] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.388  [2024-12-13 19:14:49.108858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4824 len:8 PRP1 0x0 PRP2 0x0
00:30:38.388  [2024-12-13 19:14:49.108875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.388  [2024-12-13 19:14:49.108892] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.388  [2024-12-13 19:14:49.108904] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.388  [2024-12-13 19:14:49.108917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4832 len:8 PRP1 0x0 PRP2 0x0
00:30:38.388  [2024-12-13 19:14:49.108934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.388  [2024-12-13 19:14:49.108952] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.388  [2024-12-13 19:14:49.108964] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.388  [2024-12-13 19:14:49.108977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4840 len:8 PRP1 0x0 PRP2 0x0
00:30:38.388  [2024-12-13 19:14:49.108993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.388  [2024-12-13 19:14:49.109011] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.388  [2024-12-13 19:14:49.109023] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.388  [2024-12-13 19:14:49.109036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4848 len:8 PRP1 0x0 PRP2 0x0
00:30:38.388  [2024-12-13 19:14:49.109053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.388  [2024-12-13 19:14:49.109070] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.388  [2024-12-13 19:14:49.109083] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.389  [2024-12-13 19:14:49.109096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4856 len:8 PRP1 0x0 PRP2 0x0
00:30:38.389  [2024-12-13 19:14:49.109114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.389  [2024-12-13 19:14:49.109131] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.389  [2024-12-13 19:14:49.109143] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.389  [2024-12-13 19:14:49.109156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4864 len:8 PRP1 0x0 PRP2 0x0
00:30:38.389  [2024-12-13 19:14:49.109180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.389  [2024-12-13 19:14:49.109199] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.389  [2024-12-13 19:14:49.109211] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.389  [2024-12-13 19:14:49.109261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4872 len:8 PRP1 0x0 PRP2 0x0
00:30:38.389  [2024-12-13 19:14:49.109289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.389  [2024-12-13 19:14:49.109307] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.389  [2024-12-13 19:14:49.109319] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.389  [2024-12-13 19:14:49.109333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4880 len:8 PRP1 0x0 PRP2 0x0
00:30:38.389  [2024-12-13 19:14:49.109349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.389  [2024-12-13 19:14:49.109367] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.389  [2024-12-13 19:14:49.109379] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.389  [2024-12-13 19:14:49.109393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4888 len:8 PRP1 0x0 PRP2 0x0
00:30:38.389  [2024-12-13 19:14:49.109410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.389  [2024-12-13 19:14:49.109428] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.389  [2024-12-13 19:14:49.109440] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.389  [2024-12-13 19:14:49.109453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 len:8 PRP1 0x0 PRP2 0x0
00:30:38.389  [2024-12-13 19:14:49.109470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.389  [2024-12-13 19:14:49.109488] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.389  [2024-12-13 19:14:49.109500] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.389  [2024-12-13 19:14:49.109513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4904 len:8 PRP1 0x0 PRP2 0x0
00:30:38.389  [2024-12-13 19:14:49.109530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.389  [2024-12-13 19:14:49.109547] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.389  [2024-12-13 19:14:49.109560] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.389  [2024-12-13 19:14:49.109573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4912 len:8 PRP1 0x0 PRP2 0x0
00:30:38.389  [2024-12-13 19:14:49.109589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.389  [2024-12-13 19:14:49.109613] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.389  [2024-12-13 19:14:49.109626] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.389  [2024-12-13 19:14:49.109639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4920 len:8 PRP1 0x0 PRP2 0x0
00:30:38.389  [2024-12-13 19:14:49.109657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.389  [2024-12-13 19:14:49.109674] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.389  [2024-12-13 19:14:49.109686] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.389  [2024-12-13 19:14:49.109708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4928 len:8 PRP1 0x0 PRP2 0x0
00:30:38.389  [2024-12-13 19:14:49.109752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.389  [2024-12-13 19:14:49.109772] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:38.389  [2024-12-13 19:14:49.109784] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:38.389  [2024-12-13 19:14:49.109797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4936 len:8 PRP1 0x0 PRP2 0x0
00:30:38.389  [2024-12-13 19:14:49.109814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:38.389  [2024-12-13 19:14:49.109964] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e29c0 (9): Bad file descriptor
00:30:38.389  [2024-12-13 19:14:49.112440] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:38.389  [2024-12-13 19:14:49.112926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:38.389  [2024-12-13 19:14:49.112968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e29c0 with addr=10.0.0.3, port=4421
00:30:38.389  [2024-12-13 19:14:49.112991] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e29c0 is same with the state(6) to be set
00:30:38.389  [2024-12-13 19:14:49.113758] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e29c0 (9): Bad file descriptor
00:30:38.389  [2024-12-13 19:14:49.114344] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:38.389  [2024-12-13 19:14:49.114380] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:38.389  [2024-12-13 19:14:49.114400] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:38.389  [2024-12-13 19:14:49.114420] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:38.389  [2024-12-13 19:14:49.114439] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
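The burst of manually completed commands above is easier to audit in aggregate than line by line. A minimal sketch, assuming the console output has been captured to a file (the path multipath.log is hypothetical, not something this run produces), that counts the aborted READ/WRITE commands and reports the LBA range they covered:

    # Hypothetical helper: summarize "ABORTED - SQ DELETION" completions from a saved log.
    LOG=multipath.log   # assumption: console output was redirected here
    grep 'nvme_io_qpair_print_command' "$LOG" \
      | awk '{
          for (i = 1; i <= NF; i++) {
            if ($i == "READ" || $i == "WRITE") op = $i
            if ($i ~ /^lba:/) { split($i, a, ":"); lba = a[2] + 0 }
          }
          count[op]++
          if (!(op in min) || lba < min[op]) min[op] = lba
          if (lba > max[op]) max[op] = lba
        }
        END {
          for (op in count)
            printf "%-5s commands aborted: %d (lba %d..%d)\n", op, count[op], min[op], max[op]
        }'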
00:30:38.389       7546.64 IOPS,    29.48 MiB/s
[2024-12-13T19:15:10.213Z]      7601.62 IOPS,    29.69 MiB/s
[2024-12-13T19:15:10.213Z]      7668.24 IOPS,    29.95 MiB/s
[2024-12-13T19:15:10.213Z]      7729.03 IOPS,    30.19 MiB/s
[2024-12-13T19:15:10.213Z]      7787.55 IOPS,    30.42 MiB/s
[2024-12-13T19:15:10.213Z]      7839.32 IOPS,    30.62 MiB/s
[2024-12-13T19:15:10.213Z]      7892.14 IOPS,    30.83 MiB/s
[2024-12-13T19:15:10.213Z]      7926.77 IOPS,    30.96 MiB/s
[2024-12-13T19:15:10.213Z]      7972.11 IOPS,    31.14 MiB/s
[2024-12-13T19:15:10.213Z]      8017.02 IOPS,    31.32 MiB/s
[2024-12-13T19:15:10.213Z] [2024-12-13 19:14:59.191564] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:30:38.389       8044.67 IOPS,    31.42 MiB/s
[2024-12-13T19:15:10.213Z]      8064.26 IOPS,    31.50 MiB/s
[2024-12-13T19:15:10.213Z]      8076.42 IOPS,    31.55 MiB/s
[2024-12-13T19:15:10.213Z]      8085.08 IOPS,    31.58 MiB/s
[2024-12-13T19:15:10.213Z]      8091.24 IOPS,    31.61 MiB/s
[2024-12-13T19:15:10.213Z]      8112.69 IOPS,    31.69 MiB/s
[2024-12-13T19:15:10.213Z]      8129.38 IOPS,    31.76 MiB/s
[2024-12-13T19:15:10.213Z]      8147.09 IOPS,    31.82 MiB/s
[2024-12-13T19:15:10.213Z]      8161.44 IOPS,    31.88 MiB/s
[2024-12-13T19:15:10.213Z]      8179.36 IOPS,    31.95 MiB/s
[2024-12-13T19:15:10.213Z] Received shutdown signal, test time was about 55.341940 seconds
00:30:38.389  
00:30:38.389                                                                                                  Latency(us)
00:30:38.389  
[2024-12-13T19:15:10.213Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:30:38.389  Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:30:38.389  	 Verification LBA range: start 0x0 length 0x4000
00:30:38.389  	 Nvme0n1             :      55.34    8179.14      31.95       0.00     0.00   15624.10    1705.43 7046430.72
00:30:38.389  
[2024-12-13T19:15:10.213Z]  ===================================================================================================================
00:30:38.389  
[2024-12-13T19:15:10.213Z]  Total                       :               8179.14      31.95       0.00     0.00   15624.10    1705.43 7046430.72
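The summary table prints a single "Total" row per run. A small sketch (again assuming a hypothetical saved log file) that pulls the aggregate IOPS, throughput, and average latency out of that row; the field positions follow the column order shown in the header above:

    # Hypothetical helper: extract aggregate numbers from the bdevperf summary row.
    LOG=multipath.log   # assumption: same captured console output as above
    grep -E 'Total +:' "$LOG" | awk '{
      # last seven fields are: IOPS  MiB/s  Fail/s  TO/s  Average  min  max
      printf "IOPS=%s  MiB/s=%s  avg_latency_us=%s\n", $(NF-6), $(NF-5), $(NF-2)
    }'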
00:30:38.389   19:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:30:38.389   19:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT
00:30:38.389   19:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:30:38.389   19:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini
00:30:38.389   19:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # nvmfcleanup
00:30:38.389   19:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync
00:30:38.389   19:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:30:38.389   19:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e
00:30:38.389   19:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20}
00:30:38.389   19:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:30:38.389  rmmod nvme_tcp
00:30:38.389  rmmod nvme_fabrics
00:30:38.389  rmmod nvme_keyring
00:30:38.390   19:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:30:38.390   19:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e
00:30:38.390   19:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0
00:30:38.390   19:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@517 -- # '[' -n 117589 ']'
00:30:38.390   19:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # killprocess 117589
00:30:38.390   19:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 117589 ']'
00:30:38.390   19:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 117589
00:30:38.390    19:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname
00:30:38.390   19:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:30:38.390    19:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 117589
00:30:38.390   19:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:30:38.390   19:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:30:38.390  killing process with pid 117589
00:30:38.390   19:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 117589'
00:30:38.390   19:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 117589
00:30:38.390   19:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 117589
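The killprocess helper traced above checks what the pid actually names before signalling it (refusing to touch a sudo wrapper) and then waits for the process to exit. A condensed sketch of that flow; the function name kill_spdk_process is made up for illustration, and wait only succeeds here because the target was started by the same shell, as it is in the real test:

    # Condensed sketch of the traced killprocess flow.
    kill_spdk_process() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0              # already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")
        [[ $name == sudo ]] && return 1                     # never signal a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true                     # works for children of this shell
    }

    kill_spdk_process 117589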
00:30:38.390   19:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:30:38.390   19:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:30:38.390   19:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:30:38.390   19:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr
00:30:38.390   19:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-save
00:30:38.390   19:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:30:38.390   19:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-restore
00:30:38.390   19:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:30:38.390   19:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:30:38.390   19:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:30:38.649   19:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:30:38.649   19:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:30:38.649   19:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:30:38.649   19:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:30:38.649   19:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:30:38.649   19:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:30:38.649   19:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:30:38.649   19:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:30:38.649   19:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:30:38.649   19:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:30:38.649   19:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:30:38.649   19:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:30:38.649   19:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns
00:30:38.649   19:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:30:38.649   19:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:30:38.649    19:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:38.649   19:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0
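The nvmf_veth_fini / remove_spdk_ns sequence above boils down to detaching every bridge port, deleting the bridge and the veth pairs, and removing the target namespace. A condensed sketch of the same steps, using the interface names from the trace; the error suppression is an addition so the teardown stays harmless when a device is already gone:

    # Condensed teardown mirroring the traced nvmf_veth_fini / remove_spdk_ns steps.
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster 2>/dev/null || true     # detach from nvmf_br
        ip link set "$dev" down     2>/dev/null || true
    done
    ip link delete nvmf_br type bridge 2>/dev/null || true
    ip link delete nvmf_init_if  2>/dev/null || true        # deleting one veth end removes its peer
    ip link delete nvmf_init_if2 2>/dev/null || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if  2>/dev/null || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 2>/dev/null || true
    ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true    # remove_spdk_ns equivalent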
00:30:38.649  
00:30:38.649  real	1m1.124s
00:30:38.649  user	2m53.551s
00:30:38.649  sys	0m13.280s
00:30:38.649   19:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable
00:30:38.649   19:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x
00:30:38.649  ************************************
00:30:38.649  END TEST nvmf_host_multipath
00:30:38.649  ************************************
00:30:38.649   19:15:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp
00:30:38.649   19:15:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:30:38.649   19:15:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:30:38.649   19:15:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:30:38.649  ************************************
00:30:38.649  START TEST nvmf_timeout
00:30:38.649  ************************************
00:30:38.649   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp
00:30:38.908  * Looking for test storage...
00:30:38.908  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
00:30:38.908    19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:30:38.908     19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # lcov --version
00:30:38.908     19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:30:38.908    19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:30:38.908    19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:30:38.908    19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l
00:30:38.908    19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l
00:30:38.908    19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-:
00:30:38.908    19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1
00:30:38.908    19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-:
00:30:38.908    19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2
00:30:38.908    19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<'
00:30:38.908    19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2
00:30:38.908    19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1
00:30:38.908    19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:30:38.908    19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in
00:30:38.908    19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1
00:30:38.908    19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 ))
00:30:38.908    19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:30:38.908     19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1
00:30:38.908     19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1
00:30:38.908     19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:30:38.908     19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1
00:30:38.908    19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1
00:30:38.908     19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2
00:30:38.908     19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2
00:30:38.908     19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:30:38.908     19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2
00:30:38.908    19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2
00:30:38.908    19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:30:38.908    19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:30:38.908    19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0
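The lcov version gate traced above is a plain field-by-field comparison after splitting on ".", "-" and ":". A standalone rendering of the same "less than" path, under the assumption of purely numeric version components; the function name ver_lt is hypothetical, the real helper is cmp_versions in scripts/common.sh:

    # Standalone sketch of the traced "<" comparison (hypothetical name ver_lt).
    ver_lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        local v
        for (( v = 0; v < len; v++ )); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing components count as 0
            (( a > b )) && return 1
            (( a < b )) && return 0
        done
        return 1    # equal versions are not "less than"
    }

    ver_lt 1.15 2 && echo "lcov older than 2.x"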
00:30:38.908    19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:30:38.908    19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:30:38.908  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:38.908  		--rc genhtml_branch_coverage=1
00:30:38.908  		--rc genhtml_function_coverage=1
00:30:38.908  		--rc genhtml_legend=1
00:30:38.908  		--rc geninfo_all_blocks=1
00:30:38.908  		--rc geninfo_unexecuted_blocks=1
00:30:38.908  		
00:30:38.908  		'
00:30:38.908    19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:30:38.908  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:38.908  		--rc genhtml_branch_coverage=1
00:30:38.908  		--rc genhtml_function_coverage=1
00:30:38.908  		--rc genhtml_legend=1
00:30:38.908  		--rc geninfo_all_blocks=1
00:30:38.908  		--rc geninfo_unexecuted_blocks=1
00:30:38.908  		
00:30:38.908  		'
00:30:38.909    19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:30:38.909  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:38.909  		--rc genhtml_branch_coverage=1
00:30:38.909  		--rc genhtml_function_coverage=1
00:30:38.909  		--rc genhtml_legend=1
00:30:38.909  		--rc geninfo_all_blocks=1
00:30:38.909  		--rc geninfo_unexecuted_blocks=1
00:30:38.909  		
00:30:38.909  		'
00:30:38.909    19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:30:38.909  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:38.909  		--rc genhtml_branch_coverage=1
00:30:38.909  		--rc genhtml_function_coverage=1
00:30:38.909  		--rc genhtml_legend=1
00:30:38.909  		--rc geninfo_all_blocks=1
00:30:38.909  		--rc geninfo_unexecuted_blocks=1
00:30:38.909  		
00:30:38.909  		'
00:30:38.909   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:30:38.909     19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s
00:30:38.909    19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:30:38.909    19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:30:38.909    19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:30:38.909    19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:30:38.909    19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:30:38.909    19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:30:38.909    19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:30:38.909    19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:30:38.909    19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:30:38.909     19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:30:38.909    19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:30:38.909    19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:30:38.909    19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:30:38.909    19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:30:38.909    19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:30:38.909    19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:30:38.909    19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:30:38.909     19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob
00:30:38.909     19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:30:38.909     19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:30:38.909     19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:30:38.909      19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:38.909      19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:38.909      19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:38.909      19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH
00:30:38.909      19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:38.909    19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0
00:30:38.909    19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:30:38.909    19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:30:38.909    19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:30:38.909    19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:30:38.909    19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:30:38.909    19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:30:38.909  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:30:38.909    19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:30:38.909    19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:30:38.909    19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0
00:30:38.909   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64
00:30:38.909   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:30:38.909   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:30:38.909   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh
00:30:38.909   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:30:38.909   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit
00:30:38.909   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:30:38.909   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:30:38.909   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # prepare_net_devs
00:30:38.909   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no
00:30:38.909   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns
00:30:38.909   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:30:38.909   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:30:38.909    19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:38.909   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:30:38.909   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:30:38.909   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:30:38.909   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:30:38.909   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:30:38.909   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init
00:30:38.909   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:30:38.909   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:30:38.909   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:30:38.909   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:30:38.909   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:30:38.909   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:30:38.909   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:30:38.909   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:30:38.909   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:30:38.909   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:30:38.909   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:30:38.909   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:30:38.909   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:30:38.909   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:30:38.909   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:30:38.909   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:30:38.909   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:30:38.909  Cannot find device "nvmf_init_br"
00:30:38.909   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true
00:30:38.909   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:30:38.909  Cannot find device "nvmf_init_br2"
00:30:38.909   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true
00:30:38.909   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:30:38.909  Cannot find device "nvmf_tgt_br"
00:30:38.909   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true
00:30:38.909   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:30:38.909  Cannot find device "nvmf_tgt_br2"
00:30:38.909   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true
00:30:38.909   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:30:38.909  Cannot find device "nvmf_init_br"
00:30:38.909   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true
00:30:38.909   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:30:38.909  Cannot find device "nvmf_init_br2"
00:30:38.910   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true
00:30:38.910   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:30:39.168  Cannot find device "nvmf_tgt_br"
00:30:39.168   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true
00:30:39.168   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:30:39.168  Cannot find device "nvmf_tgt_br2"
00:30:39.168   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true
00:30:39.168   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:30:39.168  Cannot find device "nvmf_br"
00:30:39.168   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true
00:30:39.168   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:30:39.168  Cannot find device "nvmf_init_if"
00:30:39.168   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true
00:30:39.168   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:30:39.168  Cannot find device "nvmf_init_if2"
00:30:39.168   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true
00:30:39.168   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:30:39.168  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:30:39.168   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true
00:30:39.168   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:30:39.168  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:30:39.168   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true
00:30:39.168   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:30:39.168   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:30:39.168   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:30:39.168   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:30:39.168   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:30:39.168   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:30:39.168   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:30:39.168   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:30:39.168   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:30:39.168   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:30:39.168   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:30:39.168   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:30:39.169   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:30:39.169   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:30:39.169   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:30:39.169   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:30:39.169   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:30:39.169   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:30:39.169   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:30:39.169   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:30:39.169   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:30:39.169   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:30:39.169   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:30:39.427   19:15:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:30:39.427   19:15:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:30:39.427   19:15:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
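The nvmf_veth_init setup traced above builds one namespace, two initiator-side veth pairs, two target-side veth pairs, and a bridge joining the host-side ends. A condensed view of the same steps, with the addresses and interface names taken directly from the trace:

    # Condensed view of the traced topology setup.
    ip netns add nvmf_tgt_ns_spdk

    ip link add nvmf_init_if  type veth peer name nvmf_init_br     # initiator side
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br      # target side
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if                       # initiator IPs
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target IPs
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br                          # enslave host-side ends
    done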
00:30:39.427   19:15:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:30:39.427   19:15:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:30:39.427   19:15:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:30:39.427   19:15:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:30:39.427   19:15:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:30:39.427   19:15:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
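The ipts wrapper shown above tags every rule it inserts with an "SPDK_NVMF:" comment, which is what makes the earlier cleanup (iptables-save | grep -v SPDK_NVMF | iptables-restore, seen in nvmftestfini) safe to run without disturbing unrelated rules. A minimal sketch of that pair; the real helper lives in nvmf/common.sh, so this is only an approximation of its behavior:

    # Insert a rule and record it with an SPDK_NVMF comment, mirroring the traced ipts helper.
    ipts() {
        iptables "$@" -m comment --comment "SPDK_NVMF:$*"
    }

    ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

    # Cleanup: drop only the tagged rules, leave everything else in place.
    iptables-save | grep -v SPDK_NVMF | iptables-restore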
00:30:39.428   19:15:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:30:39.428  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:30:39.428  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.089 ms
00:30:39.428  
00:30:39.428  --- 10.0.0.3 ping statistics ---
00:30:39.428  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:30:39.428  rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms
00:30:39.428   19:15:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:30:39.428  PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:30:39.428  64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms
00:30:39.428  
00:30:39.428  --- 10.0.0.4 ping statistics ---
00:30:39.428  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:30:39.428  rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms
00:30:39.428   19:15:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:30:39.428  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:30:39.428  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms
00:30:39.428  
00:30:39.428  --- 10.0.0.1 ping statistics ---
00:30:39.428  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:30:39.428  rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms
00:30:39.428   19:15:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:30:39.428  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:30:39.428  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms
00:30:39.428  
00:30:39.428  --- 10.0.0.2 ping statistics ---
00:30:39.428  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:30:39.428  rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms
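The four pings above verify both directions of the host-to-namespace path before the target is started. A compact equivalent of that check, with a one-second timeout added as an assumption:

    # Bidirectional reachability check between host and target namespace.
    ping -c 1 -W 1 10.0.0.3 >/dev/null && ping -c 1 -W 1 10.0.0.4 >/dev/null \
        || { echo "host -> target namespace unreachable"; exit 1; }
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 -W 1 10.0.0.1 >/dev/null \
        && ip netns exec nvmf_tgt_ns_spdk ping -c 1 -W 1 10.0.0.2 >/dev/null \
        || { echo "target namespace -> host unreachable"; exit 1; }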
00:30:39.428   19:15:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:30:39.428   19:15:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@461 -- # return 0
00:30:39.428   19:15:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:30:39.428   19:15:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:30:39.428   19:15:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:30:39.428   19:15:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:30:39.428   19:15:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:30:39.428   19:15:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:30:39.428   19:15:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:30:39.428   19:15:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3
00:30:39.428   19:15:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:30:39.428   19:15:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable
00:30:39.428   19:15:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:30:39.428   19:15:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # nvmfpid=118980
00:30:39.428   19:15:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:30:39.428   19:15:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # waitforlisten 118980
00:30:39.428   19:15:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 118980 ']'
00:30:39.428   19:15:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:39.428   19:15:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100
00:30:39.428  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:39.428   19:15:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:30:39.428   19:15:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable
00:30:39.428   19:15:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:30:39.428  [2024-12-13 19:15:11.171500] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:30:39.428  [2024-12-13 19:15:11.171600] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:30:39.686  [2024-12-13 19:15:11.318769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:30:39.686  [2024-12-13 19:15:11.356948] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:30:39.686  [2024-12-13 19:15:11.356997] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:30:39.686  [2024-12-13 19:15:11.357006] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:30:39.686  [2024-12-13 19:15:11.357013] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:30:39.686  [2024-12-13 19:15:11.357019] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:30:39.686  [2024-12-13 19:15:11.358307] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:30:39.686  [2024-12-13 19:15:11.358315] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:30:40.622   19:15:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:30:40.622   19:15:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0
00:30:40.622   19:15:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:30:40.622   19:15:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@732 -- # xtrace_disable
00:30:40.622   19:15:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:30:40.622   19:15:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:30:40.622   19:15:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
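nvmfappstart launches the target inside the nvmf_tgt_ns_spdk namespace, waits for its RPC socket, and arms the cleanup traps seen above. A condensed sketch of that step (paths and arguments as they appear in this run; waitforlisten is the harness helper that polls until /var/tmp/spdk.sock answers):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &   # -m 0x3: reactors on cores 0-1; -e 0xFFFF: all tracepoint groups
    nvmfpid=$!
    waitforlisten "$nvmfpid"   # do not issue RPCs until the app listens on /var/tmp/spdk.sock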
00:30:40.622   19:15:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:30:40.622  [2024-12-13 19:15:12.387901] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:30:40.622   19:15:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:30:40.881  Malloc0
00:30:40.881   19:15:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:30:41.140   19:15:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:30:41.398   19:15:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:30:41.657  [2024-12-13 19:15:13.323176] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
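With the target listening for RPCs, host/timeout.sh provisions it: a TCP transport, a RAM-backed Malloc bdev (64 MB, 512-byte blocks), a subsystem, a namespace, and a listener on 10.0.0.3:4420 (confirmed by the notice above). The same sequence, condensed (rpc.py as invoked from the SPDK repo in this run):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420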
00:30:41.657   19:15:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f
00:30:41.657   19:15:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=119070
00:30:41.657   19:15:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 119070 /var/tmp/bdevperf.sock
00:30:41.657   19:15:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 119070 ']'
00:30:41.657   19:15:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:30:41.657   19:15:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100
00:30:41.657   19:15:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:30:41.657  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:30:41.657   19:15:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable
00:30:41.657   19:15:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:30:41.657  [2024-12-13 19:15:13.389849] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:30:41.657  [2024-12-13 19:15:13.389956] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119070 ]
00:30:41.916  [2024-12-13 19:15:13.537596] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:41.916  [2024-12-13 19:15:13.575602] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:30:41.916   19:15:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:30:41.916   19:15:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0
00:30:41.916   19:15:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:30:42.175   19:15:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
00:30:42.742  NVMe0n1
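The initiator side is a separate bdevperf process pinned to core 2 (-m 0x4) and driven over its own RPC socket. bdev_nvme_set_options -r -1 sets the retry count to -1 (retry indefinitely at the bdev layer), and the controller is attached with a 2-second reconnect delay and a 5-second ctrlr-loss timeout; these are the knobs the timeout test exercises. Condensed (the -z flag keeps bdevperf idle until perform_tests is called later):

    bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &   # qd 128, 4 KiB verify I/O, 10 s run
    bdevperf_pid=$!
    waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2   # exposes bdev NVMe0n1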
00:30:42.742   19:15:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=119100
00:30:42.742   19:15:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:30:42.742   19:15:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1
00:30:42.742  Running I/O for 10 seconds...
00:30:43.678   19:15:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
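Because bdevperf was started with -z, I/O begins only when the perform_tests RPC is issued; one second into the 10-second verify run the script removes the 10.0.0.3:4420 listener out from under the live connection. The recv-state errors and the long run of "ABORTED - SQ DELETION" completions that follow are the target tearing down the queue pair; the host side then falls back on the reconnect-delay/ctrlr-loss timers configured above. Condensed:

    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &   # start the verify workload
    rpc_pid=$!
    sleep 1
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420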
00:30:43.939       9823.00 IOPS,    38.37 MiB/s
00:30:43.939  [2024-12-13 19:15:15.514339] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd58630 is same with the state(6) to be set
00:30:43.940  (the same recv-state error is logged 70 times in total for tqpair=0xd58630 between 19:15:15.514339 and 19:15:15.515069)
00:30:43.940  [2024-12-13 19:15:15.515401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:89208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.940  [2024-12-13 19:15:15.515431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.940  [2024-12-13 19:15:15.515451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:89216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.940  [2024-12-13 19:15:15.515462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.940  [2024-12-13 19:15:15.515473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:89224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.940  [2024-12-13 19:15:15.515483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.940  [2024-12-13 19:15:15.515494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:89232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.940  [2024-12-13 19:15:15.515503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.940  [2024-12-13 19:15:15.515514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:89240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.940  [2024-12-13 19:15:15.515522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.940  [2024-12-13 19:15:15.515533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:89248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.940  [2024-12-13 19:15:15.515542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.940  [2024-12-13 19:15:15.515553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:89256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.940  [2024-12-13 19:15:15.515562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.940  [2024-12-13 19:15:15.515573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:89264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.940  [2024-12-13 19:15:15.515582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.940  [2024-12-13 19:15:15.515593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:89272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.940  [2024-12-13 19:15:15.515602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.940  [2024-12-13 19:15:15.515613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:89280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.940  [2024-12-13 19:15:15.515637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.940  [2024-12-13 19:15:15.515647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:89288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.940  [2024-12-13 19:15:15.515655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.940  [2024-12-13 19:15:15.515665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:89296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.940  [2024-12-13 19:15:15.515674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.940  [2024-12-13 19:15:15.515684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:89304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.940  [2024-12-13 19:15:15.515692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.940  [2024-12-13 19:15:15.515703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:89312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.940  [2024-12-13 19:15:15.515711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.940  [2024-12-13 19:15:15.515721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:89320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.940  [2024-12-13 19:15:15.515729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.940  [2024-12-13 19:15:15.515739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:89328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.940  [2024-12-13 19:15:15.515748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.940  [2024-12-13 19:15:15.515759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:89336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.940  [2024-12-13 19:15:15.515768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.940  [2024-12-13 19:15:15.515779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:89344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.940  [2024-12-13 19:15:15.515787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.940  [2024-12-13 19:15:15.515797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:89352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.940  [2024-12-13 19:15:15.515806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.941  [2024-12-13 19:15:15.515816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:89360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.941  [2024-12-13 19:15:15.515824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.941  [2024-12-13 19:15:15.515834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:89368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.941  [2024-12-13 19:15:15.515842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.941  [2024-12-13 19:15:15.515852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:89376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.941  [2024-12-13 19:15:15.515861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.941  [2024-12-13 19:15:15.515871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:89384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.941  [2024-12-13 19:15:15.515879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.941  [2024-12-13 19:15:15.515890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:89392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.941  [2024-12-13 19:15:15.515898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.941  [2024-12-13 19:15:15.515908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:89400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.941  [2024-12-13 19:15:15.515917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.941  [2024-12-13 19:15:15.515927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:89408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.941  [2024-12-13 19:15:15.515935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.941  [2024-12-13 19:15:15.515945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:89416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.941  [2024-12-13 19:15:15.515954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.941  [2024-12-13 19:15:15.515964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:89424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.941  [2024-12-13 19:15:15.515972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.941  [2024-12-13 19:15:15.515982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:89432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.941  [2024-12-13 19:15:15.515990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.941  [2024-12-13 19:15:15.516000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:89440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.941  [2024-12-13 19:15:15.516009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.941  [2024-12-13 19:15:15.516019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:89448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.941  [2024-12-13 19:15:15.516027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.941  [2024-12-13 19:15:15.516037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:89456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.941  [2024-12-13 19:15:15.516046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.941  [2024-12-13 19:15:15.516057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:89464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.941  [2024-12-13 19:15:15.516066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.941  [2024-12-13 19:15:15.516076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:89472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.941  [2024-12-13 19:15:15.516085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.941  [2024-12-13 19:15:15.516095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:89480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.941  [2024-12-13 19:15:15.516103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.941  [2024-12-13 19:15:15.516114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:89488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.941  [2024-12-13 19:15:15.516122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.941  [2024-12-13 19:15:15.516133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:89496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.941  [2024-12-13 19:15:15.516141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.941  [2024-12-13 19:15:15.516151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:89504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.941  [2024-12-13 19:15:15.516160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.941  [2024-12-13 19:15:15.516170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:89512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.941  [2024-12-13 19:15:15.516178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.941  [2024-12-13 19:15:15.516189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:89520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.941  [2024-12-13 19:15:15.516197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.941  [2024-12-13 19:15:15.516207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:89528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.941  [2024-12-13 19:15:15.516224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.941  [2024-12-13 19:15:15.516262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:89536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.941  [2024-12-13 19:15:15.516274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.941  [2024-12-13 19:15:15.516285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:89544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.941  [2024-12-13 19:15:15.516294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.941  [2024-12-13 19:15:15.516305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:89552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.941  [2024-12-13 19:15:15.516314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.941  [2024-12-13 19:15:15.516325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:89560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.941  [2024-12-13 19:15:15.516334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.941  [2024-12-13 19:15:15.516344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:89568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.941  [2024-12-13 19:15:15.516353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.941  [2024-12-13 19:15:15.516363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:89576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.941  [2024-12-13 19:15:15.516372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.941  [2024-12-13 19:15:15.516385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:89664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:43.941  [2024-12-13 19:15:15.516394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.941  [2024-12-13 19:15:15.516405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:89672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:43.941  [2024-12-13 19:15:15.516414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.941  [2024-12-13 19:15:15.516426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:89680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:43.941  [2024-12-13 19:15:15.516435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.941  [2024-12-13 19:15:15.516446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:89688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:43.941  [2024-12-13 19:15:15.516455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.941  [2024-12-13 19:15:15.516465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:89696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:43.941  [2024-12-13 19:15:15.516474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.941  [2024-12-13 19:15:15.516484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:89704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:43.941  [2024-12-13 19:15:15.516493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.941  [2024-12-13 19:15:15.516503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:89712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:43.941  [2024-12-13 19:15:15.516512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.941  [2024-12-13 19:15:15.516523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:89720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:43.941  [2024-12-13 19:15:15.516531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.941  [2024-12-13 19:15:15.516542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:89728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:43.941  [2024-12-13 19:15:15.516550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.941  [2024-12-13 19:15:15.516561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:89736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:43.941  [2024-12-13 19:15:15.516570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.941  [2024-12-13 19:15:15.516581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:43.941  [2024-12-13 19:15:15.516590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.941  [2024-12-13 19:15:15.516600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:89752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:43.942  [2024-12-13 19:15:15.516624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.942  [2024-12-13 19:15:15.516649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:89760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:43.942  [2024-12-13 19:15:15.516657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.942  [2024-12-13 19:15:15.516667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:89768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:43.942  [2024-12-13 19:15:15.516676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.942  [2024-12-13 19:15:15.516686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:89776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:43.942  [2024-12-13 19:15:15.516694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.942  [2024-12-13 19:15:15.516704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:89784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:43.942  [2024-12-13 19:15:15.516712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.942  [2024-12-13 19:15:15.516722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:89792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:43.942  [2024-12-13 19:15:15.516731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.942  [2024-12-13 19:15:15.516741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:43.942  [2024-12-13 19:15:15.516749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.942  [2024-12-13 19:15:15.516759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:89808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:43.942  [2024-12-13 19:15:15.516768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.942  [2024-12-13 19:15:15.516778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:89816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:43.942  [2024-12-13 19:15:15.516786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.942  [2024-12-13 19:15:15.516795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:89824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:43.942  [2024-12-13 19:15:15.516804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.942  [2024-12-13 19:15:15.516813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:89832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:43.942  [2024-12-13 19:15:15.516822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.942  [2024-12-13 19:15:15.516832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:89840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:43.942  [2024-12-13 19:15:15.516840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.942  [2024-12-13 19:15:15.516850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:89848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:43.942  [2024-12-13 19:15:15.516858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.942  [2024-12-13 19:15:15.516868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:89856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:43.942  [2024-12-13 19:15:15.516876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.942  [2024-12-13 19:15:15.516886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:89864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:43.942  [2024-12-13 19:15:15.516895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.942  [2024-12-13 19:15:15.516906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:89872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:43.942  [2024-12-13 19:15:15.516914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.942  [2024-12-13 19:15:15.516924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:89880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:43.942  [2024-12-13 19:15:15.516932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.942  [2024-12-13 19:15:15.516942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:89888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:43.942  [2024-12-13 19:15:15.516951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.942  [2024-12-13 19:15:15.516961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:89896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:43.942  [2024-12-13 19:15:15.516969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.942  [2024-12-13 19:15:15.516995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:89904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:43.942  [2024-12-13 19:15:15.517004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.942  [2024-12-13 19:15:15.517021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:89912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:43.942  [2024-12-13 19:15:15.517030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.942  [2024-12-13 19:15:15.517040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:89920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:43.942  [2024-12-13 19:15:15.517049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.942  [2024-12-13 19:15:15.517059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:89928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:43.942  [2024-12-13 19:15:15.517068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.942  [2024-12-13 19:15:15.517078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:43.942  [2024-12-13 19:15:15.517087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.942  [2024-12-13 19:15:15.517097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:89944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:43.942  [2024-12-13 19:15:15.517105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.942  [2024-12-13 19:15:15.517116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:89952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:43.942  [2024-12-13 19:15:15.517124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.942  [2024-12-13 19:15:15.517135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:89960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:43.942  [2024-12-13 19:15:15.517143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.942  [2024-12-13 19:15:15.517153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:89968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:43.942  [2024-12-13 19:15:15.517161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.942  [2024-12-13 19:15:15.517171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:89976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:43.942  [2024-12-13 19:15:15.517180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.942  [2024-12-13 19:15:15.517195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:89984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:43.942  [2024-12-13 19:15:15.517204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.942  [2024-12-13 19:15:15.517214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:89992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:43.942  [2024-12-13 19:15:15.517223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.942  [2024-12-13 19:15:15.517249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:90000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:43.942  [2024-12-13 19:15:15.517258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.942  [2024-12-13 19:15:15.517269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:90008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:43.942  [2024-12-13 19:15:15.517286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.942  [2024-12-13 19:15:15.517299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:90016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:43.942  [2024-12-13 19:15:15.517308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.942  [2024-12-13 19:15:15.517319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:90024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:43.942  [2024-12-13 19:15:15.517328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.942  [2024-12-13 19:15:15.517339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:90032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:43.942  [2024-12-13 19:15:15.517348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.942  [2024-12-13 19:15:15.517363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:90040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:43.942  [2024-12-13 19:15:15.517373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.942  [2024-12-13 19:15:15.517403] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:43.942  [2024-12-13 19:15:15.517414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90048 len:8 PRP1 0x0 PRP2 0x0
00:30:43.942  [2024-12-13 19:15:15.517423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.942  [2024-12-13 19:15:15.517436] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:43.942  [2024-12-13 19:15:15.517444] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:43.942  [2024-12-13 19:15:15.517451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90056 len:8 PRP1 0x0 PRP2 0x0
00:30:43.942  [2024-12-13 19:15:15.517460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.942  [2024-12-13 19:15:15.517469] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:43.942  [2024-12-13 19:15:15.517476] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:43.942  [2024-12-13 19:15:15.517484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90064 len:8 PRP1 0x0 PRP2 0x0
00:30:43.943  [2024-12-13 19:15:15.517492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.943  [2024-12-13 19:15:15.517501] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:43.943  [2024-12-13 19:15:15.517508] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:43.943  [2024-12-13 19:15:15.517516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90072 len:8 PRP1 0x0 PRP2 0x0
00:30:43.943  [2024-12-13 19:15:15.517524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.943  [2024-12-13 19:15:15.517533] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:43.943  [2024-12-13 19:15:15.517544] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:43.943  [2024-12-13 19:15:15.517552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90080 len:8 PRP1 0x0 PRP2 0x0
00:30:43.943  [2024-12-13 19:15:15.517561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.943  [2024-12-13 19:15:15.517570] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:43.943  [2024-12-13 19:15:15.517577] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:43.943  [2024-12-13 19:15:15.517584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90088 len:8 PRP1 0x0 PRP2 0x0
00:30:43.943  [2024-12-13 19:15:15.517593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.943  [2024-12-13 19:15:15.517602] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:43.943  [2024-12-13 19:15:15.517619] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:43.943  [2024-12-13 19:15:15.517639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90096 len:8 PRP1 0x0 PRP2 0x0
00:30:43.943  [2024-12-13 19:15:15.517647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.943  [2024-12-13 19:15:15.517655] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:43.943  [2024-12-13 19:15:15.517661] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:43.943  [2024-12-13 19:15:15.517668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90104 len:8 PRP1 0x0 PRP2 0x0
00:30:43.943  [2024-12-13 19:15:15.517681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.943  [2024-12-13 19:15:15.517690] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:43.943  [2024-12-13 19:15:15.517697] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:43.943  [2024-12-13 19:15:15.517704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90112 len:8 PRP1 0x0 PRP2 0x0
00:30:43.943  [2024-12-13 19:15:15.517712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.943  [2024-12-13 19:15:15.517720] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:43.943  [2024-12-13 19:15:15.517764] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:43.943  [2024-12-13 19:15:15.517772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90120 len:8 PRP1 0x0 PRP2 0x0
00:30:43.943  [2024-12-13 19:15:15.517781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.943  [2024-12-13 19:15:15.517790] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:43.943  [2024-12-13 19:15:15.517797] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:43.943  [2024-12-13 19:15:15.517805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90128 len:8 PRP1 0x0 PRP2 0x0
00:30:43.943  [2024-12-13 19:15:15.517813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.943  [2024-12-13 19:15:15.517822] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:43.943  [2024-12-13 19:15:15.517829] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:43.943  [2024-12-13 19:15:15.517836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90136 len:8 PRP1 0x0 PRP2 0x0
00:30:43.943  [2024-12-13 19:15:15.517845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.943  [2024-12-13 19:15:15.517854] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:43.943  [2024-12-13 19:15:15.517866] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:43.943  [2024-12-13 19:15:15.517873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90144 len:8 PRP1 0x0 PRP2 0x0
00:30:43.943  [2024-12-13 19:15:15.517882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.943  [2024-12-13 19:15:15.517891] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:43.943  [2024-12-13 19:15:15.517898] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:43.943  [2024-12-13 19:15:15.517905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90152 len:8 PRP1 0x0 PRP2 0x0
00:30:43.943  [2024-12-13 19:15:15.517914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.943  [2024-12-13 19:15:15.517922] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:43.943  [2024-12-13 19:15:15.517929] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:43.943  [2024-12-13 19:15:15.517937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90160 len:8 PRP1 0x0 PRP2 0x0
00:30:43.943  [2024-12-13 19:15:15.517945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.943  [2024-12-13 19:15:15.517954] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:43.943  [2024-12-13 19:15:15.517961] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:43.943  [2024-12-13 19:15:15.517968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90168 len:8 PRP1 0x0 PRP2 0x0
00:30:43.943  [2024-12-13 19:15:15.517981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.943  [2024-12-13 19:15:15.517990] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:43.943  [2024-12-13 19:15:15.517998] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:43.943  [2024-12-13 19:15:15.518005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90176 len:8 PRP1 0x0 PRP2 0x0
00:30:43.943  [2024-12-13 19:15:15.518014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.943  [2024-12-13 19:15:15.518023] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:43.943  [2024-12-13 19:15:15.518030] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:43.943  [2024-12-13 19:15:15.518037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90184 len:8 PRP1 0x0 PRP2 0x0
00:30:43.943  [2024-12-13 19:15:15.518046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.943  [2024-12-13 19:15:15.518066] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:43.943  [2024-12-13 19:15:15.518073] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:43.943  [2024-12-13 19:15:15.518080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90192 len:8 PRP1 0x0 PRP2 0x0
00:30:43.943  [2024-12-13 19:15:15.518088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.943  [2024-12-13 19:15:15.518096] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:43.943  [2024-12-13 19:15:15.518103] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:43.943  [2024-12-13 19:15:15.518109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90200 len:8 PRP1 0x0 PRP2 0x0
00:30:43.943  [2024-12-13 19:15:15.518117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.943  [2024-12-13 19:15:15.518125] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:43.943  [2024-12-13 19:15:15.518137] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:43.943  [2024-12-13 19:15:15.518144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90208 len:8 PRP1 0x0 PRP2 0x0
00:30:43.943  [2024-12-13 19:15:15.518152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.943  [2024-12-13 19:15:15.518160] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:43.943  [2024-12-13 19:15:15.518167] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:43.943  [2024-12-13 19:15:15.518174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90216 len:8 PRP1 0x0 PRP2 0x0
00:30:43.943  [2024-12-13 19:15:15.518182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.943  [2024-12-13 19:15:15.518194] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:43.943  [2024-12-13 19:15:15.518200] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:43.943  [2024-12-13 19:15:15.518207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90224 len:8 PRP1 0x0 PRP2 0x0
00:30:43.943  [2024-12-13 19:15:15.518215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.943  [2024-12-13 19:15:15.518223] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:43.943  [2024-12-13 19:15:15.518241] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:43.943  [2024-12-13 19:15:15.518259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89584 len:8 PRP1 0x0 PRP2 0x0
00:30:43.943  [2024-12-13 19:15:15.518273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.943  [2024-12-13 19:15:15.518293] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:43.943  [2024-12-13 19:15:15.518301] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:43.943  [2024-12-13 19:15:15.518309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89592 len:8 PRP1 0x0 PRP2 0x0
00:30:43.943  [2024-12-13 19:15:15.518318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.943  [2024-12-13 19:15:15.518328] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:43.943  [2024-12-13 19:15:15.518335] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:43.943  [2024-12-13 19:15:15.518343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89600 len:8 PRP1 0x0 PRP2 0x0
00:30:43.943  [2024-12-13 19:15:15.518351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.943  [2024-12-13 19:15:15.518360] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:43.943  [2024-12-13 19:15:15.518367] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:43.943  [2024-12-13 19:15:15.518374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89608 len:8 PRP1 0x0 PRP2 0x0
00:30:43.944  [2024-12-13 19:15:15.518383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.944  [2024-12-13 19:15:15.518391] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:43.944  [2024-12-13 19:15:15.518398] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:43.944  [2024-12-13 19:15:15.518406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89616 len:8 PRP1 0x0 PRP2 0x0
00:30:43.944  [2024-12-13 19:15:15.532401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.944  [2024-12-13 19:15:15.532447] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:43.944  [2024-12-13 19:15:15.532472] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:43.944  [2024-12-13 19:15:15.532485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89624 len:8 PRP1 0x0 PRP2 0x0
00:30:43.944  [2024-12-13 19:15:15.532498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.944  [2024-12-13 19:15:15.532511] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:43.944  [2024-12-13 19:15:15.532521] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:43.944  [2024-12-13 19:15:15.532531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89632 len:8 PRP1 0x0 PRP2 0x0
00:30:43.944  [2024-12-13 19:15:15.532543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.944  [2024-12-13 19:15:15.532555] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:43.944  [2024-12-13 19:15:15.532564] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:43.944  [2024-12-13 19:15:15.532582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89640 len:8 PRP1 0x0 PRP2 0x0
00:30:43.944  [2024-12-13 19:15:15.532594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.944  [2024-12-13 19:15:15.532606] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:43.944  [2024-12-13 19:15:15.532615] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:43.944  [2024-12-13 19:15:15.532625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89648 len:8 PRP1 0x0 PRP2 0x0
00:30:43.944  [2024-12-13 19:15:15.532637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.944  [2024-12-13 19:15:15.532650] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:43.944  [2024-12-13 19:15:15.532659] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:43.944  [2024-12-13 19:15:15.532669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89656 len:8 PRP1 0x0 PRP2 0x0
00:30:43.944  [2024-12-13 19:15:15.532682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.944  [2024-12-13 19:15:15.532862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:30:43.944  [2024-12-13 19:15:15.532885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.944  [2024-12-13 19:15:15.532900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:30:43.944  [2024-12-13 19:15:15.532913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.944  [2024-12-13 19:15:15.532926] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:30:43.944  [2024-12-13 19:15:15.532937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.944  [2024-12-13 19:15:15.532950] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:30:43.944  [2024-12-13 19:15:15.532962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:43.944  [2024-12-13 19:15:15.532974] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5e90 is same with the state(6) to be set
00:30:43.944  [2024-12-13 19:15:15.533346] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:30:43.944  [2024-12-13 19:15:15.533406] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5e90 (9): Bad file descriptor
00:30:43.944  [2024-12-13 19:15:15.533534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.944  [2024-12-13 19:15:15.533562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5e90 with addr=10.0.0.3, port=4420
00:30:43.944  [2024-12-13 19:15:15.533583] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5e90 is same with the state(6) to be set
00:30:43.944  [2024-12-13 19:15:15.533607] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5e90 (9): Bad file descriptor
00:30:43.944  [2024-12-13 19:15:15.533640] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:30:43.944  [2024-12-13 19:15:15.533652] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:30:43.944  [2024-12-13 19:15:15.533666] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:30:43.944  [2024-12-13 19:15:15.533680] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:30:43.944  [2024-12-13 19:15:15.533693] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:30:43.944   19:15:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2
00:30:45.816       5575.50 IOPS,    21.78 MiB/s
[2024-12-13T19:15:17.640Z]      3717.00 IOPS,    14.52 MiB/s
[2024-12-13T19:15:17.640Z] [2024-12-13 19:15:17.533847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.816  [2024-12-13 19:15:17.533916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5e90 with addr=10.0.0.3, port=4420
00:30:45.816  [2024-12-13 19:15:17.533932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5e90 is same with the state(6) to be set
00:30:45.816  [2024-12-13 19:15:17.533954] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5e90 (9): Bad file descriptor
00:30:45.816  [2024-12-13 19:15:17.533973] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:30:45.816  [2024-12-13 19:15:17.533983] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:30:45.816  [2024-12-13 19:15:17.533993] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:30:45.816  [2024-12-13 19:15:17.534004] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:30:45.816  [2024-12-13 19:15:17.534015] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:30:45.816    19:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller
00:30:45.816    19:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:30:45.816    19:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name'
00:30:46.075   19:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]]
00:30:46.075    19:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev
00:30:46.075    19:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name'
00:30:46.075    19:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:30:46.335   19:15:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]]
00:30:46.335   19:15:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5
00:30:47.999       2787.75 IOPS,    10.89 MiB/s
[2024-12-13T19:15:19.823Z]      2230.20 IOPS,     8.71 MiB/s
[2024-12-13T19:15:19.823Z] [2024-12-13 19:15:19.534210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:47.999  [2024-12-13 19:15:19.534300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5e90 with addr=10.0.0.3, port=4420
00:30:47.999  [2024-12-13 19:15:19.534315] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5e90 is same with the state(6) to be set
00:30:47.999  [2024-12-13 19:15:19.534339] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5e90 (9): Bad file descriptor
00:30:47.999  [2024-12-13 19:15:19.534358] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:30:47.999  [2024-12-13 19:15:19.534367] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:30:47.999  [2024-12-13 19:15:19.534377] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:30:47.999  [2024-12-13 19:15:19.534387] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:30:47.999  [2024-12-13 19:15:19.534397] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:30:49.943       1858.50 IOPS,     7.26 MiB/s
[2024-12-13T19:15:21.767Z]      1593.00 IOPS,     6.22 MiB/s
[2024-12-13T19:15:21.767Z] [2024-12-13 19:15:21.534513] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:30:49.943  [2024-12-13 19:15:21.534570] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:30:49.943  [2024-12-13 19:15:21.534596] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:30:49.943  [2024-12-13 19:15:21.534605] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state
00:30:49.943  [2024-12-13 19:15:21.534616] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:30:50.879       1393.88 IOPS,     5.44 MiB/s
00:30:50.879                                                                                                  Latency(us)
00:30:50.879  
[2024-12-13T19:15:22.703Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:30:50.879  Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:30:50.879  	 Verification LBA range: start 0x0 length 0x4000
00:30:50.879  	 NVMe0n1             :       8.15    1367.53       5.34      15.70     0.00   92575.86    1966.08 7046430.72
00:30:50.879  
[2024-12-13T19:15:22.703Z]  ===================================================================================================================
00:30:50.879  
[2024-12-13T19:15:22.703Z]  Total                       :               1367.53       5.34      15.70     0.00   92575.86    1966.08 7046430.72
00:30:50.879  {
00:30:50.879    "results": [
00:30:50.879      {
00:30:50.879        "job": "NVMe0n1",
00:30:50.879        "core_mask": "0x4",
00:30:50.879        "workload": "verify",
00:30:50.879        "status": "finished",
00:30:50.879        "verify_range": {
00:30:50.879          "start": 0,
00:30:50.879          "length": 16384
00:30:50.879        },
00:30:50.879        "queue_depth": 128,
00:30:50.879        "io_size": 4096,
00:30:50.879        "runtime": 8.154125,
00:30:50.879        "iops": 1367.528704796652,
00:30:50.879        "mibps": 5.341909003111922,
00:30:50.879        "io_failed": 128,
00:30:50.879        "io_timeout": 0,
00:30:50.879        "avg_latency_us": 92575.85547139093,
00:30:50.879        "min_latency_us": 1966.08,
00:30:50.879        "max_latency_us": 7046430.72
00:30:50.879      }
00:30:50.879    ],
00:30:50.879    "core_count": 1
00:30:50.879  }
00:30:51.446    19:15:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller
00:30:51.446    19:15:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:30:51.446    19:15:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name'
00:30:51.704   19:15:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]]
00:30:51.704    19:15:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev
00:30:51.704    19:15:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:30:51.704    19:15:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name'
00:30:51.963   19:15:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]]
00:30:51.963   19:15:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 119100
00:30:51.963   19:15:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 119070
00:30:51.963   19:15:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 119070 ']'
00:30:51.963   19:15:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 119070
00:30:51.963    19:15:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname
00:30:51.963   19:15:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:30:51.963    19:15:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 119070
00:30:51.963   19:15:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:30:51.963  killing process with pid 119070
00:30:51.963  Received shutdown signal, test time was about 9.297393 seconds
00:30:51.963  
00:30:51.963                                                                                                  Latency(us)
00:30:51.963  
[2024-12-13T19:15:23.787Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:30:51.963  
[2024-12-13T19:15:23.787Z]  ===================================================================================================================
00:30:51.963  
[2024-12-13T19:15:23.787Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:30:51.963   19:15:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:30:51.963   19:15:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 119070'
00:30:51.963   19:15:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 119070
00:30:51.963   19:15:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 119070
00:30:52.222   19:15:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:30:52.481  [2024-12-13 19:15:24.139407] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:30:52.481  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:30:52.481   19:15:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=119260
00:30:52.481   19:15:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f
00:30:52.481   19:15:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 119260 /var/tmp/bdevperf.sock
00:30:52.481   19:15:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 119260 ']'
00:30:52.481   19:15:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:30:52.481   19:15:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100
00:30:52.481   19:15:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:30:52.481   19:15:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable
00:30:52.481   19:15:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:30:52.481  [2024-12-13 19:15:24.208475] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:30:52.481  [2024-12-13 19:15:24.208581] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119260 ]
00:30:52.739  [2024-12-13 19:15:24.351366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:52.739  [2024-12-13 19:15:24.393065] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:30:53.675   19:15:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:30:53.675   19:15:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0
00:30:53.675   19:15:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:30:53.675   19:15:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
00:30:54.241  NVMe0n1
00:30:54.241   19:15:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=119304
00:30:54.241   19:15:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:30:54.241   19:15:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1
00:30:54.241  Running I/O for 10 seconds...
00:30:55.176   19:15:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:30:55.437      10465.00 IOPS,    40.88 MiB/s
[2024-12-13T19:15:27.261Z] [2024-12-13 19:15:27.048846] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd52200 is same with the state(6) to be set
00:30:55.437  [2024-12-13 19:15:27.048907] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd52200 is same with the state(6) to be set
00:30:55.437  [2024-12-13 19:15:27.048933] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd52200 is same with the state(6) to be set
00:30:55.437  [2024-12-13 19:15:27.048940] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd52200 is same with the state(6) to be set
00:30:55.437  [2024-12-13 19:15:27.048948] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd52200 is same with the state(6) to be set
00:30:55.437  [2024-12-13 19:15:27.048956] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd52200 is same with the state(6) to be set
00:30:55.437  [2024-12-13 19:15:27.048964] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd52200 is same with the state(6) to be set
00:30:55.437  [2024-12-13 19:15:27.048971] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd52200 is same with the state(6) to be set
00:30:55.437  [2024-12-13 19:15:27.048981] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd52200 is same with the state(6) to be set
00:30:55.437  [2024-12-13 19:15:27.048988] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd52200 is same with the state(6) to be set
00:30:55.437  [2024-12-13 19:15:27.048995] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd52200 is same with the state(6) to be set
00:30:55.437  [2024-12-13 19:15:27.049002] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd52200 is same with the state(6) to be set
00:30:55.437  [2024-12-13 19:15:27.049009] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd52200 is same with the state(6) to be set
00:30:55.437  [2024-12-13 19:15:27.049016] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd52200 is same with the state(6) to be set
00:30:55.437  [2024-12-13 19:15:27.049023] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd52200 is same with the state(6) to be set
00:30:55.437  [2024-12-13 19:15:27.049030] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd52200 is same with the state(6) to be set
00:30:55.437  [2024-12-13 19:15:27.049037] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd52200 is same with the state(6) to be set
00:30:55.437  [2024-12-13 19:15:27.049044] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd52200 is same with the state(6) to be set
00:30:55.437  [2024-12-13 19:15:27.049051] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd52200 is same with the state(6) to be set
00:30:55.437  [2024-12-13 19:15:27.049058] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd52200 is same with the state(6) to be set
00:30:55.437  [2024-12-13 19:15:27.049065] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd52200 is same with the state(6) to be set
00:30:55.438  [2024-12-13 19:15:27.049072] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd52200 is same with the state(6) to be set
00:30:55.438  [2024-12-13 19:15:27.049080] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd52200 is same with the state(6) to be set
00:30:55.438  [2024-12-13 19:15:27.049087] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd52200 is same with the state(6) to be set
00:30:55.438  [2024-12-13 19:15:27.049093] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd52200 is same with the state(6) to be set
00:30:55.438  [2024-12-13 19:15:27.049100] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd52200 is same with the state(6) to be set
00:30:55.438  [2024-12-13 19:15:27.049107] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd52200 is same with the state(6) to be set
00:30:55.438  [2024-12-13 19:15:27.049114] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd52200 is same with the state(6) to be set
00:30:55.438  [2024-12-13 19:15:27.049121] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd52200 is same with the state(6) to be set
00:30:55.438  [2024-12-13 19:15:27.049127] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd52200 is same with the state(6) to be set
00:30:55.438  [2024-12-13 19:15:27.049134] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd52200 is same with the state(6) to be set
00:30:55.438  [2024-12-13 19:15:27.049157] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd52200 is same with the state(6) to be set
00:30:55.438  [2024-12-13 19:15:27.049164] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd52200 is same with the state(6) to be set
00:30:55.438  [2024-12-13 19:15:27.049173] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd52200 is same with the state(6) to be set
00:30:55.438  [2024-12-13 19:15:27.049181] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd52200 is same with the state(6) to be set
00:30:55.438  [2024-12-13 19:15:27.049204] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd52200 is same with the state(6) to be set
00:30:55.438  [2024-12-13 19:15:27.049212] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd52200 is same with the state(6) to be set
00:30:55.438  [2024-12-13 19:15:27.049220] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd52200 is same with the state(6) to be set
00:30:55.438  [2024-12-13 19:15:27.049243] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd52200 is same with the state(6) to be set
00:30:55.438  [2024-12-13 19:15:27.049267] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd52200 is same with the state(6) to be set
00:30:55.438  [2024-12-13 19:15:27.049274] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd52200 is same with the state(6) to be set
00:30:55.438  [2024-12-13 19:15:27.049282] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd52200 is same with the state(6) to be set
00:30:55.438  [2024-12-13 19:15:27.049290] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd52200 is same with the state(6) to be set
00:30:55.438  [2024-12-13 19:15:27.049313] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd52200 is same with the state(6) to be set
00:30:55.438  [2024-12-13 19:15:27.049322] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd52200 is same with the state(6) to be set
00:30:55.438  [2024-12-13 19:15:27.049330] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd52200 is same with the state(6) to be set
00:30:55.438  [2024-12-13 19:15:27.049338] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd52200 is same with the state(6) to be set
00:30:55.438  [2024-12-13 19:15:27.049347] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd52200 is same with the state(6) to be set
00:30:55.438  [2024-12-13 19:15:27.049355] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd52200 is same with the state(6) to be set
00:30:55.438  [2024-12-13 19:15:27.049363] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd52200 is same with the state(6) to be set
00:30:55.438  [2024-12-13 19:15:27.049371] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd52200 is same with the state(6) to be set
00:30:55.438  [2024-12-13 19:15:27.049379] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd52200 is same with the state(6) to be set
00:30:55.438  [2024-12-13 19:15:27.049388] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd52200 is same with the state(6) to be set
00:30:55.438  [2024-12-13 19:15:27.049396] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd52200 is same with the state(6) to be set
00:30:55.438  [2024-12-13 19:15:27.049403] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd52200 is same with the state(6) to be set
00:30:55.438  [2024-12-13 19:15:27.049411] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd52200 is same with the state(6) to be set
00:30:55.438  [2024-12-13 19:15:27.049419] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd52200 is same with the state(6) to be set
00:30:55.438  [2024-12-13 19:15:27.049427] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd52200 is same with the state(6) to be set
00:30:55.438  [2024-12-13 19:15:27.049435] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd52200 is same with the state(6) to be set
00:30:55.438  [2024-12-13 19:15:27.049442] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd52200 is same with the state(6) to be set
00:30:55.438  [2024-12-13 19:15:27.049450] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd52200 is same with the state(6) to be set
00:30:55.438  [2024-12-13 19:15:27.049458] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd52200 is same with the state(6) to be set
00:30:55.438  [2024-12-13 19:15:27.049466] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd52200 is same with the state(6) to be set
00:30:55.438  [2024-12-13 19:15:27.049474] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd52200 is same with the state(6) to be set
00:30:55.438  [2024-12-13 19:15:27.049482] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd52200 is same with the state(6) to be set
00:30:55.438  [2024-12-13 19:15:27.049490] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd52200 is same with the state(6) to be set
00:30:55.438  [2024-12-13 19:15:27.051860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:96352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.438  [2024-12-13 19:15:27.051912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.438  [2024-12-13 19:15:27.051931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:96360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.438  [2024-12-13 19:15:27.051940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.438  [2024-12-13 19:15:27.051951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.438  [2024-12-13 19:15:27.051960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.438  [2024-12-13 19:15:27.051970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:96440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.438  [2024-12-13 19:15:27.051978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.438  [2024-12-13 19:15:27.051988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:96448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.438  [2024-12-13 19:15:27.051995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.438  [2024-12-13 19:15:27.052005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:96456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.438  [2024-12-13 19:15:27.052012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.438  [2024-12-13 19:15:27.052022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:96464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.438  [2024-12-13 19:15:27.052030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.438  [2024-12-13 19:15:27.052039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:96472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.438  [2024-12-13 19:15:27.052047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.438  [2024-12-13 19:15:27.052056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:96480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.438  [2024-12-13 19:15:27.052064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.438  [2024-12-13 19:15:27.052073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:96488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.438  [2024-12-13 19:15:27.052081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.438  [2024-12-13 19:15:27.052090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:96496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.438  [2024-12-13 19:15:27.052098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.438  [2024-12-13 19:15:27.052107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:96504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.438  [2024-12-13 19:15:27.052114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.438  [2024-12-13 19:15:27.052139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:96512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.438  [2024-12-13 19:15:27.052147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.438  [2024-12-13 19:15:27.052173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:96376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.438  [2024-12-13 19:15:27.052181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.438  [2024-12-13 19:15:27.052191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:96520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.438  [2024-12-13 19:15:27.052199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.438  [2024-12-13 19:15:27.052209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:96528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.438  [2024-12-13 19:15:27.052217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.438  [2024-12-13 19:15:27.052244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:96536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.438  [2024-12-13 19:15:27.052255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.438  [2024-12-13 19:15:27.052266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:96544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.438  [2024-12-13 19:15:27.052275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.438  [2024-12-13 19:15:27.052300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:96552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.438  [2024-12-13 19:15:27.052310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.438  [2024-12-13 19:15:27.052321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:96560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.439  [2024-12-13 19:15:27.052331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.439  [2024-12-13 19:15:27.052342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:96568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.439  [2024-12-13 19:15:27.052351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.439  [2024-12-13 19:15:27.052361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:96576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.439  [2024-12-13 19:15:27.052370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.439  [2024-12-13 19:15:27.052381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:96584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.439  [2024-12-13 19:15:27.052390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.439  [2024-12-13 19:15:27.052400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:96592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.439  [2024-12-13 19:15:27.052409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.439  [2024-12-13 19:15:27.052419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:96600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.439  [2024-12-13 19:15:27.052428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.439  [2024-12-13 19:15:27.052439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:96608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.439  [2024-12-13 19:15:27.052447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.439  [2024-12-13 19:15:27.052458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:96616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.439  [2024-12-13 19:15:27.052467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.439  [2024-12-13 19:15:27.052477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:96624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.439  [2024-12-13 19:15:27.052486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.439  [2024-12-13 19:15:27.052496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:96632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.439  [2024-12-13 19:15:27.052505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.439  [2024-12-13 19:15:27.052515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:96640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.439  [2024-12-13 19:15:27.052524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.439  [2024-12-13 19:15:27.052535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:96648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.439  [2024-12-13 19:15:27.052544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.439  [2024-12-13 19:15:27.052554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:96656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.439  [2024-12-13 19:15:27.052563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.439  [2024-12-13 19:15:27.052574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:96664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.439  [2024-12-13 19:15:27.052585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.439  [2024-12-13 19:15:27.052597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:96672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.439  [2024-12-13 19:15:27.052605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.439  [2024-12-13 19:15:27.052616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:96680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.439  [2024-12-13 19:15:27.052625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.439  [2024-12-13 19:15:27.052636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:96688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.439  [2024-12-13 19:15:27.052645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.439  [2024-12-13 19:15:27.052655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:96696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.439  [2024-12-13 19:15:27.052663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.439  [2024-12-13 19:15:27.052674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:96704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.439  [2024-12-13 19:15:27.052682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.439  [2024-12-13 19:15:27.052694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:96712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.439  [2024-12-13 19:15:27.052703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.439  [2024-12-13 19:15:27.052713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:96720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.439  [2024-12-13 19:15:27.052722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.439  [2024-12-13 19:15:27.052733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:96728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.439  [2024-12-13 19:15:27.052742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.439  [2024-12-13 19:15:27.052752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:96736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.439  [2024-12-13 19:15:27.052761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.439  [2024-12-13 19:15:27.052771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:96744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.439  [2024-12-13 19:15:27.052780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.439  [2024-12-13 19:15:27.052791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:96752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.439  [2024-12-13 19:15:27.052799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.439  [2024-12-13 19:15:27.052809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:96760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.439  [2024-12-13 19:15:27.052818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.439  [2024-12-13 19:15:27.052844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:96768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.439  [2024-12-13 19:15:27.052853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.439  [2024-12-13 19:15:27.052863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:96776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.439  [2024-12-13 19:15:27.052871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.439  [2024-12-13 19:15:27.052881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:96784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.439  [2024-12-13 19:15:27.052890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.439  [2024-12-13 19:15:27.052900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:96792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.439  [2024-12-13 19:15:27.052909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.439  [2024-12-13 19:15:27.052920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:96800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.439  [2024-12-13 19:15:27.052928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.439  [2024-12-13 19:15:27.052938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:96808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.439  [2024-12-13 19:15:27.052946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.439  [2024-12-13 19:15:27.052957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:96816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.439  [2024-12-13 19:15:27.052965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.439  [2024-12-13 19:15:27.052975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:96824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.439  [2024-12-13 19:15:27.052983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.439  [2024-12-13 19:15:27.052993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:96832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.439  [2024-12-13 19:15:27.053002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.439  [2024-12-13 19:15:27.053012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:96840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.439  [2024-12-13 19:15:27.053021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.439  [2024-12-13 19:15:27.053031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:96848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.439  [2024-12-13 19:15:27.053040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.439  [2024-12-13 19:15:27.053050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:96856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.439  [2024-12-13 19:15:27.053058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.439  [2024-12-13 19:15:27.053068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:96864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.439  [2024-12-13 19:15:27.053077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.439  [2024-12-13 19:15:27.053087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:96872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.439  [2024-12-13 19:15:27.053096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.439  [2024-12-13 19:15:27.053106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:96880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.440  [2024-12-13 19:15:27.053114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.440  [2024-12-13 19:15:27.053124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:96384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.440  [2024-12-13 19:15:27.053133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.440  [2024-12-13 19:15:27.053143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:96392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.440  [2024-12-13 19:15:27.053152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.440  [2024-12-13 19:15:27.053162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:96400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.440  [2024-12-13 19:15:27.053170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.440  [2024-12-13 19:15:27.053180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:96408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.440  [2024-12-13 19:15:27.053188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.440  [2024-12-13 19:15:27.053198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:96416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.440  [2024-12-13 19:15:27.053208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.440  [2024-12-13 19:15:27.053218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:96424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.440  [2024-12-13 19:15:27.053242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.440  [2024-12-13 19:15:27.053263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:96432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.440  [2024-12-13 19:15:27.053273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.440  [2024-12-13 19:15:27.053283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:96888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.440  [2024-12-13 19:15:27.053292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.440  [2024-12-13 19:15:27.053303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:96896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.440  [2024-12-13 19:15:27.053312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.440  [2024-12-13 19:15:27.053323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:96904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.440  [2024-12-13 19:15:27.053331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.440  [2024-12-13 19:15:27.053343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:96912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.440  [2024-12-13 19:15:27.053352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.440  [2024-12-13 19:15:27.053362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:96920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.440  [2024-12-13 19:15:27.053371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.440  [2024-12-13 19:15:27.053382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:96928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.440  [2024-12-13 19:15:27.053391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.440  [2024-12-13 19:15:27.053401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:96936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.440  [2024-12-13 19:15:27.053410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.440  [2024-12-13 19:15:27.053420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.440  [2024-12-13 19:15:27.053429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.440  [2024-12-13 19:15:27.053439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:96952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.440  [2024-12-13 19:15:27.053448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.440  [2024-12-13 19:15:27.053458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:96960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.440  [2024-12-13 19:15:27.053467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.440  [2024-12-13 19:15:27.053477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:96968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.440  [2024-12-13 19:15:27.053486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.440  [2024-12-13 19:15:27.053496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:96976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.440  [2024-12-13 19:15:27.053506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.440  [2024-12-13 19:15:27.053517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:96984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.440  [2024-12-13 19:15:27.053526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.440  [2024-12-13 19:15:27.053537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:96992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.440  [2024-12-13 19:15:27.053546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.440  [2024-12-13 19:15:27.053556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:97000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.440  [2024-12-13 19:15:27.053566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.440  [2024-12-13 19:15:27.053576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:97008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.440  [2024-12-13 19:15:27.053585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.440  [2024-12-13 19:15:27.053611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:97016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.440  [2024-12-13 19:15:27.053619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.440  [2024-12-13 19:15:27.053630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:97024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.440  [2024-12-13 19:15:27.053638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.440  [2024-12-13 19:15:27.053649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:97032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.440  [2024-12-13 19:15:27.053657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.440  [2024-12-13 19:15:27.053667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:97040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.440  [2024-12-13 19:15:27.053676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.440  [2024-12-13 19:15:27.053686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:97048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.440  [2024-12-13 19:15:27.053695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.440  [2024-12-13 19:15:27.053705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:97056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.440  [2024-12-13 19:15:27.053714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.440  [2024-12-13 19:15:27.053732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:97064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.440  [2024-12-13 19:15:27.053749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.440  [2024-12-13 19:15:27.053760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:97072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.440  [2024-12-13 19:15:27.053768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.440  [2024-12-13 19:15:27.053785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:97080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.440  [2024-12-13 19:15:27.053794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.440  [2024-12-13 19:15:27.053804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:97088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.440  [2024-12-13 19:15:27.053812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.440  [2024-12-13 19:15:27.053823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:97096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.440  [2024-12-13 19:15:27.053831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.440  [2024-12-13 19:15:27.053841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:97104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.440  [2024-12-13 19:15:27.053850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.440  [2024-12-13 19:15:27.053860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:97112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.440  [2024-12-13 19:15:27.053869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.440  [2024-12-13 19:15:27.053879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:97120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.440  [2024-12-13 19:15:27.053888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.440  [2024-12-13 19:15:27.053921] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:55.440  [2024-12-13 19:15:27.053932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97128 len:8 PRP1 0x0 PRP2 0x0
00:30:55.440  [2024-12-13 19:15:27.053941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.440  [2024-12-13 19:15:27.053953] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:55.440  [2024-12-13 19:15:27.053960] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:55.440  [2024-12-13 19:15:27.053968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97136 len:8 PRP1 0x0 PRP2 0x0
00:30:55.440  [2024-12-13 19:15:27.053976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.440  [2024-12-13 19:15:27.053985] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:55.440  [2024-12-13 19:15:27.053992] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:55.441  [2024-12-13 19:15:27.053999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97144 len:8 PRP1 0x0 PRP2 0x0
00:30:55.441  [2024-12-13 19:15:27.054007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.441  [2024-12-13 19:15:27.054016] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:55.441  [2024-12-13 19:15:27.054023] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:55.441  [2024-12-13 19:15:27.054031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97152 len:8 PRP1 0x0 PRP2 0x0
00:30:55.441  [2024-12-13 19:15:27.054047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.441  [2024-12-13 19:15:27.054055] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:55.441  [2024-12-13 19:15:27.054062] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:55.441  [2024-12-13 19:15:27.054070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97160 len:8 PRP1 0x0 PRP2 0x0
00:30:55.441  [2024-12-13 19:15:27.054078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.441  [2024-12-13 19:15:27.054087] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:55.441  [2024-12-13 19:15:27.054093] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:55.441  [2024-12-13 19:15:27.054101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97168 len:8 PRP1 0x0 PRP2 0x0
00:30:55.441  [2024-12-13 19:15:27.054109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.441  [2024-12-13 19:15:27.054117] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:55.441  [2024-12-13 19:15:27.054124] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:55.441  [2024-12-13 19:15:27.054131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97176 len:8 PRP1 0x0 PRP2 0x0
00:30:55.441  [2024-12-13 19:15:27.054140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.441  [2024-12-13 19:15:27.054148] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:55.441  [2024-12-13 19:15:27.054155] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:55.441  [2024-12-13 19:15:27.054162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97184 len:8 PRP1 0x0 PRP2 0x0
00:30:55.441  [2024-12-13 19:15:27.054170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.441  [2024-12-13 19:15:27.054179] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:55.441  [2024-12-13 19:15:27.054190] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:55.441  [2024-12-13 19:15:27.054198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97192 len:8 PRP1 0x0 PRP2 0x0
00:30:55.441  [2024-12-13 19:15:27.054206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.441  [2024-12-13 19:15:27.054215] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:55.441  [2024-12-13 19:15:27.054232] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:55.441  [2024-12-13 19:15:27.054257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97200 len:8 PRP1 0x0 PRP2 0x0
00:30:55.441  [2024-12-13 19:15:27.054266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.441  [2024-12-13 19:15:27.054275] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:55.441  [2024-12-13 19:15:27.054282] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:55.441  [2024-12-13 19:15:27.054290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97208 len:8 PRP1 0x0 PRP2 0x0
00:30:55.441  [2024-12-13 19:15:27.054298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.441  [2024-12-13 19:15:27.054307] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:55.441  [2024-12-13 19:15:27.054314] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:55.441  [2024-12-13 19:15:27.054321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97216 len:8 PRP1 0x0 PRP2 0x0
00:30:55.441  [2024-12-13 19:15:27.054335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.441  [2024-12-13 19:15:27.054344] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:55.441  [2024-12-13 19:15:27.054351] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:55.441  [2024-12-13 19:15:27.054359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97224 len:8 PRP1 0x0 PRP2 0x0
00:30:55.441  [2024-12-13 19:15:27.054367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.441  [2024-12-13 19:15:27.054376] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:55.441  [2024-12-13 19:15:27.054383] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:55.441  [2024-12-13 19:15:27.054391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97232 len:8 PRP1 0x0 PRP2 0x0
00:30:55.441  [2024-12-13 19:15:27.054399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.441  [2024-12-13 19:15:27.054408] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:55.441  [2024-12-13 19:15:27.054415] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:55.441  [2024-12-13 19:15:27.054422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97240 len:8 PRP1 0x0 PRP2 0x0
00:30:55.441  [2024-12-13 19:15:27.054431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.441  [2024-12-13 19:15:27.054439] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:55.441  [2024-12-13 19:15:27.054446] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:55.441  [2024-12-13 19:15:27.054454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97248 len:8 PRP1 0x0 PRP2 0x0
00:30:55.441  [2024-12-13 19:15:27.054463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.441  [2024-12-13 19:15:27.054472] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:55.441  [2024-12-13 19:15:27.054483] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:55.441  [2024-12-13 19:15:27.054491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97256 len:8 PRP1 0x0 PRP2 0x0
00:30:55.441  [2024-12-13 19:15:27.054500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.441  [2024-12-13 19:15:27.054509] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:55.441  [2024-12-13 19:15:27.054516] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:55.441  [2024-12-13 19:15:27.054523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97264 len:8 PRP1 0x0 PRP2 0x0
00:30:55.441  [2024-12-13 19:15:27.054532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.441  [2024-12-13 19:15:27.054540] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:55.441  [2024-12-13 19:15:27.054547] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:55.441  [2024-12-13 19:15:27.054554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97272 len:8 PRP1 0x0 PRP2 0x0
00:30:55.441  [2024-12-13 19:15:27.054563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.441  [2024-12-13 19:15:27.054572] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:55.441  [2024-12-13 19:15:27.054578] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:55.441  [2024-12-13 19:15:27.054586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97280 len:8 PRP1 0x0 PRP2 0x0
00:30:55.441  [2024-12-13 19:15:27.054595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.441  [2024-12-13 19:15:27.054604] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:55.441  [2024-12-13 19:15:27.054610] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:55.441  [2024-12-13 19:15:27.054618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97288 len:8 PRP1 0x0 PRP2 0x0
00:30:55.441  [2024-12-13 19:15:27.054627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.441  [2024-12-13 19:15:27.054635] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:55.441  [2024-12-13 19:15:27.054642] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:55.441  [2024-12-13 19:15:27.054650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97296 len:8 PRP1 0x0 PRP2 0x0
00:30:55.441  [2024-12-13 19:15:27.054658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.441  [2024-12-13 19:15:27.066018] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:55.441  [2024-12-13 19:15:27.066080] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:55.441  [2024-12-13 19:15:27.066106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97304 len:8 PRP1 0x0 PRP2 0x0
00:30:55.441  [2024-12-13 19:15:27.066116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.441  [2024-12-13 19:15:27.066125] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:55.441  [2024-12-13 19:15:27.066132] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:55.441  [2024-12-13 19:15:27.066139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97312 len:8 PRP1 0x0 PRP2 0x0
00:30:55.441  [2024-12-13 19:15:27.066147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.441  [2024-12-13 19:15:27.066155] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:55.441  [2024-12-13 19:15:27.066162] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:55.441  [2024-12-13 19:15:27.066169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97320 len:8 PRP1 0x0 PRP2 0x0
00:30:55.441  [2024-12-13 19:15:27.066178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.441  [2024-12-13 19:15:27.066185] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:55.441  [2024-12-13 19:15:27.066191] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:55.441  [2024-12-13 19:15:27.066198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97328 len:8 PRP1 0x0 PRP2 0x0
00:30:55.441  [2024-12-13 19:15:27.066206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.441  [2024-12-13 19:15:27.066214] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:55.442  [2024-12-13 19:15:27.066220] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:55.442  [2024-12-13 19:15:27.066226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97336 len:8 PRP1 0x0 PRP2 0x0
00:30:55.442  [2024-12-13 19:15:27.066233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.442  [2024-12-13 19:15:27.066257] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:55.442  [2024-12-13 19:15:27.066275] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:55.442  [2024-12-13 19:15:27.066283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97344 len:8 PRP1 0x0 PRP2 0x0
00:30:55.442  [2024-12-13 19:15:27.066291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.442  [2024-12-13 19:15:27.066301] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:55.442  [2024-12-13 19:15:27.066307] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:55.442  [2024-12-13 19:15:27.066314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97352 len:8 PRP1 0x0 PRP2 0x0
00:30:55.442  [2024-12-13 19:15:27.066322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.442  [2024-12-13 19:15:27.066330] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:55.442  [2024-12-13 19:15:27.066337] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:55.442  [2024-12-13 19:15:27.066344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97360 len:8 PRP1 0x0 PRP2 0x0
00:30:55.442  [2024-12-13 19:15:27.066351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.442  [2024-12-13 19:15:27.066376] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:55.442  [2024-12-13 19:15:27.066382] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:55.442  [2024-12-13 19:15:27.066390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97368 len:8 PRP1 0x0 PRP2 0x0
00:30:55.442  [2024-12-13 19:15:27.066398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.442  [2024-12-13 19:15:27.066544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:30:55.442  [2024-12-13 19:15:27.066560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.442  [2024-12-13 19:15:27.066571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:30:55.442  [2024-12-13 19:15:27.066580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.442  [2024-12-13 19:15:27.066589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:30:55.442  [2024-12-13 19:15:27.066598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.442  [2024-12-13 19:15:27.066607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:30:55.442  [2024-12-13 19:15:27.066615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.442  [2024-12-13 19:15:27.066624] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11fae90 is same with the state(6) to be set
00:30:55.442  [2024-12-13 19:15:27.066857] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:55.442  [2024-12-13 19:15:27.066892] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11fae90 (9): Bad file descriptor
00:30:55.442  [2024-12-13 19:15:27.066990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.442  [2024-12-13 19:15:27.067036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fae90 with addr=10.0.0.3, port=4420
00:30:55.442  [2024-12-13 19:15:27.067048] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11fae90 is same with the state(6) to be set
00:30:55.442  [2024-12-13 19:15:27.067065] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11fae90 (9): Bad file descriptor
00:30:55.442  [2024-12-13 19:15:27.067081] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:55.442  [2024-12-13 19:15:27.067090] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:55.442  [2024-12-13 19:15:27.067100] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:55.442  [2024-12-13 19:15:27.067111] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:55.442  [2024-12-13 19:15:27.067127] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:55.442   19:15:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1
00:30:56.378       6022.00 IOPS,    23.52 MiB/s
[2024-12-13T19:15:28.202Z] [2024-12-13 19:15:28.067240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.378  [2024-12-13 19:15:28.067316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fae90 with addr=10.0.0.3, port=4420
00:30:56.378  [2024-12-13 19:15:28.067330] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11fae90 is same with the state(6) to be set
00:30:56.378  [2024-12-13 19:15:28.067353] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11fae90 (9): Bad file descriptor
00:30:56.378  [2024-12-13 19:15:28.067369] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:56.378  [2024-12-13 19:15:28.067380] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:56.378  [2024-12-13 19:15:28.067389] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:56.378  [2024-12-13 19:15:28.067399] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:56.378  [2024-12-13 19:15:28.067408] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:56.378   19:15:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:30:56.636  [2024-12-13 19:15:28.333558] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:30:56.636   19:15:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 119304
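For reference, the listener re-registration above (host/timeout.sh@91) is a plain SPDK RPC call. A minimal Python sketch of the same invocation, with the script path, NQN, address and port copied from this log (all of them are specific to this vagrant test environment):

    import subprocess

    # Re-add the NVMe/TCP listener that the timeout test removed earlier,
    # so the host can reconnect to nqn.2016-06.io.spdk:cnode1.
    subprocess.run(
        [
            "/home/vagrant/spdk_repo/spdk/scripts/rpc.py",
            "nvmf_subsystem_add_listener", "nqn.2016-06.io.spdk:cnode1",
            "-t", "tcp", "-a", "10.0.0.3", "-s", "4420",
        ],
        check=True,
    )

The nvmf_subsystem_remove_listener call later in this log (host/timeout.sh@99) takes the same transport, address and port arguments.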
00:30:57.462       4014.67 IOPS,    15.68 MiB/s
[2024-12-13T19:15:29.286Z] [2024-12-13 19:15:29.082822] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:30:59.360       3011.00 IOPS,    11.76 MiB/s
[2024-12-13T19:15:32.125Z]      4188.60 IOPS,    16.36 MiB/s
[2024-12-13T19:15:33.060Z]      5305.83 IOPS,    20.73 MiB/s
[2024-12-13T19:15:33.995Z]      6109.43 IOPS,    23.86 MiB/s
[2024-12-13T19:15:34.929Z]      6705.75 IOPS,    26.19 MiB/s
[2024-12-13T19:15:36.303Z]      7164.44 IOPS,    27.99 MiB/s
[2024-12-13T19:15:36.303Z]      7536.00 IOPS,    29.44 MiB/s
00:31:04.479                                                                                                  Latency(us)
00:31:04.479  
[2024-12-13T19:15:36.303Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:31:04.479  Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:31:04.479  	 Verification LBA range: start 0x0 length 0x4000
00:31:04.479  	 NVMe0n1             :      10.01    7541.96      29.46       0.00     0.00   16950.32    1824.58 3035150.89
00:31:04.479  
[2024-12-13T19:15:36.303Z]  ===================================================================================================================
00:31:04.479  
[2024-12-13T19:15:36.303Z]  Total                       :               7541.96      29.46       0.00     0.00   16950.32    1824.58 3035150.89
00:31:04.479  {
00:31:04.479    "results": [
00:31:04.479      {
00:31:04.479        "job": "NVMe0n1",
00:31:04.479        "core_mask": "0x4",
00:31:04.479        "workload": "verify",
00:31:04.479        "status": "finished",
00:31:04.479        "verify_range": {
00:31:04.479          "start": 0,
00:31:04.479          "length": 16384
00:31:04.479        },
00:31:04.479        "queue_depth": 128,
00:31:04.479        "io_size": 4096,
00:31:04.479        "runtime": 10.009071,
00:31:04.479        "iops": 7541.95868927296,
00:31:04.479        "mibps": 29.4607761299725,
00:31:04.479        "io_failed": 0,
00:31:04.479        "io_timeout": 0,
00:31:04.479        "avg_latency_us": 16950.31567459247,
00:31:04.479        "min_latency_us": 1824.581818181818,
00:31:04.479        "max_latency_us": 3035150.8945454545
00:31:04.479      }
00:31:04.479    ],
00:31:04.479    "core_count": 1
00:31:04.479  }
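The JSON block above is the bdevperf result object for the 10-second verify run. A small sketch of pulling the headline numbers back out of it (the object is pasted verbatim below; note that 7541.96 IOPS at a 4096-byte I/O size works out to the reported 29.46 MiB/s):

    import json

    # bdevperf results object printed in the log above, whitespace trimmed.
    raw = '''{"results": [{"job": "NVMe0n1", "core_mask": "0x4",
      "workload": "verify", "status": "finished",
      "verify_range": {"start": 0, "length": 16384},
      "queue_depth": 128, "io_size": 4096, "runtime": 10.009071,
      "iops": 7541.95868927296, "mibps": 29.4607761299725,
      "io_failed": 0, "io_timeout": 0,
      "avg_latency_us": 16950.31567459247,
      "min_latency_us": 1824.581818181818,
      "max_latency_us": 3035150.8945454545}], "core_count": 1}'''

    job = json.loads(raw)["results"][0]
    # MiB/s follows from IOPS and the I/O size: 7541.96 * 4096 / 2**20 ~= 29.46.
    mibps = job["iops"] * job["io_size"] / (1024 * 1024)
    print(f'{job["job"]}: {job["iops"]:.2f} IOPS, {mibps:.2f} MiB/s, '
          f'avg {job["avg_latency_us"]:.2f} us')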
00:31:04.479   19:15:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=119421
00:31:04.479   19:15:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:31:04.479   19:15:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:31:04.479  Running I/O for 10 seconds...
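The perform_tests step just above (host/timeout.sh@96) drives the already-running bdevperf instance over its RPC socket. A minimal sketch of the same call, with the script path and socket path taken from this log:

    import subprocess

    # Ask the bdevperf process listening on /var/tmp/bdevperf.sock to run
    # its configured verify workload again (the 10-second pass that starts below).
    subprocess.run(
        [
            "/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py",
            "-s", "/var/tmp/bdevperf.sock",
            "perform_tests",
        ],
        check=True,
    )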
00:31:05.417   19:15:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:31:05.417      10156.00 IOPS,    39.67 MiB/s
[2024-12-13T19:15:37.241Z] [2024-12-13 19:15:37.213083] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd506f0 is same with the state(6) to be set
00:31:05.417  [2024-12-13 19:15:37.213152] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd506f0 is same with the state(6) to be set
00:31:05.417  [2024-12-13 19:15:37.213181] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd506f0 is same with the state(6) to be set
00:31:05.417  [2024-12-13 19:15:37.213190] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd506f0 is same with the state(6) to be set
00:31:05.417  [2024-12-13 19:15:37.213199] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd506f0 is same with the state(6) to be set
00:31:05.417  [2024-12-13 19:15:37.213207] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd506f0 is same with the state(6) to be set
00:31:05.417  [2024-12-13 19:15:37.213216] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd506f0 is same with the state(6) to be set
00:31:05.417  [2024-12-13 19:15:37.213225] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd506f0 is same with the state(6) to be set
00:31:05.417  [2024-12-13 19:15:37.213265] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd506f0 is same with the state(6) to be set
00:31:05.417  [2024-12-13 19:15:37.213276] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd506f0 is same with the state(6) to be set
00:31:05.417  [2024-12-13 19:15:37.213285] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd506f0 is same with the state(6) to be set
00:31:05.417  [2024-12-13 19:15:37.213295] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd506f0 is same with the state(6) to be set
00:31:05.417  [2024-12-13 19:15:37.213305] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd506f0 is same with the state(6) to be set
00:31:05.417  [2024-12-13 19:15:37.213314] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd506f0 is same with the state(6) to be set
00:31:05.417  [2024-12-13 19:15:37.213323] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd506f0 is same with the state(6) to be set
00:31:05.417  [2024-12-13 19:15:37.213332] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd506f0 is same with the state(6) to be set
00:31:05.417  [2024-12-13 19:15:37.213341] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd506f0 is same with the state(6) to be set
00:31:05.417  [2024-12-13 19:15:37.213351] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd506f0 is same with the state(6) to be set
00:31:05.417  [2024-12-13 19:15:37.213360] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd506f0 is same with the state(6) to be set
00:31:05.417  [2024-12-13 19:15:37.213384] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd506f0 is same with the state(6) to be set
00:31:05.417  [2024-12-13 19:15:37.213393] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd506f0 is same with the state(6) to be set
00:31:05.417  [2024-12-13 19:15:37.213402] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd506f0 is same with the state(6) to be set
00:31:05.417  [2024-12-13 19:15:37.213410] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd506f0 is same with the state(6) to be set
00:31:05.417  [2024-12-13 19:15:37.213419] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd506f0 is same with the state(6) to be set
00:31:05.417  [2024-12-13 19:15:37.213427] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd506f0 is same with the state(6) to be set
00:31:05.417  [2024-12-13 19:15:37.213453] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd506f0 is same with the state(6) to be set
00:31:05.417  [2024-12-13 19:15:37.213462] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd506f0 is same with the state(6) to be set
00:31:05.417  [2024-12-13 19:15:37.213471] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd506f0 is same with the state(6) to be set
00:31:05.417  [2024-12-13 19:15:37.213480] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd506f0 is same with the state(6) to be set
00:31:05.417  [2024-12-13 19:15:37.213489] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd506f0 is same with the state(6) to be set
00:31:05.417  [2024-12-13 19:15:37.213497] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd506f0 is same with the state(6) to be set
00:31:05.417  [2024-12-13 19:15:37.214883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:93120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:05.417  [2024-12-13 19:15:37.214936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.417  [2024-12-13 19:15:37.214957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:93128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:05.417  [2024-12-13 19:15:37.214966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.417  [2024-12-13 19:15:37.214977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:93136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:05.417  [2024-12-13 19:15:37.214985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.417  [2024-12-13 19:15:37.214996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:93144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:05.417  [2024-12-13 19:15:37.215004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.417  [2024-12-13 19:15:37.215014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:93152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:05.417  [2024-12-13 19:15:37.215022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.417  [2024-12-13 19:15:37.215032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:93160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:05.417  [2024-12-13 19:15:37.215040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.417  [2024-12-13 19:15:37.215049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:93168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:05.417  [2024-12-13 19:15:37.215058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.417  [2024-12-13 19:15:37.215067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:93176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:05.417  [2024-12-13 19:15:37.215075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.418  [2024-12-13 19:15:37.215085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:93184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:05.418  [2024-12-13 19:15:37.215093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.418  [2024-12-13 19:15:37.215103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:93192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:05.418  [2024-12-13 19:15:37.215111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.418  [2024-12-13 19:15:37.215136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:93200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:05.418  [2024-12-13 19:15:37.215144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.418  [2024-12-13 19:15:37.215154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:93208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:05.418  [2024-12-13 19:15:37.215178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.418  [2024-12-13 19:15:37.215189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:93216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:05.418  [2024-12-13 19:15:37.215197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.418  [2024-12-13 19:15:37.215208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:93224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:05.418  [2024-12-13 19:15:37.215216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.418  [2024-12-13 19:15:37.215242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:93232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:05.418  [2024-12-13 19:15:37.215251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.418  [2024-12-13 19:15:37.215261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:93240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:05.418  [2024-12-13 19:15:37.215270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.418  [2024-12-13 19:15:37.215292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:92736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:05.418  [2024-12-13 19:15:37.215304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.418  [2024-12-13 19:15:37.215315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:92744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:05.418  [2024-12-13 19:15:37.215325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.418  [2024-12-13 19:15:37.215336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:92752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:05.418  [2024-12-13 19:15:37.215345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.418  [2024-12-13 19:15:37.215357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:92760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:05.418  [2024-12-13 19:15:37.215365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.418  [2024-12-13 19:15:37.215376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:92768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:05.418  [2024-12-13 19:15:37.215385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.418  [2024-12-13 19:15:37.215396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:92776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:05.418  [2024-12-13 19:15:37.215405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.418  [2024-12-13 19:15:37.215416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:92784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:05.418  [2024-12-13 19:15:37.215424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.418  [2024-12-13 19:15:37.215435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:92792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:05.418  [2024-12-13 19:15:37.215443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.418  [2024-12-13 19:15:37.215454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:92800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:05.418  [2024-12-13 19:15:37.215464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.418  [2024-12-13 19:15:37.215475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:92808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:05.418  [2024-12-13 19:15:37.215484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.418  [2024-12-13 19:15:37.215494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:92816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:05.418  [2024-12-13 19:15:37.215503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.418  [2024-12-13 19:15:37.215514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:92824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:05.418  [2024-12-13 19:15:37.215523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.418  [2024-12-13 19:15:37.215534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:92832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:05.418  [2024-12-13 19:15:37.215543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.418  [2024-12-13 19:15:37.215553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:92840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:05.418  [2024-12-13 19:15:37.215562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.418  [2024-12-13 19:15:37.215573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:92848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:05.418  [2024-12-13 19:15:37.215581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.418  [2024-12-13 19:15:37.215592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:92856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:05.418  [2024-12-13 19:15:37.215601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.418  [2024-12-13 19:15:37.215611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:93248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:05.418  [2024-12-13 19:15:37.215636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.418  [2024-12-13 19:15:37.215646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:93256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:05.418  [2024-12-13 19:15:37.215655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.418  [2024-12-13 19:15:37.215665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:93264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:05.418  [2024-12-13 19:15:37.215674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.418  [2024-12-13 19:15:37.215684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:93272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:05.418  [2024-12-13 19:15:37.215693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.418  [2024-12-13 19:15:37.215704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:93280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:05.418  [2024-12-13 19:15:37.215712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.418  [2024-12-13 19:15:37.215723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:93288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:05.418  [2024-12-13 19:15:37.215732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.418  [2024-12-13 19:15:37.215743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:93296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:05.418  [2024-12-13 19:15:37.215752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.418  [2024-12-13 19:15:37.215763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:93304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:05.418  [2024-12-13 19:15:37.215771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.418  [2024-12-13 19:15:37.215782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:93312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:05.418  [2024-12-13 19:15:37.215790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.418  [2024-12-13 19:15:37.215801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:93320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:05.418  [2024-12-13 19:15:37.215809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.418  [2024-12-13 19:15:37.215819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:93328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:05.418  [2024-12-13 19:15:37.215828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.418  [2024-12-13 19:15:37.215838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:93336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:05.418  [2024-12-13 19:15:37.215847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.418  [2024-12-13 19:15:37.215857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:93344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:05.418  [2024-12-13 19:15:37.215866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.418  [2024-12-13 19:15:37.215876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:93352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:05.418  [2024-12-13 19:15:37.215884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.418  [2024-12-13 19:15:37.215895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:93360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:05.418  [2024-12-13 19:15:37.215903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.418  [2024-12-13 19:15:37.215913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:93368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:05.418  [2024-12-13 19:15:37.215922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.418  [2024-12-13 19:15:37.215932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:93376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:05.418  [2024-12-13 19:15:37.215942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.419  [2024-12-13 19:15:37.215960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:93384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:05.419  [2024-12-13 19:15:37.215969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.419  [2024-12-13 19:15:37.215979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:93392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:05.419  [2024-12-13 19:15:37.215988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.419  [2024-12-13 19:15:37.215998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:93400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:05.419  [2024-12-13 19:15:37.216007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.419  [2024-12-13 19:15:37.216017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:93408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:05.419  [2024-12-13 19:15:37.216027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.419  [2024-12-13 19:15:37.216038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:93416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:05.419  [2024-12-13 19:15:37.216047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.419  [2024-12-13 19:15:37.216057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:93424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:05.419  [2024-12-13 19:15:37.216066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.419  [2024-12-13 19:15:37.216077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:93432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:05.419  [2024-12-13 19:15:37.216085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.419  [2024-12-13 19:15:37.216096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:93440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:05.419  [2024-12-13 19:15:37.216104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.419  [2024-12-13 19:15:37.216115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:93448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:05.419  [2024-12-13 19:15:37.216123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.419  [2024-12-13 19:15:37.216134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:93456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:05.419  [2024-12-13 19:15:37.216143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.419  [2024-12-13 19:15:37.216153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:93464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:05.419  [2024-12-13 19:15:37.216162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.419  [2024-12-13 19:15:37.216173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:93472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:05.419  [2024-12-13 19:15:37.216181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.419  [2024-12-13 19:15:37.216192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:93480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:05.419  [2024-12-13 19:15:37.216200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.419  [2024-12-13 19:15:37.216211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:93488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:05.419  [2024-12-13 19:15:37.216219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.419  [2024-12-13 19:15:37.216246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:93496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:05.419  [2024-12-13 19:15:37.216265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.419  [2024-12-13 19:15:37.216276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:92864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:05.419  [2024-12-13 19:15:37.216285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.419  [2024-12-13 19:15:37.216297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:92872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:05.419  [2024-12-13 19:15:37.216306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.419  [2024-12-13 19:15:37.216317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:92880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:05.419  [2024-12-13 19:15:37.216326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.419  [2024-12-13 19:15:37.216337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:92888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:05.419  [2024-12-13 19:15:37.216346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.419  [2024-12-13 19:15:37.216357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:92896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:05.419  [2024-12-13 19:15:37.216365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.419  [2024-12-13 19:15:37.216376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:92904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:05.419  [2024-12-13 19:15:37.216386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.419  [2024-12-13 19:15:37.216396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:92912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:05.419  [2024-12-13 19:15:37.216405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.419  [2024-12-13 19:15:37.216416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:92920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:05.419  [2024-12-13 19:15:37.216424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.419  [2024-12-13 19:15:37.216435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:92928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:05.419  [2024-12-13 19:15:37.216444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.419  [2024-12-13 19:15:37.216455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:92936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:05.419  [2024-12-13 19:15:37.216464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.419  [2024-12-13 19:15:37.216475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:92944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:05.419  [2024-12-13 19:15:37.216483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.419  [2024-12-13 19:15:37.216494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:92952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:05.419  [2024-12-13 19:15:37.216503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.419  [2024-12-13 19:15:37.216513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:92960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:05.419  [2024-12-13 19:15:37.216522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.419  [2024-12-13 19:15:37.216533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:92968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:05.419  [2024-12-13 19:15:37.216542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.419  [2024-12-13 19:15:37.216553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:92976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:05.419  [2024-12-13 19:15:37.216562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.419  [2024-12-13 19:15:37.216572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:92984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:05.419  [2024-12-13 19:15:37.216581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.419  [2024-12-13 19:15:37.216592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:92992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:05.419  [2024-12-13 19:15:37.216601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.419  [2024-12-13 19:15:37.216612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:93000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:05.419  [2024-12-13 19:15:37.216636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.419  [2024-12-13 19:15:37.216647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:93008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:05.419  [2024-12-13 19:15:37.216655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.419  [2024-12-13 19:15:37.216665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:93016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:05.419  [2024-12-13 19:15:37.216674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.419  [2024-12-13 19:15:37.216685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:93024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:05.419  [2024-12-13 19:15:37.216694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.419  [2024-12-13 19:15:37.216705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:93032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:05.419  [2024-12-13 19:15:37.216714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.419  [2024-12-13 19:15:37.216724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:93040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:05.419  [2024-12-13 19:15:37.216733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.419  [2024-12-13 19:15:37.216743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:93048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:05.419  [2024-12-13 19:15:37.216752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.419  [2024-12-13 19:15:37.216762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:93056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:05.419  [2024-12-13 19:15:37.216771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.419  [2024-12-13 19:15:37.216781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:93064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:05.420  [2024-12-13 19:15:37.216790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.420  [2024-12-13 19:15:37.216800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:93072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:05.420  [2024-12-13 19:15:37.216809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.420  [2024-12-13 19:15:37.216819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:93080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:05.420  [2024-12-13 19:15:37.216828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.420  [2024-12-13 19:15:37.216838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:93088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:05.420  [2024-12-13 19:15:37.216847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.420  [2024-12-13 19:15:37.216857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:93096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:05.420  [2024-12-13 19:15:37.216865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.420  [2024-12-13 19:15:37.216876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:93104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:05.420  [2024-12-13 19:15:37.216884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.420  [2024-12-13 19:15:37.216895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:93112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:05.420  [2024-12-13 19:15:37.216904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.420  [2024-12-13 19:15:37.216914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:93504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:05.420  [2024-12-13 19:15:37.216923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.420  [2024-12-13 19:15:37.216933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:93512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:05.420  [2024-12-13 19:15:37.216942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.420  [2024-12-13 19:15:37.216953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:05.420  [2024-12-13 19:15:37.216968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.420  [2024-12-13 19:15:37.216978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:93528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:05.420  [2024-12-13 19:15:37.216988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.420  [2024-12-13 19:15:37.216998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:93536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:05.420  [2024-12-13 19:15:37.217007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.420  [2024-12-13 19:15:37.217017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:93544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:05.420  [2024-12-13 19:15:37.217026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.420  [2024-12-13 19:15:37.217036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:93552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:05.420  [2024-12-13 19:15:37.217045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.420  [2024-12-13 19:15:37.217055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:93560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:05.420  [2024-12-13 19:15:37.217064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.420  [2024-12-13 19:15:37.217074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:93568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:05.420  [2024-12-13 19:15:37.217083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.420  [2024-12-13 19:15:37.217093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:93576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:05.420  [2024-12-13 19:15:37.217101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.420  [2024-12-13 19:15:37.217111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:93584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:05.420  [2024-12-13 19:15:37.217120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.420  [2024-12-13 19:15:37.217131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:93592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:05.420  [2024-12-13 19:15:37.217139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.420  [2024-12-13 19:15:37.217150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:93600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:05.420  [2024-12-13 19:15:37.217158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.420  [2024-12-13 19:15:37.217168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:93608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:05.420  [2024-12-13 19:15:37.217177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.420  [2024-12-13 19:15:37.217187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:93616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:05.420  [2024-12-13 19:15:37.217196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.420  [2024-12-13 19:15:37.217206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:93624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:05.420  [2024-12-13 19:15:37.217215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.420  [2024-12-13 19:15:37.217225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:93632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:05.420  [2024-12-13 19:15:37.217258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.420  [2024-12-13 19:15:37.217270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:93640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:05.420  [2024-12-13 19:15:37.217280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.420  [2024-12-13 19:15:37.217290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:93648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:05.420  [2024-12-13 19:15:37.217304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.420  [2024-12-13 19:15:37.217315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:93656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:05.420  [2024-12-13 19:15:37.217324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.420  [2024-12-13 19:15:37.217335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:93664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:05.420  [2024-12-13 19:15:37.217344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.420  [2024-12-13 19:15:37.217354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:93672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:05.420  [2024-12-13 19:15:37.217363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.420  [2024-12-13 19:15:37.217374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:93680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:05.420  [2024-12-13 19:15:37.217382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.420  [2024-12-13 19:15:37.217393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:93688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:05.420  [2024-12-13 19:15:37.217402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.420  [2024-12-13 19:15:37.217413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:93696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:05.420  [2024-12-13 19:15:37.217421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.420  [2024-12-13 19:15:37.217432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:93704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:05.420  [2024-12-13 19:15:37.217441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.420  [2024-12-13 19:15:37.217451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:93712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:05.420  [2024-12-13 19:15:37.217460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.420  [2024-12-13 19:15:37.217471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:93720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:05.420  [2024-12-13 19:15:37.217480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.420  [2024-12-13 19:15:37.217491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:93728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:05.420  [2024-12-13 19:15:37.217499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.420  [2024-12-13 19:15:37.217510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:93736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:05.420  [2024-12-13 19:15:37.217519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.420  [2024-12-13 19:15:37.217530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:93744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:05.420  [2024-12-13 19:15:37.217539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.420  [2024-12-13 19:15:37.217563] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:05.420  [2024-12-13 19:15:37.217572] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:05.420  [2024-12-13 19:15:37.217580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93752 len:8 PRP1 0x0 PRP2 0x0
00:31:05.420  [2024-12-13 19:15:37.217590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.420  [2024-12-13 19:15:37.217717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:31:05.420  [2024-12-13 19:15:37.217759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.420  [2024-12-13 19:15:37.217771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:31:05.421  [2024-12-13 19:15:37.217785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.421  [2024-12-13 19:15:37.217795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:31:05.421  [2024-12-13 19:15:37.217804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.421  [2024-12-13 19:15:37.217813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:31:05.421  [2024-12-13 19:15:37.217822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.421  [2024-12-13 19:15:37.217832] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11fae90 is same with the state(6) to be set
00:31:05.421  [2024-12-13 19:15:37.218064] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:31:05.421  [2024-12-13 19:15:37.218086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11fae90 (9): Bad file descriptor
00:31:05.421  [2024-12-13 19:15:37.218182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.421  [2024-12-13 19:15:37.218202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fae90 with addr=10.0.0.3, port=4420
00:31:05.421  [2024-12-13 19:15:37.218213] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11fae90 is same with the state(6) to be set
00:31:05.421  [2024-12-13 19:15:37.218254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11fae90 (9): Bad file descriptor
00:31:05.421  [2024-12-13 19:15:37.218273] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state
00:31:05.421  [2024-12-13 19:15:37.218282] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed
00:31:05.421  [2024-12-13 19:15:37.218293] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:31:05.421  [2024-12-13 19:15:37.218304] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed.
00:31:05.421  [2024-12-13 19:15:37.218314] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:31:05.421   19:15:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3
00:31:06.614       5796.00 IOPS,    22.64 MiB/s
[2024-12-13T19:15:38.438Z] [2024-12-13 19:15:38.218414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.614  [2024-12-13 19:15:38.218477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fae90 with addr=10.0.0.3, port=4420
00:31:06.614  [2024-12-13 19:15:38.218491] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11fae90 is same with the state(6) to be set
00:31:06.614  [2024-12-13 19:15:38.218512] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11fae90 (9): Bad file descriptor
00:31:06.614  [2024-12-13 19:15:38.218530] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state
00:31:06.614  [2024-12-13 19:15:38.218540] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed
00:31:06.614  [2024-12-13 19:15:38.218565] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:31:06.614  [2024-12-13 19:15:38.218575] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed.
00:31:06.614  [2024-12-13 19:15:38.218585] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:31:07.548       3864.00 IOPS,    15.09 MiB/s
[2024-12-13T19:15:39.372Z] [2024-12-13 19:15:39.218679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.548  [2024-12-13 19:15:39.218752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fae90 with addr=10.0.0.3, port=4420
00:31:07.548  [2024-12-13 19:15:39.218764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11fae90 is same with the state(6) to be set
00:31:07.549  [2024-12-13 19:15:39.218782] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11fae90 (9): Bad file descriptor
00:31:07.549  [2024-12-13 19:15:39.218796] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state
00:31:07.549  [2024-12-13 19:15:39.218805] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed
00:31:07.549  [2024-12-13 19:15:39.218814] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:31:07.549  [2024-12-13 19:15:39.218823] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed.
00:31:07.549  [2024-12-13 19:15:39.218832] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:31:08.483       2898.00 IOPS,    11.32 MiB/s
[2024-12-13T19:15:40.307Z] [2024-12-13 19:15:40.221970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.483  [2024-12-13 19:15:40.222084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fae90 with addr=10.0.0.3, port=4420
00:31:08.483  [2024-12-13 19:15:40.222099] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11fae90 is same with the state(6) to be set
00:31:08.483  [2024-12-13 19:15:40.222382] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11fae90 (9): Bad file descriptor
00:31:08.483  [2024-12-13 19:15:40.222642] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state
00:31:08.483  [2024-12-13 19:15:40.222662] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed
00:31:08.483  [2024-12-13 19:15:40.222673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:31:08.483  [2024-12-13 19:15:40.222684] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed.
00:31:08.483  [2024-12-13 19:15:40.222696] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:31:08.483   19:15:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:31:08.742  [2024-12-13 19:15:40.489622] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:31:08.742   19:15:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 119421
00:31:09.566       2318.40 IOPS,     9.06 MiB/s
[2024-12-13T19:15:41.390Z] [2024-12-13 19:15:41.245450] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful.
00:31:11.437       3365.00 IOPS,    13.14 MiB/s
[2024-12-13T19:15:44.230Z]      4353.86 IOPS,    17.01 MiB/s
[2024-12-13T19:15:45.169Z]      5085.50 IOPS,    19.87 MiB/s
[2024-12-13T19:15:46.104Z]      5657.78 IOPS,    22.10 MiB/s
[2024-12-13T19:15:46.104Z]      6138.60 IOPS,    23.98 MiB/s
00:31:14.280                                                                                                  Latency(us)
00:31:14.280  
[2024-12-13T19:15:46.104Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:31:14.280  Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:31:14.280  	 Verification LBA range: start 0x0 length 0x4000
00:31:14.280  	 NVMe0n1             :      10.01    6142.45      23.99    4158.92     0.00   12403.36     942.08 3019898.88
00:31:14.280  
[2024-12-13T19:15:46.104Z]  ===================================================================================================================
00:31:14.280  
[2024-12-13T19:15:46.104Z]  Total                       :               6142.45      23.99    4158.92     0.00   12403.36       0.00 3019898.88
00:31:14.280  {
00:31:14.280    "results": [
00:31:14.280      {
00:31:14.280        "job": "NVMe0n1",
00:31:14.280        "core_mask": "0x4",
00:31:14.280        "workload": "verify",
00:31:14.280        "status": "finished",
00:31:14.280        "verify_range": {
00:31:14.280          "start": 0,
00:31:14.280          "length": 16384
00:31:14.280        },
00:31:14.280        "queue_depth": 128,
00:31:14.280        "io_size": 4096,
00:31:14.280        "runtime": 10.007889,
00:31:14.280        "iops": 6142.454217867525,
00:31:14.280        "mibps": 23.99396178854502,
00:31:14.280        "io_failed": 41622,
00:31:14.280        "io_timeout": 0,
00:31:14.280        "avg_latency_us": 12403.355632342631,
00:31:14.280        "min_latency_us": 942.08,
00:31:14.280        "max_latency_us": 3019898.88
00:31:14.280      }
00:31:14.280    ],
00:31:14.280    "core_count": 1
00:31:14.280  }
00:31:14.280   19:15:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 119260
00:31:14.280   19:15:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 119260 ']'
00:31:14.280   19:15:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 119260
00:31:14.280    19:15:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname
00:31:14.539   19:15:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:31:14.539    19:15:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 119260
00:31:14.539   19:15:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:31:14.539   19:15:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:31:14.539  killing process with pid 119260
00:31:14.539   19:15:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 119260'
00:31:14.539   19:15:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 119260
00:31:14.539  Received shutdown signal, test time was about 10.000000 seconds
00:31:14.539  
00:31:14.539                                                                                                  Latency(us)
00:31:14.539  
[2024-12-13T19:15:46.363Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:31:14.539  
[2024-12-13T19:15:46.363Z]  ===================================================================================================================
00:31:14.539  
[2024-12-13T19:15:46.363Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:31:14.539   19:15:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 119260
00:31:14.539   19:15:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
00:31:14.539   19:15:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=119541
00:31:14.539   19:15:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 119541 /var/tmp/bdevperf.sock
00:31:14.539   19:15:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 119541 ']'
00:31:14.539   19:15:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:31:14.539   19:15:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100
00:31:14.539  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:31:14.539   19:15:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:31:14.539   19:15:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable
00:31:14.539   19:15:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:31:14.797  [2024-12-13 19:15:46.366702] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:31:14.797  [2024-12-13 19:15:46.366819] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119541 ]
00:31:14.797  [2024-12-13 19:15:46.507353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:14.797  [2024-12-13 19:15:46.540416] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:31:15.056   19:15:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:31:15.056   19:15:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0
00:31:15.056   19:15:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=119557
00:31:15.056   19:15:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 119541 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt
00:31:15.056   19:15:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
00:31:15.315   19:15:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
00:31:15.573  NVMe0n1
00:31:15.573   19:15:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=119606
00:31:15.573   19:15:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:31:15.573   19:15:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1
00:31:15.832  Running I/O for 10 seconds...
00:31:16.767   19:15:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:31:17.030      20572.00 IOPS,    80.36 MiB/s
00:31:17.030  [2024-12-13 19:15:48.590477] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd530c0 is same with the state(6) to be set
00:31:17.031  [... the same nvmf_tcp_qpair_set_recv_state *ERROR* for tqpair=0xd530c0 repeats ~125 more times between 19:15:48.590533 and 19:15:48.591599 ...]
00:31:17.031  [2024-12-13 19:15:48.592339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:24104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.031  [2024-12-13 19:15:48.592380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.031  [2024-12-13 19:15:48.592401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:41680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.031  [2024-12-13 19:15:48.592412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.031  [2024-12-13 19:15:48.592424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:84240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.031  [2024-12-13 19:15:48.592434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.031  [2024-12-13 19:15:48.592446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:64616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.031  [2024-12-13 19:15:48.592455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.031  [2024-12-13 19:15:48.592466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:109480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.031  [2024-12-13 19:15:48.592475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.031  [2024-12-13 19:15:48.592486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:12928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.031  [2024-12-13 19:15:48.592496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.031  [2024-12-13 19:15:48.592506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.031  [2024-12-13 19:15:48.592515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.031  [2024-12-13 19:15:48.592526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:88424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.031  [2024-12-13 19:15:48.592536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.031  [2024-12-13 19:15:48.592547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:92384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.031  [2024-12-13 19:15:48.592556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.031  [2024-12-13 19:15:48.592568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:44368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.031  [2024-12-13 19:15:48.592576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.031  [2024-12-13 19:15:48.592588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:100352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.031  [2024-12-13 19:15:48.592597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.031  [2024-12-13 19:15:48.592608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:109240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.031  [2024-12-13 19:15:48.592617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.031  [2024-12-13 19:15:48.592643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:88240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.031  [2024-12-13 19:15:48.592652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.032  [2024-12-13 19:15:48.592663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:122632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.032  [2024-12-13 19:15:48.592672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.032  [2024-12-13 19:15:48.592682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:27016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.032  [2024-12-13 19:15:48.592691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.032  [2024-12-13 19:15:48.592702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:4328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.032  [2024-12-13 19:15:48.592711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.032  [2024-12-13 19:15:48.592721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:96120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.032  [2024-12-13 19:15:48.592731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.032  [2024-12-13 19:15:48.592742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:33008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.032  [2024-12-13 19:15:48.592751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.032  [2024-12-13 19:15:48.592762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:117440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.032  [2024-12-13 19:15:48.592771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.032  [2024-12-13 19:15:48.592781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:90616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.032  [2024-12-13 19:15:48.592790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.032  [2024-12-13 19:15:48.592801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:92960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.032  [2024-12-13 19:15:48.592809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.032  [2024-12-13 19:15:48.592821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:2504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.032  [2024-12-13 19:15:48.592830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.032  [2024-12-13 19:15:48.592841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:109664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.032  [2024-12-13 19:15:48.592850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.032  [2024-12-13 19:15:48.592861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:88432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.032  [2024-12-13 19:15:48.592870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.032  [2024-12-13 19:15:48.592880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:54016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.032  [2024-12-13 19:15:48.592889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.032  [2024-12-13 19:15:48.592900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.032  [2024-12-13 19:15:48.592909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.032  [2024-12-13 19:15:48.592920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:118920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.032  [2024-12-13 19:15:48.592929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.032  [2024-12-13 19:15:48.592939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:51280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.032  [2024-12-13 19:15:48.592948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.032  [2024-12-13 19:15:48.592959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:120848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.032  [2024-12-13 19:15:48.592968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.032  [2024-12-13 19:15:48.592978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:122592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.032  [2024-12-13 19:15:48.592987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.032  [2024-12-13 19:15:48.592998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:19152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.032  [2024-12-13 19:15:48.593007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.032  [2024-12-13 19:15:48.593018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:99600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.032  [2024-12-13 19:15:48.593027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.032  [2024-12-13 19:15:48.593038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:15520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.032  [2024-12-13 19:15:48.593053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.032  [2024-12-13 19:15:48.593065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:23592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.032  [2024-12-13 19:15:48.593074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.032  [2024-12-13 19:15:48.593085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:20584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.032  [2024-12-13 19:15:48.593094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.032  [2024-12-13 19:15:48.593105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:7120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.032  [2024-12-13 19:15:48.593114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.032  [2024-12-13 19:15:48.593125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:101536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.032  [2024-12-13 19:15:48.593135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.032  [2024-12-13 19:15:48.593146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:74360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.032  [2024-12-13 19:15:48.593154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.032  [2024-12-13 19:15:48.593165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:42992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.032  [2024-12-13 19:15:48.593174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.032  [2024-12-13 19:15:48.593185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:56824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.032  [2024-12-13 19:15:48.593194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.032  [2024-12-13 19:15:48.593205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:126328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.032  [2024-12-13 19:15:48.593214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.032  [2024-12-13 19:15:48.593224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:101384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.032  [2024-12-13 19:15:48.593263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.032  [2024-12-13 19:15:48.593276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:6536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.032  [2024-12-13 19:15:48.593286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.032  [2024-12-13 19:15:48.593297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:4768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.032  [2024-12-13 19:15:48.593306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.032  [2024-12-13 19:15:48.593318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:69120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.032  [2024-12-13 19:15:48.593327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.032  [2024-12-13 19:15:48.593348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:53416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.032  [2024-12-13 19:15:48.593357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.032  [2024-12-13 19:15:48.593369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:81624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.032  [2024-12-13 19:15:48.593378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.032  [2024-12-13 19:15:48.593389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:9792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.032  [2024-12-13 19:15:48.593398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.032  [2024-12-13 19:15:48.593409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:46760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.032  [2024-12-13 19:15:48.593424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.032  [2024-12-13 19:15:48.593435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:13720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.032  [2024-12-13 19:15:48.593444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.032  [2024-12-13 19:15:48.593456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:78600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.032  [2024-12-13 19:15:48.593465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.032  [2024-12-13 19:15:48.593476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:80616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.032  [2024-12-13 19:15:48.593486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.032  [2024-12-13 19:15:48.593498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:47424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.032  [2024-12-13 19:15:48.593507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.032  [2024-12-13 19:15:48.593519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:73104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.033  [2024-12-13 19:15:48.593529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.033  [2024-12-13 19:15:48.593540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:49840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.033  [2024-12-13 19:15:48.593549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.033  [2024-12-13 19:15:48.593560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:112656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.033  [2024-12-13 19:15:48.593569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.033  [2024-12-13 19:15:48.593580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:49472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.033  [2024-12-13 19:15:48.593589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.033  [2024-12-13 19:15:48.593600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:55520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.033  [2024-12-13 19:15:48.593610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.033  [2024-12-13 19:15:48.593621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:39936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.033  [2024-12-13 19:15:48.593630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.033  [2024-12-13 19:15:48.593641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:129728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.033  [2024-12-13 19:15:48.593665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.033  [2024-12-13 19:15:48.593676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:74632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.033  [2024-12-13 19:15:48.593685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.033  [2024-12-13 19:15:48.593696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:46416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.033  [2024-12-13 19:15:48.593705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.033  [2024-12-13 19:15:48.593716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:97944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.033  [2024-12-13 19:15:48.593734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.033  [2024-12-13 19:15:48.593747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.033  [2024-12-13 19:15:48.593756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.033  [2024-12-13 19:15:48.593767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:61048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.033  [2024-12-13 19:15:48.593780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.033  [2024-12-13 19:15:48.593792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:126488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.033  [2024-12-13 19:15:48.593801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.033  [2024-12-13 19:15:48.593812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:19352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.033  [2024-12-13 19:15:48.593821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.033  [2024-12-13 19:15:48.593831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:61520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.033  [2024-12-13 19:15:48.593840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.033  [2024-12-13 19:15:48.593851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:39408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.033  [2024-12-13 19:15:48.593861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.033  [2024-12-13 19:15:48.593871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:121864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.033  [2024-12-13 19:15:48.593881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.033  [2024-12-13 19:15:48.593891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:5544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.033  [2024-12-13 19:15:48.593900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.033  [2024-12-13 19:15:48.593911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.033  [2024-12-13 19:15:48.593920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.033  [2024-12-13 19:15:48.593931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:104288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.033  [2024-12-13 19:15:48.593940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.033  [2024-12-13 19:15:48.593951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:1376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.033  [2024-12-13 19:15:48.593960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.033  [2024-12-13 19:15:48.593975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:86256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.033  [2024-12-13 19:15:48.593984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.033  [2024-12-13 19:15:48.593995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:20904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.033  [2024-12-13 19:15:48.594004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.033  [2024-12-13 19:15:48.594015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:106920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.033  [2024-12-13 19:15:48.594024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.033  [2024-12-13 19:15:48.594034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:93664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.033  [2024-12-13 19:15:48.594043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.033  [2024-12-13 19:15:48.594054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:58120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.033  [2024-12-13 19:15:48.594063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.033  [2024-12-13 19:15:48.594074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:127936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.033  [2024-12-13 19:15:48.594083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.033  [2024-12-13 19:15:48.594094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:60704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.033  [2024-12-13 19:15:48.594104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.033  [2024-12-13 19:15:48.594115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:108240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.033  [2024-12-13 19:15:48.594124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.033  [2024-12-13 19:15:48.594135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:119584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.033  [2024-12-13 19:15:48.594143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.033  [2024-12-13 19:15:48.594154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:68848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.033  [2024-12-13 19:15:48.594162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.033  [2024-12-13 19:15:48.594173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:71616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.033  [2024-12-13 19:15:48.594182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.033  [2024-12-13 19:15:48.594192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:114416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.033  [2024-12-13 19:15:48.594201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.033  [2024-12-13 19:15:48.594212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:39376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.033  [2024-12-13 19:15:48.594230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.033  [2024-12-13 19:15:48.594243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.033  [2024-12-13 19:15:48.594252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.033  [2024-12-13 19:15:48.594262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:121048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.033  [2024-12-13 19:15:48.594271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.033  [2024-12-13 19:15:48.594282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.033  [2024-12-13 19:15:48.594291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.033  [2024-12-13 19:15:48.594307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:20816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.033  [2024-12-13 19:15:48.594316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.033  [2024-12-13 19:15:48.594326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:92240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.033  [2024-12-13 19:15:48.594336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.033  [2024-12-13 19:15:48.594346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:78168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.033  [2024-12-13 19:15:48.594355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.033  [2024-12-13 19:15:48.594366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:80720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.033  [2024-12-13 19:15:48.594374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.033  [2024-12-13 19:15:48.594385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:99632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.034  [2024-12-13 19:15:48.594394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.034  [2024-12-13 19:15:48.594405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:66048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.034  [2024-12-13 19:15:48.594413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.034  [2024-12-13 19:15:48.594424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:39840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.034  [2024-12-13 19:15:48.594434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.034  [2024-12-13 19:15:48.594445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:90272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.034  [2024-12-13 19:15:48.594453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.034  [2024-12-13 19:15:48.594480] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:17.034  [2024-12-13 19:15:48.594491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90784 len:8 PRP1 0x0 PRP2 0x0
00:31:17.034  [2024-12-13 19:15:48.594500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.034  [2024-12-13 19:15:48.594513] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:17.034  [2024-12-13 19:15:48.594521] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:17.034  [2024-12-13 19:15:48.594529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:115168 len:8 PRP1 0x0 PRP2 0x0
00:31:17.034  [2024-12-13 19:15:48.594537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.034  [2024-12-13 19:15:48.594546] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:17.034  [2024-12-13 19:15:48.594554] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:17.034  [2024-12-13 19:15:48.594562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15256 len:8 PRP1 0x0 PRP2 0x0
00:31:17.034  [2024-12-13 19:15:48.594570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.034  [2024-12-13 19:15:48.594579] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:17.034  [2024-12-13 19:15:48.594586] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:17.034  [2024-12-13 19:15:48.594594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42808 len:8 PRP1 0x0 PRP2 0x0
00:31:17.034  [2024-12-13 19:15:48.594603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.034  [2024-12-13 19:15:48.594612] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:17.034  [2024-12-13 19:15:48.594624] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:17.034  [2024-12-13 19:15:48.594631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18488 len:8 PRP1 0x0 PRP2 0x0
00:31:17.034  [2024-12-13 19:15:48.594640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.034  [2024-12-13 19:15:48.594649] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:17.034  [2024-12-13 19:15:48.594657] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:17.034  [2024-12-13 19:15:48.594664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94576 len:8 PRP1 0x0 PRP2 0x0
00:31:17.034  [2024-12-13 19:15:48.594673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.034  [2024-12-13 19:15:48.594683] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:17.034  [2024-12-13 19:15:48.594690] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:17.034  [2024-12-13 19:15:48.594698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:118432 len:8 PRP1 0x0 PRP2 0x0
00:31:17.034  [2024-12-13 19:15:48.594706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.034  [2024-12-13 19:15:48.594715] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:17.034  [2024-12-13 19:15:48.594723] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:17.034  [2024-12-13 19:15:48.594730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33104 len:8 PRP1 0x0 PRP2 0x0
00:31:17.034  [2024-12-13 19:15:48.594739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.034  [2024-12-13 19:15:48.594748] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:17.034  [2024-12-13 19:15:48.594755] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:17.034  [2024-12-13 19:15:48.594763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87592 len:8 PRP1 0x0 PRP2 0x0
00:31:17.034  [2024-12-13 19:15:48.594771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.034  [2024-12-13 19:15:48.594780] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:17.034  [2024-12-13 19:15:48.594787] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:17.034  [2024-12-13 19:15:48.594795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:53192 len:8 PRP1 0x0 PRP2 0x0
00:31:17.034  [2024-12-13 19:15:48.594803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.034  [2024-12-13 19:15:48.594812] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:17.034  [2024-12-13 19:15:48.594819] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:17.034  [2024-12-13 19:15:48.594826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80800 len:8 PRP1 0x0 PRP2 0x0
00:31:17.034  [2024-12-13 19:15:48.594834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.034  [2024-12-13 19:15:48.594843] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:17.034  [2024-12-13 19:15:48.594850] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:17.034  [2024-12-13 19:15:48.594858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16048 len:8 PRP1 0x0 PRP2 0x0
00:31:17.034  [2024-12-13 19:15:48.594866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.034  [2024-12-13 19:15:48.594876] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:17.034  [2024-12-13 19:15:48.594887] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:17.034  [2024-12-13 19:15:48.594895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89616 len:8 PRP1 0x0 PRP2 0x0
00:31:17.034  [2024-12-13 19:15:48.594904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.034  [2024-12-13 19:15:48.594913] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:17.034  [2024-12-13 19:15:48.594920] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:17.034  [2024-12-13 19:15:48.594928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:115024 len:8 PRP1 0x0 PRP2 0x0
00:31:17.034  [2024-12-13 19:15:48.594937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.034  [2024-12-13 19:15:48.594946] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:17.034  [2024-12-13 19:15:48.594953] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:17.034  [2024-12-13 19:15:48.594961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:760 len:8 PRP1 0x0 PRP2 0x0
00:31:17.034  [2024-12-13 19:15:48.594969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.034  [2024-12-13 19:15:48.594978] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:17.034  [2024-12-13 19:15:48.594985] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:17.034  [2024-12-13 19:15:48.594993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64672 len:8 PRP1 0x0 PRP2 0x0
00:31:17.034  [2024-12-13 19:15:48.595001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.034  [2024-12-13 19:15:48.595010] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:17.034  [2024-12-13 19:15:48.595017] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:17.034  [2024-12-13 19:15:48.595025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78608 len:8 PRP1 0x0 PRP2 0x0
00:31:17.034  [2024-12-13 19:15:48.595033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.034  [2024-12-13 19:15:48.595042] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:17.034  [2024-12-13 19:15:48.595049] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:17.034  [2024-12-13 19:15:48.595056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49176 len:8 PRP1 0x0 PRP2 0x0
00:31:17.034  [2024-12-13 19:15:48.595065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.035  [2024-12-13 19:15:48.595074] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:17.035  [2024-12-13 19:15:48.595081] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:17.035  [2024-12-13 19:15:48.595088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20112 len:8 PRP1 0x0 PRP2 0x0
00:31:17.035  [2024-12-13 19:15:48.595097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.035  [2024-12-13 19:15:48.595106] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:17.035  [2024-12-13 19:15:48.595112] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:17.035  [2024-12-13 19:15:48.595120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21992 len:8 PRP1 0x0 PRP2 0x0
00:31:17.035  [2024-12-13 19:15:48.595129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.035  [2024-12-13 19:15:48.595138] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:17.035  [2024-12-13 19:15:48.595145] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:17.035  [2024-12-13 19:15:48.595153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80344 len:8 PRP1 0x0 PRP2 0x0
00:31:17.035  [2024-12-13 19:15:48.595161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.035  [2024-12-13 19:15:48.595170] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:17.035  [2024-12-13 19:15:48.595177] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:17.035  [2024-12-13 19:15:48.595184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:27552 len:8 PRP1 0x0 PRP2 0x0
00:31:17.035   19:15:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 119606
00:31:17.035  [2024-12-13 19:15:48.612461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.035  [2024-12-13 19:15:48.612514] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:17.035  [2024-12-13 19:15:48.612528] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:17.035  [2024-12-13 19:15:48.612542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:106952 len:8 PRP1 0x0 PRP2 0x0
00:31:17.035  [2024-12-13 19:15:48.612555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.035  [2024-12-13 19:15:48.612569] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:17.035  [2024-12-13 19:15:48.612579] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:17.035  [2024-12-13 19:15:48.612590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94912 len:8 PRP1 0x0 PRP2 0x0
00:31:17.035  [2024-12-13 19:15:48.612602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.035  [2024-12-13 19:15:48.612625] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:17.035  [2024-12-13 19:15:48.612635] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:17.035  [2024-12-13 19:15:48.612645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:47224 len:8 PRP1 0x0 PRP2 0x0
00:31:17.035  [2024-12-13 19:15:48.612657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.035  [2024-12-13 19:15:48.612670] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:17.035  [2024-12-13 19:15:48.612679] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:17.035  [2024-12-13 19:15:48.612690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:30976 len:8 PRP1 0x0 PRP2 0x0
00:31:17.035  [2024-12-13 19:15:48.612702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.035  [2024-12-13 19:15:48.612715] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:17.035  [2024-12-13 19:15:48.612725] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:17.035  [2024-12-13 19:15:48.612736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2440 len:8 PRP1 0x0 PRP2 0x0
00:31:17.035  [2024-12-13 19:15:48.612748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.035  [2024-12-13 19:15:48.612761] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:17.035  [2024-12-13 19:15:48.612770] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:17.035  [2024-12-13 19:15:48.612781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121072 len:8 PRP1 0x0 PRP2 0x0
00:31:17.035  [2024-12-13 19:15:48.612793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.035  [2024-12-13 19:15:48.612806] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:17.035  [2024-12-13 19:15:48.612815] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:17.035  [2024-12-13 19:15:48.612826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:26272 len:8 PRP1 0x0 PRP2 0x0
00:31:17.035  [2024-12-13 19:15:48.612838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.035  [2024-12-13 19:15:48.612851] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:17.035  [2024-12-13 19:15:48.612860] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:17.035  [2024-12-13 19:15:48.612871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:115352 len:8 PRP1 0x0 PRP2 0x0
00:31:17.035  [2024-12-13 19:15:48.612882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.035  [2024-12-13 19:15:48.613098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:31:17.035  [2024-12-13 19:15:48.613134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.035  [2024-12-13 19:15:48.613151] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:31:17.035  [2024-12-13 19:15:48.613165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.035  [2024-12-13 19:15:48.613179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:31:17.035  [2024-12-13 19:15:48.613191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.035  [2024-12-13 19:15:48.613205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:31:17.035  [2024-12-13 19:15:48.613217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.035  [2024-12-13 19:15:48.613261] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1e90 is same with the state(6) to be set
00:31:17.035  [2024-12-13 19:15:48.613588] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:31:17.035  [2024-12-13 19:15:48.613632] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c1e90 (9): Bad file descriptor
00:31:17.035  [2024-12-13 19:15:48.613793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:17.035  [2024-12-13 19:15:48.613823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c1e90 with addr=10.0.0.3, port=4420
00:31:17.035  [2024-12-13 19:15:48.613838] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1e90 is same with the state(6) to be set
00:31:17.035  [2024-12-13 19:15:48.613863] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c1e90 (9): Bad file descriptor
00:31:17.035  [2024-12-13 19:15:48.613885] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state
00:31:17.035  [2024-12-13 19:15:48.613898] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed
00:31:17.035  [2024-12-13 19:15:48.613912] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:31:17.035  [2024-12-13 19:15:48.613926] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed.
00:31:17.035  [2024-12-13 19:15:48.613940] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:31:18.979      12204.50 IOPS,    47.67 MiB/s
[2024-12-13T19:15:50.803Z]      8136.33 IOPS,    31.78 MiB/s
[2024-12-13T19:15:50.803Z] [2024-12-13 19:15:50.614116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:18.979  [2024-12-13 19:15:50.614185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c1e90 with addr=10.0.0.3, port=4420
00:31:18.979  [2024-12-13 19:15:50.614199] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1e90 is same with the state(6) to be set
00:31:18.979  [2024-12-13 19:15:50.614233] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c1e90 (9): Bad file descriptor
00:31:18.979  [2024-12-13 19:15:50.614270] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state
00:31:18.979  [2024-12-13 19:15:50.614280] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed
00:31:18.979  [2024-12-13 19:15:50.614290] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:31:18.979  [2024-12-13 19:15:50.614301] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed.
00:31:18.979  [2024-12-13 19:15:50.614313] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:31:20.852       6102.25 IOPS,    23.84 MiB/s
[2024-12-13T19:15:52.676Z]      4881.80 IOPS,    19.07 MiB/s
[2024-12-13T19:15:52.676Z] [2024-12-13 19:15:52.614461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:20.852  [2024-12-13 19:15:52.614522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c1e90 with addr=10.0.0.3, port=4420
00:31:20.852  [2024-12-13 19:15:52.614537] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1e90 is same with the state(6) to be set
00:31:20.852  [2024-12-13 19:15:52.614560] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c1e90 (9): Bad file descriptor
00:31:20.852  [2024-12-13 19:15:52.614578] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state
00:31:20.852  [2024-12-13 19:15:52.614587] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed
00:31:20.852  [2024-12-13 19:15:52.614598] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:31:20.852  [2024-12-13 19:15:52.614608] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed.
00:31:20.852  [2024-12-13 19:15:52.614618] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:31:22.723       4068.17 IOPS,    15.89 MiB/s
[2024-12-13T19:15:54.805Z]      3487.00 IOPS,    13.62 MiB/s
[2024-12-13T19:15:54.805Z] [2024-12-13 19:15:54.614682] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:31:22.981  [2024-12-13 19:15:54.614733] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state
00:31:22.981  [2024-12-13 19:15:54.614751] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed
00:31:22.982  [2024-12-13 19:15:54.614771] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state
00:31:22.982  [2024-12-13 19:15:54.614782] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed.
00:31:23.916       3051.12 IOPS,    11.92 MiB/s
00:31:23.916                                                                                                  Latency(us)
00:31:23.916  
[2024-12-13T19:15:55.740Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:31:23.916  Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096)
00:31:23.916  	 NVMe0n1             :       8.21    2973.93      11.62      15.60     0.00   42849.22    2844.86 7046430.72
00:31:23.916  
[2024-12-13T19:15:55.740Z]  ===================================================================================================================
00:31:23.916  
[2024-12-13T19:15:55.740Z]  Total                       :               2973.93      11.62      15.60     0.00   42849.22    2844.86 7046430.72
00:31:23.916  {
00:31:23.916    "results": [
00:31:23.916      {
00:31:23.916        "job": "NVMe0n1",
00:31:23.916        "core_mask": "0x4",
00:31:23.916        "workload": "randread",
00:31:23.916        "status": "finished",
00:31:23.916        "queue_depth": 128,
00:31:23.916        "io_size": 4096,
00:31:23.916        "runtime": 8.207667,
00:31:23.916        "iops": 2973.9266956127726,
00:31:23.916        "mibps": 11.616901154737393,
00:31:23.916        "io_failed": 128,
00:31:23.916        "io_timeout": 0,
00:31:23.916        "avg_latency_us": 42849.219847725326,
00:31:23.916        "min_latency_us": 2844.858181818182,
00:31:23.916        "max_latency_us": 7046430.72
00:31:23.916      }
00:31:23.916    ],
00:31:23.916    "core_count": 1
00:31:23.916  }
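The JSON object above is bdevperf's per-job summary for the randread run (the same numbers as the table that precedes it). Purely as an illustration — the file name result.json is assumed here, it is not produced by the test — the headline figures could be pulled out with jq:

    # Hypothetical post-processing of a saved copy of the JSON summary above.
    jq -r '.results[] | "\(.job): \(.iops|floor) IOPS, \(.io_failed) failed I/O, avg \(.avg_latency_us) us"' result.json

For this run that would print roughly "NVMe0n1: 2973 IOPS, 128 failed I/O, avg 42849.2 us", matching the Total line of the table.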
00:31:23.916   19:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:31:23.916  Attaching 5 probes...
00:31:23.916  1435.380136: reset bdev controller NVMe0
00:31:23.916  1435.478223: reconnect bdev controller NVMe0
00:31:23.916  3435.776616: reconnect delay bdev controller NVMe0
00:31:23.916  3435.812014: reconnect bdev controller NVMe0
00:31:23.916  5436.139566: reconnect delay bdev controller NVMe0
00:31:23.916  5436.173691: reconnect bdev controller NVMe0
00:31:23.916  7436.447299: reconnect delay bdev controller NVMe0
00:31:23.916  7436.479416: reconnect bdev controller NVMe0
00:31:23.917    19:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0'
00:31:23.917   19:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 ))
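The two shell-trace lines above are the test's acceptance check: it counts 'reconnect delay bdev controller NVMe0' lines in the bpftrace output and compares the count (three in this run) against two. Only the grep and the comparison are visible in the log, so the failure branch below is an assumption; this is a standalone sketch of that check, using the trace path printed earlier:

    # Sketch of the check at host/timeout.sh@132; the error handling is assumed, not shown in the log.
    trace=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
    delays=$(grep -c 'reconnect delay bdev controller NVMe0' "$trace")
    if (( delays <= 2 )); then
        echo "expected more than two reconnect delays, saw $delays" >&2
        exit 1
    fi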
00:31:23.917   19:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 119557
00:31:23.917   19:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:31:23.917   19:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 119541
00:31:23.917   19:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 119541 ']'
00:31:23.917   19:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 119541
00:31:23.917    19:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname
00:31:23.917   19:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:31:23.917    19:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 119541
00:31:23.917  killing process with pid 119541
00:31:23.917  Received shutdown signal, test time was about 8.276470 seconds
00:31:23.917  
00:31:23.917                                                                                                  Latency(us)
00:31:23.917  
[2024-12-13T19:15:55.741Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:31:23.917  
[2024-12-13T19:15:55.741Z]  ===================================================================================================================
00:31:23.917  
[2024-12-13T19:15:55.741Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:31:23.917   19:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:31:23.917   19:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:31:23.917   19:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 119541'
00:31:23.917   19:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 119541
00:31:23.917   19:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 119541
00:31:24.176   19:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:31:24.435   19:15:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT
00:31:24.435   19:15:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini
00:31:24.435   19:15:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # nvmfcleanup
00:31:24.435   19:15:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync
00:31:24.435   19:15:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:31:24.435   19:15:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e
00:31:24.435   19:15:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20}
00:31:24.435   19:15:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:31:24.435  rmmod nvme_tcp
00:31:24.435  rmmod nvme_fabrics
00:31:24.435  rmmod nvme_keyring
00:31:24.435   19:15:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:31:24.435   19:15:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e
00:31:24.435   19:15:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0
00:31:24.435   19:15:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@517 -- # '[' -n 118980 ']'
00:31:24.435   19:15:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # killprocess 118980
00:31:24.435   19:15:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 118980 ']'
00:31:24.435   19:15:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 118980
00:31:24.435    19:15:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname
00:31:24.693   19:15:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:31:24.693    19:15:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 118980
00:31:24.693  killing process with pid 118980
00:31:24.693   19:15:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:31:24.693   19:15:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:31:24.693   19:15:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 118980'
00:31:24.693   19:15:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 118980
00:31:24.693   19:15:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 118980
00:31:24.693   19:15:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:31:24.693   19:15:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:31:24.693   19:15:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:31:24.693   19:15:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr
00:31:24.693   19:15:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-save
00:31:24.693   19:15:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:31:24.693   19:15:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-restore
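The three common.sh@791 trace lines above are the iptr cleanup helper spelled out: dump the current firewall rules, drop every rule that mentions SPDK_NVMF, and load the filtered set back. Written as the plain pipeline the trace implies (requires root and modifies the live ruleset, so shown only as an illustration):

    # Remove only the SPDK_NVMF-tagged rules, leaving all other iptables rules untouched.
    iptables-save | grep -v SPDK_NVMF | iptables-restore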
00:31:24.693   19:15:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:31:24.693   19:15:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:31:24.693   19:15:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:31:24.693   19:15:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:31:24.952   19:15:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:31:24.952   19:15:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:31:24.952   19:15:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:31:24.952   19:15:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:31:24.952   19:15:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:31:24.952   19:15:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:31:24.952   19:15:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:31:24.952   19:15:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:31:24.952   19:15:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:31:24.952   19:15:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:31:24.952   19:15:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:31:24.952   19:15:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns
00:31:24.952   19:15:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:31:24.952   19:15:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:31:24.952    19:15:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:31:24.952   19:15:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0
00:31:24.952  
00:31:24.952  real	0m46.292s
00:31:24.952  user	2m15.725s
00:31:24.952  sys	0m4.789s
00:31:24.952   19:15:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable
00:31:24.952   19:15:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:31:24.952  ************************************
00:31:24.952  END TEST nvmf_timeout
00:31:24.952  ************************************
00:31:24.952   19:15:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]]
00:31:24.952   19:15:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT
00:31:24.952  
00:31:24.952  real	6m24.686s
00:31:24.952  user	17m39.374s
00:31:24.952  sys	1m13.140s
00:31:24.953   19:15:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable
00:31:24.953   19:15:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:31:24.953  ************************************
00:31:24.953  END TEST nvmf_host
00:31:24.953  ************************************
00:31:25.212   19:15:56 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]]
00:31:25.212   19:15:56 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]]
00:31:25.212   19:15:56 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode
00:31:25.212   19:15:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:31:25.212   19:15:56 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable
00:31:25.212   19:15:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:31:25.212  ************************************
00:31:25.212  START TEST nvmf_target_core_interrupt_mode
00:31:25.212  ************************************
00:31:25.212   19:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode
00:31:25.212  * Looking for test storage...
00:31:25.212  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf
00:31:25.212    19:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:31:25.212     19:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version
00:31:25.212     19:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:31:25.212    19:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:31:25.212    19:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:31:25.212    19:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l
00:31:25.212    19:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l
00:31:25.212    19:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-:
00:31:25.212    19:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1
00:31:25.212    19:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-:
00:31:25.212    19:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2
00:31:25.212    19:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<'
00:31:25.212    19:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2
00:31:25.212    19:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1
00:31:25.212    19:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:31:25.212    19:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in
00:31:25.212    19:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1
00:31:25.212    19:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 ))
00:31:25.212    19:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:31:25.212     19:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1
00:31:25.212     19:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1
00:31:25.212     19:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:31:25.212     19:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1
00:31:25.212    19:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1
00:31:25.212     19:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2
00:31:25.212     19:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2
00:31:25.212     19:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:31:25.212     19:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2
00:31:25.212    19:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2
00:31:25.212    19:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:31:25.212    19:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:31:25.212    19:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0
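The xtrace block above steps through cmp_versions from scripts/common.sh as it decides whether the installed lcov (1.15 here) predates version 2: both version strings are split on '.', '-' and ':' into arrays and compared field by field. The function below is a condensed stand-in for that logic, not the script's actual code (which is only visible through the trace):

    # Rough equivalent of the traced comparison: succeeds when version $1 sorts before $2.
    version_lt() {
        local IFS=.-:
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # versions are equal
    }
    version_lt 1.15 2 && echo "old lcov, enabling the branch/function coverage flags"

Here 1.15 sorts before 2, so the script goes on to set the LCOV_OPTS shown in the following trace lines.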
00:31:25.212    19:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:31:25.213    19:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:31:25.213  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:25.213  		--rc genhtml_branch_coverage=1
00:31:25.213  		--rc genhtml_function_coverage=1
00:31:25.213  		--rc genhtml_legend=1
00:31:25.213  		--rc geninfo_all_blocks=1
00:31:25.213  		--rc geninfo_unexecuted_blocks=1
00:31:25.213  		
00:31:25.213  		'
00:31:25.213    19:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:31:25.213  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:25.213  		--rc genhtml_branch_coverage=1
00:31:25.213  		--rc genhtml_function_coverage=1
00:31:25.213  		--rc genhtml_legend=1
00:31:25.213  		--rc geninfo_all_blocks=1
00:31:25.213  		--rc geninfo_unexecuted_blocks=1
00:31:25.213  		
00:31:25.213  		'
00:31:25.213    19:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:31:25.213  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:25.213  		--rc genhtml_branch_coverage=1
00:31:25.213  		--rc genhtml_function_coverage=1
00:31:25.213  		--rc genhtml_legend=1
00:31:25.213  		--rc geninfo_all_blocks=1
00:31:25.213  		--rc geninfo_unexecuted_blocks=1
00:31:25.213  		
00:31:25.213  		'
00:31:25.213    19:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:31:25.213  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:25.213  		--rc genhtml_branch_coverage=1
00:31:25.213  		--rc genhtml_function_coverage=1
00:31:25.213  		--rc genhtml_legend=1
00:31:25.213  		--rc geninfo_all_blocks=1
00:31:25.213  		--rc geninfo_unexecuted_blocks=1
00:31:25.213  		
00:31:25.213  		'
00:31:25.213    19:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s
00:31:25.213   19:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' Linux = Linux ']'
00:31:25.213   19:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:31:25.213     19:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s
00:31:25.213    19:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:31:25.213    19:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:31:25.213    19:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:31:25.213    19:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:31:25.213    19:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:31:25.213    19:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:31:25.213    19:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:31:25.213    19:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:31:25.213    19:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:31:25.213     19:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:31:25.213    19:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:31:25.213    19:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:31:25.213    19:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:31:25.213    19:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:31:25.213    19:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:31:25.213    19:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:31:25.213    19:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:31:25.213     19:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob
00:31:25.213     19:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:31:25.213     19:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:31:25.213     19:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:31:25.213      19:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:25.213      19:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:25.213      19:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:25.213      19:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH
00:31:25.213      19:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:25.213    19:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0
00:31:25.213    19:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:31:25.213    19:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:31:25.213    19:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:31:25.213    19:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:31:25.213    19:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:31:25.213    19:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:31:25.213    19:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:31:25.213    19:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:31:25.213    19:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:31:25.213    19:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0
00:31:25.213   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT
00:31:25.213   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@")
00:31:25.213   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]]
00:31:25.213   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode
00:31:25.213   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:31:25.213   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:31:25.213   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:31:25.213  ************************************
00:31:25.213  START TEST nvmf_abort
00:31:25.213  ************************************
00:31:25.213   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode
00:31:25.473  * Looking for test storage...
00:31:25.473  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:31:25.473    19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:31:25.473     19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version
00:31:25.473     19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:31:25.473    19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:31:25.473    19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:31:25.473    19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l
00:31:25.473    19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l
00:31:25.473    19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-:
00:31:25.473    19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1
00:31:25.473    19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-:
00:31:25.473    19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2
00:31:25.473    19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<'
00:31:25.473    19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2
00:31:25.473    19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1
00:31:25.473    19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:31:25.473    19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in
00:31:25.473    19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1
00:31:25.473    19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 ))
00:31:25.473    19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:31:25.473     19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1
00:31:25.473     19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1
00:31:25.473     19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:31:25.473     19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1
00:31:25.473    19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1
00:31:25.473     19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2
00:31:25.473     19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2
00:31:25.473     19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:31:25.474     19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2
00:31:25.474    19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2
00:31:25.474    19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:31:25.474    19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:31:25.474    19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0
00:31:25.474    19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:31:25.474    19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:31:25.474  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:25.474  		--rc genhtml_branch_coverage=1
00:31:25.474  		--rc genhtml_function_coverage=1
00:31:25.474  		--rc genhtml_legend=1
00:31:25.474  		--rc geninfo_all_blocks=1
00:31:25.474  		--rc geninfo_unexecuted_blocks=1
00:31:25.474  		
00:31:25.474  		'
00:31:25.474    19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:31:25.474  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:25.474  		--rc genhtml_branch_coverage=1
00:31:25.474  		--rc genhtml_function_coverage=1
00:31:25.474  		--rc genhtml_legend=1
00:31:25.474  		--rc geninfo_all_blocks=1
00:31:25.474  		--rc geninfo_unexecuted_blocks=1
00:31:25.474  		
00:31:25.474  		'
00:31:25.474    19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:31:25.474  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:25.474  		--rc genhtml_branch_coverage=1
00:31:25.474  		--rc genhtml_function_coverage=1
00:31:25.474  		--rc genhtml_legend=1
00:31:25.474  		--rc geninfo_all_blocks=1
00:31:25.474  		--rc geninfo_unexecuted_blocks=1
00:31:25.474  		
00:31:25.474  		'
00:31:25.474    19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:31:25.474  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:25.474  		--rc genhtml_branch_coverage=1
00:31:25.474  		--rc genhtml_function_coverage=1
00:31:25.474  		--rc genhtml_legend=1
00:31:25.474  		--rc geninfo_all_blocks=1
00:31:25.474  		--rc geninfo_unexecuted_blocks=1
00:31:25.474  		
00:31:25.474  		'
00:31:25.474   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:31:25.474     19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s
00:31:25.474    19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:31:25.474    19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:31:25.474    19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:31:25.474    19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:31:25.474    19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:31:25.474    19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:31:25.474    19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:31:25.474    19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:31:25.474    19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:31:25.474     19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:31:25.474    19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:31:25.474    19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:31:25.474    19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:31:25.474    19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:31:25.474    19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:31:25.474    19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:31:25.474    19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:31:25.474     19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob
00:31:25.474     19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:31:25.474     19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:31:25.474     19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:31:25.474      19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:25.474      19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:25.474      19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:25.474      19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH
00:31:25.474      19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:25.474    19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0
00:31:25.474    19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:31:25.474    19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:31:25.474    19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:31:25.474    19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:31:25.474    19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:31:25.474    19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:31:25.474    19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:31:25.474    19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:31:25.474    19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:31:25.474    19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0
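A minimal bash sketch of what the build_nvmf_app_args trace above produces in this run. The leading binary path is inferred from the launch line later in this log, and NVMF_APP_SHM_ID=0 with an empty NO_HUGE array are read off the trace; this is illustrative, not the harness code.

    NVMF_APP=(/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt)   # assumed front of the array
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)                  # shm id 0 and full trace mask, as traced
    NVMF_APP+=("${NO_HUGE[@]}")                                  # empty in this run, so nothing is added
    NVMF_APP+=(--interrupt-mode)                                 # taken because the '[' 1 -eq 1 ']' check above is true
    # Matches the command launched later in this log: nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE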
00:31:25.474   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64
00:31:25.474   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096
00:31:25.474   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit
00:31:25.474   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:31:25.474   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:31:25.474   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs
00:31:25.474   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no
00:31:25.474   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns
00:31:25.474   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:31:25.474   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:31:25.474    19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:31:25.474   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:31:25.474   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:31:25.474   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:31:25.474   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:31:25.474   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:31:25.474   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@460 -- # nvmf_veth_init
00:31:25.474   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:31:25.474   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:31:25.474   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:31:25.474   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:31:25.474   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:31:25.474   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:31:25.474   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:31:25.474   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:31:25.474   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:31:25.475   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:31:25.475   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:31:25.475   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:31:25.475   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:31:25.475   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:31:25.475   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:31:25.475   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:31:25.475   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:31:25.475  Cannot find device "nvmf_init_br"
00:31:25.475   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@162 -- # true
00:31:25.475   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:31:25.475  Cannot find device "nvmf_init_br2"
00:31:25.475   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@163 -- # true
00:31:25.475   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:31:25.475  Cannot find device "nvmf_tgt_br"
00:31:25.475   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@164 -- # true
00:31:25.475   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:31:25.475  Cannot find device "nvmf_tgt_br2"
00:31:25.475   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@165 -- # true
00:31:25.475   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:31:25.734  Cannot find device "nvmf_init_br"
00:31:25.734   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@166 -- # true
00:31:25.734   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:31:25.734  Cannot find device "nvmf_init_br2"
00:31:25.734   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@167 -- # true
00:31:25.734   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:31:25.734  Cannot find device "nvmf_tgt_br"
00:31:25.734   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@168 -- # true
00:31:25.734   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:31:25.734  Cannot find device "nvmf_tgt_br2"
00:31:25.734   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@169 -- # true
00:31:25.734   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:31:25.734  Cannot find device "nvmf_br"
00:31:25.734   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@170 -- # true
00:31:25.734   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:31:25.734  Cannot find device "nvmf_init_if"
00:31:25.734   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@171 -- # true
00:31:25.734   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:31:25.734  Cannot find device "nvmf_init_if2"
00:31:25.734   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@172 -- # true
00:31:25.734   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:31:25.734  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:31:25.734   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@173 -- # true
00:31:25.734   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:31:25.734  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:31:25.734   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@174 -- # true
00:31:25.734   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:31:25.734   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:31:25.734   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:31:25.734   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:31:25.734   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:31:25.734   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:31:25.734   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:31:25.734   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:31:25.734   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:31:25.734   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:31:25.734   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:31:25.734   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:31:25.734   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:31:25.734   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:31:25.734   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:31:25.734   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:31:25.734   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:31:25.734   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:31:25.734   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:31:25.734   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:31:25.734   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:31:25.734   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:31:25.734   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:31:25.993   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:31:25.993   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:31:25.993   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:31:25.993   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:31:25.993   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:31:25.993   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:31:25.993   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:31:25.993   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:31:25.993   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
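Condensed from the nvmf_veth_init trace above, a sketch of the topology this run builds: two initiator-side and two target-side veth pairs, the target ends moved into nvmf_tgt_ns_spdk, every *_br end enslaved to the nvmf_br bridge, and iptables openings for NVMe/TCP on port 4420. The commands are the traced ones; only the grouping and comments are added, and the SPDK_NVMF comment tags the harness attaches for later cleanup are omitted for brevity.

    # Namespace for the SPDK target plus four veth pairs.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    # Target ends live inside the namespace; initiator ends stay in the root namespace.
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    # Bring everything up and tie the *_br ends together with a Linux bridge.
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br  master nvmf_br
    ip link set nvmf_init_br2 master nvmf_br
    ip link set nvmf_tgt_br   master nvmf_br
    ip link set nvmf_tgt_br2  master nvmf_br
    # Accept NVMe/TCP from both initiator interfaces and let traffic hairpin across the bridge.
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The four pings that follow confirm reachability in both directions (10.0.0.1/2 to 10.0.0.3/4 and back) before the target is started.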
00:31:25.993   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:31:25.993  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:31:25.993  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms
00:31:25.993  
00:31:25.993  --- 10.0.0.3 ping statistics ---
00:31:25.993  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:25.993  rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms
00:31:25.993   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:31:25.993  PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:31:25.993  64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms
00:31:25.993  
00:31:25.993  --- 10.0.0.4 ping statistics ---
00:31:25.993  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:25.993  rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms
00:31:25.993   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:31:25.993  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:31:25.993  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms
00:31:25.993  
00:31:25.993  --- 10.0.0.1 ping statistics ---
00:31:25.993  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:25.993  rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms
00:31:25.993   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:31:25.993  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:31:25.993  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms
00:31:25.993  
00:31:25.993  --- 10.0.0.2 ping statistics ---
00:31:25.993  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:25.993  rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms
00:31:25.993   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:31:25.993   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@461 -- # return 0
00:31:25.993   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:31:25.993   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:31:25.993   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:31:25.993   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:31:25.993   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:31:25.993   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:31:25.993   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:31:25.993   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE
00:31:25.993   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:31:25.993   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable
00:31:25.993   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:31:25.993   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=120019
00:31:25.993   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE
00:31:25.993   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 120019
00:31:25.993   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 120019 ']'
00:31:25.993   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:31:25.993   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100
00:31:25.993  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:31:25.993   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:31:25.993   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable
00:31:25.993   19:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:31:25.993  [2024-12-13 19:15:57.740250] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:31:25.993  [2024-12-13 19:15:57.741543] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:31:25.993  [2024-12-13 19:15:57.741620] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:31:26.252  [2024-12-13 19:15:57.898603] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:31:26.252  [2024-12-13 19:15:57.940387] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:31:26.252  [2024-12-13 19:15:57.940455] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:31:26.252  [2024-12-13 19:15:57.940472] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:31:26.252  [2024-12-13 19:15:57.940483] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:31:26.252  [2024-12-13 19:15:57.940494] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:31:26.252  [2024-12-13 19:15:57.941787] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:31:26.252  [2024-12-13 19:15:57.941876] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:31:26.252  [2024-12-13 19:15:57.941891] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:31:26.252  [2024-12-13 19:15:58.044557] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:31:26.252  [2024-12-13 19:15:58.044699] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:31:26.252  [2024-12-13 19:15:58.044818] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:31:26.252  [2024-12-13 19:15:58.044949] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
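The nvmfappstart step above reduces to launching nvmf_tgt inside the target namespace and waiting for its RPC socket to answer. A sketch with the same arguments as this run; the polling loop is illustrative, the harness's waitforlisten does more retrying and error reporting.

    # Start the target on cores 1-3 (mask 0xE), interrupt mode, full trace mask.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
    nvmfpid=$!
    # Poll the default RPC socket until the app is ready (illustrative, not the harness code).
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        kill -0 "$nvmfpid" || exit 1   # give up if the target died during startup
        sleep 0.5
    done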
00:31:26.512   19:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:31:26.512   19:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0
00:31:26.512   19:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:31:26.512   19:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable
00:31:26.512   19:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:31:26.512   19:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:31:26.512   19:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256
00:31:26.512   19:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:26.512   19:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:31:26.512  [2024-12-13 19:15:58.123480] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:31:26.512   19:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:26.512   19:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0
00:31:26.512   19:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:26.512   19:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:31:26.512  Malloc0
00:31:26.512   19:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:26.512   19:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:31:26.512   19:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:26.512   19:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:31:26.512  Delay0
00:31:26.512   19:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:26.512   19:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:31:26.512   19:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:26.512   19:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:31:26.512   19:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:26.512   19:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
00:31:26.512   19:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:26.512   19:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:31:26.512   19:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:26.512   19:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
00:31:26.512   19:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:26.512   19:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:31:26.512  [2024-12-13 19:15:58.204071] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:31:26.512   19:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:26.513   19:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
00:31:26.513   19:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:26.513   19:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:31:26.513   19:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
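The rpc_cmd calls above amount to the configuration below, written as direct scripts/rpc.py invocations with the same arguments (rpc_cmd in this harness forwards to rpc.py over /var/tmp/spdk.sock; that framing is the only assumption). The Delay0 bdev layered on Malloc0 adds roughly one second of artificial latency per I/O (the -r/-t/-w/-n values are in microseconds), which keeps requests in flight long enough for the abort workload to cancel them.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
    $rpc bdev_malloc_create 64 4096 -b Malloc0            # 64 MiB RAM-backed bdev, 4096-byte blocks
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420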
00:31:26.513   19:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128
00:31:26.776  [2024-12-13 19:15:58.394943] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:31:28.678  Initializing NVMe Controllers
00:31:28.678  Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0
00:31:28.678  controller IO queue size 128 less than required
00:31:28.678  Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver.
00:31:28.678  Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0
00:31:28.678  Initialization complete. Launching workers.
00:31:28.678  NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 31262
00:31:28.678  CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 31319, failed to submit 66
00:31:28.678  	 success 31262, unsuccessful 57, failed 0
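The counters above come from SPDK's abort example, invoked as traced at target/abort.sh@30: one second of I/O (-t 1) at queue depth 128 (-q 128) on core 0 (-c 0x1) against the listener just created. Roughly, "success" counts abort commands that cancelled an outstanding I/O and "unsuccessful" counts aborts that completed with nothing left to cancel; that reading is inferred from the output, not from the example's source.

    /home/vagrant/spdk_repo/spdk/build/examples/abort \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128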
00:31:28.678   19:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:31:28.678   19:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:28.678   19:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:31:28.678   19:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:28.678   19:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT
00:31:28.678   19:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini
00:31:28.678   19:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup
00:31:28.678   19:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync
00:31:28.678   19:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:31:28.678   19:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e
00:31:28.678   19:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20}
00:31:28.678   19:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:31:28.678  rmmod nvme_tcp
00:31:28.937  rmmod nvme_fabrics
00:31:28.937  rmmod nvme_keyring
00:31:28.937   19:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:31:28.937   19:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e
00:31:28.937   19:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0
00:31:28.937   19:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 120019 ']'
00:31:28.937   19:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 120019
00:31:28.937   19:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 120019 ']'
00:31:28.937   19:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 120019
00:31:28.937    19:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname
00:31:28.937   19:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:31:28.937    19:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 120019
00:31:28.937   19:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:31:28.937   19:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:31:28.937   19:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 120019'
00:31:28.937  killing process with pid 120019
00:31:28.937   19:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 120019
00:31:28.937   19:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 120019
00:31:29.195   19:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:31:29.195   19:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:31:29.195   19:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:31:29.195   19:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr
00:31:29.195   19:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save
00:31:29.195   19:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:31:29.195   19:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore
00:31:29.195   19:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:31:29.195   19:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:31:29.195   19:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:31:29.195   19:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:31:29.195   19:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:31:29.195   19:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:31:29.195   19:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:31:29.195   19:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:31:29.195   19:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:31:29.195   19:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:31:29.195   19:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:31:29.195   19:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:31:29.195   19:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:31:29.195   19:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:31:29.455   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:31:29.455   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@246 -- # remove_spdk_ns
00:31:29.455   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:31:29.455   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:31:29.455    19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:31:29.455   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@300 -- # return 0
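nvmf_tcp_fini above undoes the earlier setup: restore iptables without the SPDK_NVMF-tagged rules, tear down the bridge and veth pairs, and remove the namespace. A condensed sketch of the traced commands; the final netns removal is what remove_spdk_ns is assumed to do here.

    # Drop only the rules this test added (they all carry an SPDK_NVMF comment tag).
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # Detach the bridge ports, bring them down, then delete the bridge and host-side veths.
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster
        ip link set "$dev" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    # Interfaces inside the namespace, then the namespace itself.
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk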
00:31:29.455  
00:31:29.455  real	0m4.058s
00:31:29.455  user	0m9.027s
00:31:29.455  sys	0m1.511s
00:31:29.455  ************************************
00:31:29.455  END TEST nvmf_abort
00:31:29.455  ************************************
00:31:29.455   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable
00:31:29.455   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:31:29.455   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode
00:31:29.455   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:31:29.455   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:31:29.455   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:31:29.455  ************************************
00:31:29.455  START TEST nvmf_ns_hotplug_stress
00:31:29.455  ************************************
00:31:29.455   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode
00:31:29.455  * Looking for test storage...
00:31:29.455  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:31:29.455    19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:31:29.455     19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version
00:31:29.455     19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:31:29.455    19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:31:29.455    19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:31:29.455    19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l
00:31:29.455    19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l
00:31:29.455    19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-:
00:31:29.455    19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1
00:31:29.455    19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-:
00:31:29.455    19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2
00:31:29.455    19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<'
00:31:29.455    19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2
00:31:29.455    19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1
00:31:29.455    19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:31:29.455    19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in
00:31:29.455    19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1
00:31:29.455    19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 ))
00:31:29.455    19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:31:29.455     19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1
00:31:29.715     19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1
00:31:29.715     19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:31:29.715     19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1
00:31:29.715    19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1
00:31:29.715     19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2
00:31:29.715     19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2
00:31:29.715     19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:31:29.715     19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2
00:31:29.715    19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2
00:31:29.715    19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:31:29.715    19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:31:29.715    19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0
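The lt 1.15 2 trace above is the harness checking whether the installed lcov predates 2.x before choosing which --rc options to export. A compact illustrative reimplementation of that field-by-field comparison (split on '.', '-' and ':', compare numerically, missing fields count as 0); scripts/common.sh structures it differently, so treat this as a sketch of the logic only.

    lt() {   # succeeds when version $1 is strictly older than version $2
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < len; v++)); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal versions are not "less than"
    }
    lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov is older than 2.x"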
00:31:29.715    19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:31:29.715    19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:31:29.715  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:29.715  		--rc genhtml_branch_coverage=1
00:31:29.715  		--rc genhtml_function_coverage=1
00:31:29.715  		--rc genhtml_legend=1
00:31:29.715  		--rc geninfo_all_blocks=1
00:31:29.715  		--rc geninfo_unexecuted_blocks=1
00:31:29.715  		
00:31:29.715  		'
00:31:29.715    19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:31:29.715  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:29.715  		--rc genhtml_branch_coverage=1
00:31:29.715  		--rc genhtml_function_coverage=1
00:31:29.715  		--rc genhtml_legend=1
00:31:29.715  		--rc geninfo_all_blocks=1
00:31:29.715  		--rc geninfo_unexecuted_blocks=1
00:31:29.715  		
00:31:29.715  		'
00:31:29.715    19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:31:29.715  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:29.715  		--rc genhtml_branch_coverage=1
00:31:29.715  		--rc genhtml_function_coverage=1
00:31:29.715  		--rc genhtml_legend=1
00:31:29.715  		--rc geninfo_all_blocks=1
00:31:29.715  		--rc geninfo_unexecuted_blocks=1
00:31:29.715  		
00:31:29.715  		'
00:31:29.715    19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:31:29.715  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:29.715  		--rc genhtml_branch_coverage=1
00:31:29.715  		--rc genhtml_function_coverage=1
00:31:29.715  		--rc genhtml_legend=1
00:31:29.715  		--rc geninfo_all_blocks=1
00:31:29.715  		--rc geninfo_unexecuted_blocks=1
00:31:29.715  		
00:31:29.715  		'
00:31:29.715   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:31:29.715     19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s
00:31:29.715    19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:31:29.715    19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:31:29.716    19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:31:29.716    19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:31:29.716    19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:31:29.716    19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:31:29.716    19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:31:29.716    19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:31:29.716    19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:31:29.716     19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:31:29.716    19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:31:29.716    19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:31:29.716    19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:31:29.716    19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:31:29.716    19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:31:29.716    19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:31:29.716    19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:31:29.716     19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob
00:31:29.716     19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:31:29.716     19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:31:29.716     19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:31:29.716      19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:29.716      19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:29.716      19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:29.716      19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH
00:31:29.716      19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:29.716    19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0
00:31:29.716    19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:31:29.716    19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:31:29.716    19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:31:29.716    19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:31:29.716    19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:31:29.716    19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:31:29.716    19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:31:29.716    19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:31:29.716    19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:31:29.716    19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0
00:31:29.716   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:31:29.716   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit
00:31:29.716   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:31:29.716   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:31:29.716   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs
00:31:29.716   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no
00:31:29.716   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns
00:31:29.716   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:31:29.716   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:31:29.716    19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:31:29.716   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:31:29.716   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:31:29.716   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:31:29.716   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:31:29.716   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:31:29.716   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@460 -- # nvmf_veth_init
00:31:29.716   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:31:29.716   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:31:29.716   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:31:29.716   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:31:29.716   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:31:29.716   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:31:29.716   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:31:29.716   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:31:29.716   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:31:29.716   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:31:29.716   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:31:29.716   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:31:29.716   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:31:29.716   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:31:29.716   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:31:29.716   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:31:29.716   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:31:29.716  Cannot find device "nvmf_init_br"
00:31:29.716   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true
00:31:29.716   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:31:29.716  Cannot find device "nvmf_init_br2"
00:31:29.716   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true
00:31:29.716   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:31:29.716  Cannot find device "nvmf_tgt_br"
00:31:29.716   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # true
00:31:29.716   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:31:29.716  Cannot find device "nvmf_tgt_br2"
00:31:29.716   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # true
00:31:29.716   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:31:29.716  Cannot find device "nvmf_init_br"
00:31:29.716   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # true
00:31:29.716   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:31:29.716  Cannot find device "nvmf_init_br2"
00:31:29.716   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # true
00:31:29.716   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:31:29.716  Cannot find device "nvmf_tgt_br"
00:31:29.717   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # true
00:31:29.717   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:31:29.717  Cannot find device "nvmf_tgt_br2"
00:31:29.717   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # true
00:31:29.717   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:31:29.717  Cannot find device "nvmf_br"
00:31:29.717   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # true
00:31:29.717   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:31:29.717  Cannot find device "nvmf_init_if"
00:31:29.717   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # true
00:31:29.717   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:31:29.717  Cannot find device "nvmf_init_if2"
00:31:29.717   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # true
00:31:29.717   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:31:29.717  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:31:29.717   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # true
00:31:29.717   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:31:29.717  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:31:29.717   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # true
00:31:29.717   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:31:29.717   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:31:29.717   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:31:29.717   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:31:29.717   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:31:29.717   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:31:29.717   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:31:29.717   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:31:29.717   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:31:29.717   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:31:29.717   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:31:29.976   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:31:29.976   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:31:29.976   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:31:29.976   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:31:29.976   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:31:29.976   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:31:29.976   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:31:29.976   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:31:29.976   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:31:29.976   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:31:29.976   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:31:29.976   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:31:29.976   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:31:29.976   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:31:29.976   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
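Up to this point the trace has wired together the test's private topology: veth pairs for the initiator and target sides, the *_if ends carrying 10.0.0.1-10.0.0.4 (the target ends inside the nvmf_tgt_ns_spdk namespace), and every *_br peer enslaved to the nvmf_br bridge. A minimal stand-alone sketch of the same wiring for one initiator/target pair, using the names and addresses from the trace (needs root; a sketch, not the SPDK helper itself):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # target end lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                       # bridge the host-side peers together
    ip link set nvmf_tgt_br master nvmf_br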
00:31:29.976   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:31:29.976   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:31:29.976   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:31:29.976   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:31:29.976   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:31:29.976   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
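The ipts calls above are a thin wrapper around iptables that tags every rule with an SPDK_NVMF comment recording the original arguments, so teardown can later find and delete exactly the rules this run added. A hedged approximation of that wrapper (the real definition lives in nvmf/common.sh; this only mirrors the expansion visible in the trace):

    ipts() {
        # run iptables and record the arguments in a comment for later cleanup
        iptables "$@" -m comment --comment "SPDK_NVMF:$*"
    }
    ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT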
00:31:29.976   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:31:29.976  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:31:29.976  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.097 ms
00:31:29.976  
00:31:29.976  --- 10.0.0.3 ping statistics ---
00:31:29.976  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:29.976  rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms
00:31:29.976   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:31:29.976  PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:31:29.976  64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms
00:31:29.976  
00:31:29.976  --- 10.0.0.4 ping statistics ---
00:31:29.976  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:29.976  rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms
00:31:29.976   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:31:29.976  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:31:29.976  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms
00:31:29.976  
00:31:29.976  --- 10.0.0.1 ping statistics ---
00:31:29.976  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:29.976  rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms
00:31:29.976   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:31:29.976  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:31:29.976  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.119 ms
00:31:29.976  
00:31:29.976  --- 10.0.0.2 ping statistics ---
00:31:29.976  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:29.976  rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms
00:31:29.976   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:31:29.976   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@461 -- # return 0
00:31:29.976   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:31:29.976   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:31:29.976   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:31:29.976   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:31:29.976   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:31:29.976   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:31:29.976   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:31:29.976   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE
00:31:29.976   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:31:29.976   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable
00:31:29.976   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:31:29.976   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=120303
00:31:29.976   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE
00:31:29.976   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 120303
00:31:29.976   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 120303 ']'
00:31:29.976   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:31:29.976   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100
00:31:29.976  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:31:29.976   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:31:29.976   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable
00:31:29.976   19:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:31:29.976  [2024-12-13 19:16:01.752340] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:31:29.976  [2024-12-13 19:16:01.753294] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:31:29.976  [2024-12-13 19:16:01.753363] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:31:30.235  [2024-12-13 19:16:01.898137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:31:30.235  [2024-12-13 19:16:01.933679] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:31:30.235  [2024-12-13 19:16:01.933800] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:31:30.235  [2024-12-13 19:16:01.933829] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:31:30.235  [2024-12-13 19:16:01.933842] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:31:30.235  [2024-12-13 19:16:01.933850] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:31:30.235  [2024-12-13 19:16:01.935059] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:31:30.235  [2024-12-13 19:16:01.935161] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:31:30.235  [2024-12-13 19:16:01.935184] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:31:30.235  [2024-12-13 19:16:02.029163] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:31:30.235  [2024-12-13 19:16:02.029620] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:31:30.235  [2024-12-13 19:16:02.029662] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
00:31:30.235  [2024-12-13 19:16:02.030555] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:31:30.493   19:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:31:30.493   19:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0
00:31:30.493   19:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:31:30.493   19:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable
00:31:30.493   19:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:31:30.493   19:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:31:30.493   19:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000
00:31:30.493   19:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:31:30.751  [2024-12-13 19:16:02.400319] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:31:30.751   19:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:31:31.009   19:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:31:31.268  [2024-12-13 19:16:02.912723] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:31:31.268   19:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
00:31:31.526   19:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0
00:31:31.784  Malloc0
00:31:31.784   19:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:31:32.043  Delay0
00:31:32.043   19:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:32.043   19:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512
00:31:32.302  NULL1
00:31:32.302   19:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
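Between 19:16:02 and 19:16:04 the target is configured entirely over JSON-RPC: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with listeners on 10.0.0.3:4420 (plus discovery), a Malloc0 bdev wrapped in a Delay0 delay bdev, and a NULL1 null bdev, with Delay0 and NULL1 attached as the subsystem's first two namespaces. Condensed, the same sequence is (rpc.py path and arguments exactly as in the trace; reading the size arguments as MiB is an assumption):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc_py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
    $rpc_py bdev_malloc_create 32 512 -b Malloc0          # size 32 (assumed MiB), 512-byte blocks
    $rpc_py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    $rpc_py bdev_null_create NULL1 1000 512               # size 1000 (assumed MiB), 512-byte blocks
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1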
00:31:32.560   19:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000
00:31:32.560   19:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=120421
00:31:32.560   19:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 120421
00:31:32.560   19:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:33.936  Read completed with error (sct=0, sc=11)
00:31:33.936   19:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:33.936  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:31:33.936  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:31:33.936  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:31:33.936  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:31:33.936  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:31:33.936  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:31:33.936  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:31:34.195  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:31:34.195  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:31:34.195  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:31:34.195   19:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001
00:31:34.195   19:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001
00:31:34.463  true
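From here until the perf job exits, the trace repeats one iteration of the hotplug loop: check that spdk_nvme_perf (PID 120421) is still alive, remove namespace 1 out from under it, re-add Delay0, and grow NULL1 by one size step. A hedged reconstruction of the loop body (the real script is test/nvmf/target/ns_hotplug_stress.sh; rpc_py and PERF_PID as in the trace):

    null_size=1000
    while kill -0 "$PERF_PID"; do                                   # stop once the perf process is gone
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))
        $rpc_py bdev_null_resize NULL1 $null_size                   # prints "true" on success
    done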
00:31:34.463   19:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 120421
00:31:34.463   19:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:35.401  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:31:35.401   19:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:35.401  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:31:35.401  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:31:35.401  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:31:35.401  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:31:35.401  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:31:35.401  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:31:35.401  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:31:35.401  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:31:35.401   19:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002
00:31:35.401   19:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002
00:31:35.660  true
00:31:35.660   19:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 120421
00:31:35.660   19:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:36.597  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:31:36.597   19:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:36.597  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:31:36.597  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:31:36.597  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:31:36.597  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:31:36.597  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:31:36.856  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:31:36.856   19:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003
00:31:36.856   19:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003
00:31:37.128  true
00:31:37.128   19:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 120421
00:31:37.128   19:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:37.715   19:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:37.715  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:31:37.973   19:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004
00:31:37.973   19:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004
00:31:38.232  true
00:31:38.232   19:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 120421
00:31:38.232   19:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:38.491   19:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:38.749   19:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005
00:31:38.749   19:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005
00:31:39.009  true
00:31:39.009   19:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 120421
00:31:39.009   19:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:39.945   19:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:40.203   19:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006
00:31:40.203   19:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006
00:31:40.203  true
00:31:40.203   19:16:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 120421
00:31:40.203   19:16:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:40.462   19:16:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:40.720   19:16:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007
00:31:40.720   19:16:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007
00:31:40.979  true
00:31:40.979   19:16:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 120421
00:31:40.979   19:16:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:41.914   19:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:42.173   19:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008
00:31:42.173   19:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008
00:31:42.431  true
00:31:42.431   19:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 120421
00:31:42.431   19:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:42.690   19:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:42.948   19:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009
00:31:42.948   19:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009
00:31:43.207  true
00:31:43.207   19:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 120421
00:31:43.207   19:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:43.469   19:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:43.727   19:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010
00:31:43.727   19:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010
00:31:43.727  true
00:31:43.727   19:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 120421
00:31:43.727   19:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:45.103   19:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:45.103   19:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011
00:31:45.103   19:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011
00:31:45.361  true
00:31:45.361   19:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 120421
00:31:45.361   19:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:45.620   19:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:45.879   19:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012
00:31:45.879   19:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012
00:31:46.137  true
00:31:46.137   19:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 120421
00:31:46.137   19:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:46.395   19:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:46.653   19:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013
00:31:46.653   19:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013
00:31:46.911  true
00:31:46.911   19:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 120421
00:31:46.911   19:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:47.846   19:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:48.104   19:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014
00:31:48.105   19:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014
00:31:48.363  true
00:31:48.363   19:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 120421
00:31:48.363   19:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:48.621   19:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:48.880   19:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015
00:31:48.880   19:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015
00:31:49.138  true
00:31:49.138   19:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 120421
00:31:49.138   19:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:50.073   19:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:50.073   19:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016
00:31:50.073   19:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016
00:31:50.332  true
00:31:50.332   19:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 120421
00:31:50.332   19:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:50.590   19:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:50.849   19:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017
00:31:50.849   19:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017
00:31:51.107  true
00:31:51.107   19:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 120421
00:31:51.107   19:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:51.365   19:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:51.623   19:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018
00:31:51.623   19:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018
00:31:51.881  true
00:31:51.881   19:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 120421
00:31:51.881   19:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:52.815   19:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:53.381   19:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019
00:31:53.381   19:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019
00:31:53.381  true
00:31:53.381   19:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 120421
00:31:53.381   19:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:53.639   19:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:53.897   19:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020
00:31:53.897   19:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020
00:31:54.463  true
00:31:54.463   19:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 120421
00:31:54.463   19:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:54.721   19:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:54.979   19:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021
00:31:54.979   19:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021
00:31:55.238  true
00:31:55.238   19:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 120421
00:31:55.238   19:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:56.173   19:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:56.173   19:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022
00:31:56.173   19:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022
00:31:56.431  true
00:31:56.431   19:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 120421
00:31:56.431   19:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:56.690   19:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:57.257   19:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023
00:31:57.257   19:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023
00:31:57.257  true
00:31:57.257   19:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 120421
00:31:57.257   19:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:57.824   19:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:57.824   19:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024
00:31:57.824   19:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024
00:31:58.082  true
00:31:58.082   19:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 120421
00:31:58.082   19:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:59.017  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:31:59.017   19:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:59.275   19:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025
00:31:59.275   19:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025
00:31:59.534  true
00:31:59.534   19:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 120421
00:31:59.534   19:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:59.793   19:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:00.051   19:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026
00:32:00.051   19:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
00:32:00.310  true
00:32:00.310   19:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 120421
00:32:00.310   19:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:00.569   19:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:00.827   19:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:32:00.827   19:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:32:00.827  true
00:32:00.827   19:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 120421
00:32:00.827   19:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:02.203   19:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:02.203  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:02.203  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:02.203  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:02.203  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:02.203   19:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:32:02.203   19:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:32:02.462  true
00:32:02.462   19:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 120421
00:32:02.462   19:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:03.396  Initializing NVMe Controllers
00:32:03.396  Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:32:03.396  Controller IO queue size 128, less than required.
00:32:03.396  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:32:03.396  Controller IO queue size 128, less than required.
00:32:03.396  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:32:03.396  Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:32:03.396  Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:32:03.396  Initialization complete. Launching workers.
00:32:03.396  ========================================================
00:32:03.396                                                                                                               Latency(us)
00:32:03.396  Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:32:03.396  TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  0:    1139.70       0.56   59543.97    2541.33 1020783.42
00:32:03.396  TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core  0:   11981.23       5.85   10683.16    3284.85  504949.68
00:32:03.396  ========================================================
00:32:03.396  Total                                                                    :   13120.93       6.41   14927.26    2541.33 1020783.42
00:32:03.396  
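The summary table is internally consistent: total IOPS 13120.93 is the sum of the two namespaces (1139.70 + 11981.23), and the total average latency 14927.26 us is their IOPS-weighted mean, (1139.70 * 59543.97 + 11981.23 * 10683.16) / 13120.93. NSID 1, backed by the Delay0 bdev that the loop keeps removing and re-adding, shows far lower IOPS and higher latency than NSID 2, the NULL1 bdev that is only resized.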
00:32:03.396   19:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:03.396   19:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:32:03.396   19:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:32:03.654  true
00:32:03.911   19:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 120421
00:32:03.911  /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (120421) - No such process
00:32:03.911   19:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 120421
00:32:03.911   19:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:03.911   19:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
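With kill -0 reporting that 120421 no longer exists, the loop ends, the script waits on the perf job and removes both remaining namespaces. The trace below then enters the multi-threaded phase: nthreads=8, and the first loop creates one null bdev per worker thread, null0 upward, each 100 (same assumed size unit as above) with 4096-byte blocks. A hedged condensation of that creation loop, matching the repeated bdev_null_create calls that follow:

    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        $rpc_py bdev_null_create "null$i" 100 4096    # one null bdev per worker thread
    done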
00:32:04.170   19:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:32:04.170   19:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:32:04.170   19:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:32:04.170   19:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:32:04.170   19:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:32:04.428  null0
00:32:04.428   19:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:32:04.428   19:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:32:04.428   19:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:32:04.731  null1
00:32:04.731   19:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:32:04.731   19:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:32:04.731   19:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:32:05.010  null2
00:32:05.010   19:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:32:05.010   19:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:32:05.010   19:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:32:05.010  null3
00:32:05.010   19:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:32:05.010   19:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:32:05.010   19:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:32:05.268  null4
00:32:05.268   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:32:05.268   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:32:05.268   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:32:05.527  null5
00:32:05.527   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:32:05.528   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:32:05.528   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:32:05.786  null6
00:32:05.786   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:32:05.786   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:32:05.786   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096
00:32:06.046  null7
00:32:06.046   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:32:06.046   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
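The eight bdev_null_create calls above come from the counted loop at script lines 58-60: nthreads=8, an empty pids array, then one null bdev per future worker. A minimal sketch of that loop, with the 100/4096 arguments copied straight from the trace:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nthreads=8
pids=()

for (( i = 0; i < nthreads; i++ )); do                 # lines 59-60 as traced
    $rpc bdev_null_create "null$i" 100 4096            # one null bdev per worker
done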
00:32:06.046   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
00:32:06.046   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:32:06.046   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:32:06.046   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0
00:32:06.046   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:32:06.046   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0
00:32:06.046   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:32:06.046   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:32:06.046   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:06.046   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:32:06.046   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:32:06.046   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1
00:32:06.046   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1
00:32:06.046   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:32:06.046   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:32:06.046   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:06.046   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:32:06.046   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:32:06.046   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:32:06.046   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2
00:32:06.046   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2
00:32:06.046   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:32:06.046   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:06.046   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:32:06.046   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:32:06.046   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:32:06.046   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:32:06.046   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3
00:32:06.046   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3
00:32:06.046   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:32:06.046   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:32:06.046   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:32:06.046   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:06.046   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:32:06.046   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:32:06.046   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4
00:32:06.046   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4
00:32:06.046   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:32:06.046   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:06.046   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:32:06.046   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:32:06.046   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:32:06.046   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:32:06.046   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:32:06.046   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:32:06.046   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:32:06.046   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:32:06.046   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:32:06.046   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:06.046   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:32:06.046   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:32:06.046   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:32:06.046   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:32:06.046   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:32:06.046   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:32:06.046   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:06.046   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:32:06.046   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:32:06.046   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:32:06.046   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:32:06.046   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:32:06.046   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 121417 121418 121421 121423 121424 121427 121428 121431
00:32:06.047   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:32:06.047   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:32:06.047   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:32:06.047   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:06.047   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
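The heavily interleaved xtrace that follows comes from eight add_remove workers launched in the background (script lines 62-64) plus the wait on their PIDs (121417 ... 121431, line 66). Each worker runs the add_remove function traced at lines 14-18: ten iterations of hot-adding its namespace ID and immediately hot-removing it again. A rough reconstruction from those trace markers, reusing the $rpc and $nqn shorthands from the sketches above (a sketch, not the verbatim script):

add_remove() {                                          # lines 14-18 as traced
    local nsid=$1 bdev=$2
    for (( i = 0; i < 10; i++ )); do
        $rpc nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"   # line 17: hot-add the namespace
        $rpc nvmf_subsystem_remove_ns "$nqn" "$nsid"           # line 18: hot-remove it again
    done
}

for (( i = 0; i < nthreads; i++ )); do                  # lines 62-64
    add_remove $(( i + 1 )) "null$i" &                  # one background worker per null bdev
    pids+=($!)
done
wait "${pids[@]}"                                       # line 66: block until all eight workers finish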
00:32:06.304   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:06.304   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:32:06.304   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:32:06.304   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:32:06.304   19:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:32:06.304   19:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:32:06.304   19:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:32:06.305   19:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:32:06.562   19:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:06.562   19:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:06.562   19:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:32:06.562   19:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:06.562   19:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:06.562   19:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:32:06.563   19:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:06.563   19:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:06.563   19:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:32:06.563   19:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:06.563   19:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:06.563   19:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:32:06.563   19:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:06.563   19:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:06.563   19:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:32:06.821   19:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:06.821   19:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:06.821   19:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:32:06.821   19:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:06.821   19:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:06.821   19:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:32:06.821   19:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:06.821   19:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:06.821   19:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:32:06.821   19:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:32:06.821   19:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:32:06.821   19:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:32:06.821   19:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:07.080   19:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:32:07.080   19:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:32:07.080   19:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:32:07.080   19:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:32:07.080   19:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:07.080   19:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:07.080   19:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:32:07.080   19:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:07.080   19:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:07.080   19:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:32:07.080   19:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:07.080   19:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:07.080   19:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:32:07.339   19:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:07.339   19:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:07.339   19:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:32:07.339   19:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:07.339   19:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:07.339   19:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:32:07.339   19:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:07.339   19:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:07.339   19:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:32:07.339   19:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:07.339   19:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:07.339   19:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:32:07.339   19:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:07.339   19:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:07.339   19:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:32:07.339   19:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:32:07.339   19:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:32:07.598   19:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:32:07.598   19:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:32:07.598   19:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:07.598   19:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:32:07.598   19:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:32:07.598   19:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:07.598   19:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:07.598   19:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:32:07.857   19:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:07.857   19:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:07.857   19:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:32:07.857   19:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:32:07.857   19:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:07.857   19:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:07.857   19:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:32:07.857   19:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:07.857   19:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:07.857   19:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:32:07.857   19:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:07.857   19:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:07.857   19:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:32:07.857   19:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:07.857   19:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:07.857   19:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:32:07.857   19:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:07.857   19:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:07.857   19:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:32:08.115   19:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:32:08.116   19:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:08.116   19:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:08.116   19:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:32:08.116   19:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:32:08.116   19:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:08.116   19:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:32:08.116   19:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:32:08.116   19:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:32:08.374   19:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:32:08.374   19:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:32:08.374   19:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:08.374   19:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:08.374   19:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:32:08.374   19:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:08.374   19:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:08.374   19:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:32:08.374   19:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:08.374   19:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:08.374   19:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:32:08.374   19:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:08.374   19:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:08.374   19:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:32:08.645   19:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:08.645   19:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:08.645   19:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:32:08.645   19:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:08.645   19:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:08.645   19:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:32:08.645   19:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:08.645   19:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:08.646   19:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:32:08.646   19:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:32:08.646   19:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:08.646   19:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:08.646   19:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:32:08.646   19:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:32:08.646   19:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:08.646   19:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:32:08.904   19:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:32:08.904   19:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:32:08.904   19:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:32:08.904   19:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:08.904   19:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:08.904   19:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:32:08.904   19:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:08.904   19:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:08.904   19:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:32:08.904   19:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:32:08.904   19:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:08.904   19:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:08.904   19:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:32:09.163   19:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:09.163   19:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:09.163   19:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:32:09.163   19:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:09.163   19:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:09.163   19:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:32:09.163   19:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:09.163   19:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:09.163   19:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:32:09.163   19:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:09.163   19:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:09.163   19:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:32:09.163   19:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:32:09.422   19:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:32:09.422   19:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:09.422   19:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:09.422   19:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:09.422   19:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:32:09.422   19:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:32:09.422   19:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:32:09.422   19:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:32:09.680   19:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:32:09.680   19:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:09.680   19:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:09.680   19:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:32:09.680   19:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:09.680   19:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:09.680   19:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:32:09.680   19:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:09.680   19:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:09.680   19:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:32:09.680   19:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:32:09.680   19:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:09.680   19:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:09.680   19:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:32:09.680   19:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:09.680   19:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:09.680   19:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:32:09.680   19:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:09.680   19:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:09.680   19:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:32:09.939   19:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:09.939   19:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:09.939   19:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:32:09.939   19:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:09.939   19:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:32:09.939   19:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:32:09.939   19:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:32:10.198   19:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:10.198   19:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:10.198   19:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:32:10.198   19:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:32:10.198   19:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:32:10.198   19:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:32:10.198   19:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:10.198   19:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:10.198   19:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:32:10.198   19:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:10.198   19:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:10.198   19:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:32:10.457   19:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:10.457   19:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:10.457   19:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:32:10.457   19:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:10.457   19:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:10.457   19:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:32:10.457   19:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:10.457   19:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:10.457   19:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:32:10.457   19:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:32:10.457   19:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:10.457   19:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:10.457   19:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:32:10.457   19:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:10.457   19:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:10.457   19:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:32:10.457   19:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:10.457   19:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:32:10.716   19:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:32:10.716   19:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:32:10.716   19:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:32:10.716   19:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:10.716   19:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:10.716   19:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:32:10.716   19:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:32:10.716   19:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:32:10.975   19:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:10.975   19:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:10.975   19:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:32:10.976   19:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:10.976   19:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:10.976   19:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:32:10.976   19:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:10.976   19:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:10.976   19:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:32:10.976   19:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:10.976   19:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:10.976   19:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:32:10.976   19:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:10.976   19:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:10.976   19:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:32:10.976   19:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:32:11.235   19:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:11.235   19:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:11.235   19:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:32:11.235   19:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:11.235   19:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:11.235   19:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:32:11.235   19:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:11.235   19:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:32:11.235   19:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:32:11.235   19:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:32:11.235   19:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:11.235   19:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:11.235   19:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:32:11.494   19:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:32:11.494   19:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:32:11.494   19:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:32:11.494   19:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:11.494   19:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:11.494   19:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:32:11.494   19:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:11.494   19:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:11.494   19:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:32:11.494   19:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:32:11.494   19:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:11.494   19:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:11.494   19:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:32:11.752   19:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:11.752   19:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:11.752   19:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:32:11.752   19:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:11.752   19:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:11.752   19:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:32:11.752   19:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:11.752   19:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:11.752   19:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:32:11.752   19:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:11.752   19:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:11.752   19:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:32:11.752   19:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:11.752   19:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:32:11.752   19:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:11.752   19:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:11.752   19:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:32:12.011   19:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:32:12.011   19:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:32:12.011   19:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:32:12.011   19:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:32:12.011   19:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:32:12.011   19:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:32:12.011   19:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:12.011   19:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:12.011   19:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:12.011   19:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:12.269   19:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:12.269   19:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:12.269   19:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:12.269   19:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:12.269   19:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:12.269   19:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:12.269   19:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:12.269   19:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:12.269   19:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:12.269   19:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:12.269   19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:12.269   19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:12.269   19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
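The trace above is the add/remove churn from target/ns_hotplug_stress.sh: lines tagged @16 drive the loop counter, @17 adds a namespace to nqn.2016-06.io.spdk:cnode1 backed by one of the null0..null7 bdevs, and @18 removes one again, so namespaces are hot-added and hot-removed concurrently for ten iterations per worker. A minimal sketch of that loop shape, reconstructed from the trace (the randomized interleaving and the backgrounding are assumptions, not a copy of the script):

  # Hypothetical reconstruction of the stress loop seen at @16-@18 above.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  for ((i = 0; i < 10; i++)); do                                        # @16: ten iterations per worker
      n=$((RANDOM % 8 + 1))                                             # pick a namespace id 1..8 (assumption)
      "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))" &    # @17: hot-add backed by nullX
      "$rpc" nvmf_subsystem_remove_ns "$nqn" "$((RANDOM % 8 + 1))" &    # @18: hot-remove another nsid
  done
  wait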
00:32:12.269   19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:32:12.269   19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup
00:32:12.269   19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync
00:32:12.528   19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:32:12.528   19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e
00:32:12.528   19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:32:12.528   19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:32:12.528  rmmod nvme_tcp
00:32:12.528  rmmod nvme_fabrics
00:32:12.528  rmmod nvme_keyring
00:32:12.528   19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:32:12.528   19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e
00:32:12.528   19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0
00:32:12.528   19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 120303 ']'
00:32:12.528   19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 120303
00:32:12.529   19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 120303 ']'
00:32:12.529   19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 120303
00:32:12.529    19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname
00:32:12.529   19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:32:12.529    19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 120303
00:32:12.529   19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:32:12.529   19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:32:12.529   19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 120303'
00:32:12.529  killing process with pid 120303
00:32:12.529   19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 120303
00:32:12.529   19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 120303
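The @954-@978 sequence above is the killprocess helper shutting down the nvmf_tgt started for this test (pid 120303, running as reactor_1): it validates the pid, confirms the process is still alive, resolves its command name so it never kills a sudo wrapper, then kills and reaps it. A hedged sketch of that flow (the exact sudo handling in autotest_common.sh may differ):

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1                    # @954: require a pid argument
      kill -0 "$pid" 2> /dev/null || return 0      # @958: nothing to do if it already exited
      if [ "$(uname)" = Linux ]; then              # @959
          local name
          name=$(ps --no-headers -o comm= "$pid")  # @960: resolve the command name
          [ "$name" = sudo ] && return 1           # @964: refuse to kill a sudo wrapper (assumption)
      fi
      echo "killing process with pid $pid"         # @972
      kill "$pid"                                  # @973
      wait "$pid"                                  # @978: reap it
  }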
00:32:12.788   19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:32:12.788   19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:32:12.788   19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:32:12.788   19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr
00:32:12.788   19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save
00:32:12.788   19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:32:12.788   19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore
00:32:12.788   19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:32:12.788   19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:32:12.788   19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:32:12.788   19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:32:12.788   19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:32:12.788   19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:32:12.788   19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:32:12.788   19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:32:12.788   19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:32:12.788   19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:32:12.788   19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:32:13.047   19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:32:13.047   19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:32:13.047   19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:32:13.047   19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:32:13.047   19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@246 -- # remove_spdk_ns
00:32:13.047   19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:32:13.047   19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:32:13.047    19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:32:13.047   19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@300 -- # return 0
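With the target gone, nvmf_tcp_fini (@297-@300) restores the firewall by filtering the SPDK_NVMF-tagged rules out of the saved ruleset and then unwinds the virtual topology in reverse: bridge ports are detached and downed, the bridge and the initiator-side veths are deleted, the target-side veths are deleted inside the namespace, and the namespace itself is removed last. A condensed equivalent (the "|| true" guards and the final netns delete are assumptions so cleanup keeps going past devices that are already gone):

  iptables-save | grep -v SPDK_NVMF | iptables-restore                  # @297/@791: drop tagged rules
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" nomaster || true                               # @233-@236
      ip link set "$dev" down || true                                   # @237-@240
  done
  ip link delete nvmf_br type bridge || true                            # @241
  ip link delete nvmf_init_if || true                                   # @242
  ip link delete nvmf_init_if2 || true                                  # @243
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true     # @244
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 || true    # @245
  ip netns delete nvmf_tgt_ns_spdk || true                              # @246: _remove_spdk_ns (assumed)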
00:32:13.047  
00:32:13.047  real	0m43.636s
00:32:13.047  user	3m18.924s
00:32:13.047  sys	0m16.640s
00:32:13.047   19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:32:13.047   19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:32:13.047  ************************************
00:32:13.047  END TEST nvmf_ns_hotplug_stress
00:32:13.047  ************************************
00:32:13.047   19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:32:13.047   19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:32:13.047   19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:32:13.047   19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:32:13.047  ************************************
00:32:13.047  START TEST nvmf_delete_subsystem
00:32:13.047  ************************************
00:32:13.047   19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:32:13.309  * Looking for test storage...
00:32:13.309  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:32:13.309    19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:32:13.309     19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version
00:32:13.309     19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:32:13.309    19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:32:13.309    19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:32:13.309    19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:32:13.309    19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:32:13.309    19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-:
00:32:13.309    19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1
00:32:13.309    19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-:
00:32:13.309    19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2
00:32:13.309    19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<'
00:32:13.309    19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2
00:32:13.309    19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1
00:32:13.309    19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:32:13.309    19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in
00:32:13.309    19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1
00:32:13.309    19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:32:13.309    19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:32:13.309     19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1
00:32:13.309     19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1
00:32:13.309     19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:32:13.309     19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1
00:32:13.309    19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1
00:32:13.309     19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2
00:32:13.309     19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2
00:32:13.309     19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:32:13.309     19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2
00:32:13.309    19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2
00:32:13.309    19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:32:13.309    19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:32:13.309    19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0
00:32:13.309    19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:32:13.309    19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:32:13.309  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:13.309  		--rc genhtml_branch_coverage=1
00:32:13.309  		--rc genhtml_function_coverage=1
00:32:13.309  		--rc genhtml_legend=1
00:32:13.309  		--rc geninfo_all_blocks=1
00:32:13.309  		--rc geninfo_unexecuted_blocks=1
00:32:13.309  		
00:32:13.309  		'
00:32:13.309    19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:32:13.309  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:13.309  		--rc genhtml_branch_coverage=1
00:32:13.309  		--rc genhtml_function_coverage=1
00:32:13.309  		--rc genhtml_legend=1
00:32:13.309  		--rc geninfo_all_blocks=1
00:32:13.309  		--rc geninfo_unexecuted_blocks=1
00:32:13.309  		
00:32:13.309  		'
00:32:13.309    19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:32:13.309  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:13.309  		--rc genhtml_branch_coverage=1
00:32:13.309  		--rc genhtml_function_coverage=1
00:32:13.309  		--rc genhtml_legend=1
00:32:13.309  		--rc geninfo_all_blocks=1
00:32:13.309  		--rc geninfo_unexecuted_blocks=1
00:32:13.309  		
00:32:13.309  		'
00:32:13.309    19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:32:13.309  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:13.309  		--rc genhtml_branch_coverage=1
00:32:13.309  		--rc genhtml_function_coverage=1
00:32:13.309  		--rc genhtml_legend=1
00:32:13.309  		--rc geninfo_all_blocks=1
00:32:13.309  		--rc geninfo_unexecuted_blocks=1
00:32:13.309  		
00:32:13.309  		'
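Before delete_subsystem.sh proper starts, @1710-@1712 probe the installed lcov and scripts/common.sh@364-@368 compare its version field by field against 1.15 and 2 to decide which coverage flags to export. A rough stand-alone equivalent of that comparison (the real cmp_versions also handles '>' and '==' operators and validates each field with decimal, both omitted here):

  version_lt() {                       # returns 0 when $1 < $2, mirroring "lt 1.15 2"
      local IFS=.-:                    # @336/@337: split on dots, dashes and colons
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$2"
      local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < len; v++ )); do            # @364
          local a=${ver1[v]:-0} b=${ver2[v]:-0}
          (( a > b )) && return 1                  # @367
          (( a < b )) && return 0                  # @368
      done
      return 1                                     # equal versions are not "less than"
  }
  version_lt 1.15 2 && echo "lcov 1.15 predates 2"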
00:32:13.309   19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:32:13.309     19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s
00:32:13.309    19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:32:13.309    19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:32:13.309    19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:32:13.309    19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:32:13.309    19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:32:13.309    19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:32:13.309    19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:32:13.309    19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:32:13.309    19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:32:13.309     19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:32:13.309    19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:32:13.309    19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:32:13.309    19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:32:13.309    19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:32:13.309    19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:32:13.309    19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:32:13.309    19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:32:13.309     19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob
00:32:13.309     19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:32:13.309     19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:32:13.310     19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:32:13.310      19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:13.310      19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:13.310      19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:13.310      19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH
00:32:13.310      19:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:13.310    19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0
00:32:13.310    19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:32:13.310    19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:32:13.310    19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:32:13.310    19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:32:13.310    19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:32:13.310    19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:32:13.310    19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:32:13.310    19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:32:13.310    19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:32:13.310    19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0
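The @51-@55 lines show nvmf/common.sh finishing its setup by building the target's argument list: an instance id and a full trace mask are always appended, and because this run is the interrupt-mode flavor the @33/@34 branch also appends --interrupt-mode, which is exactly what appears in the nvmf_tgt command line later in this log. An approximation of that helper (the flag variable names are inferred, not read from the source):

  NVMF_APP=(/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt)
  build_nvmf_app_args() {
      NVMF_APP+=(-i "${NVMF_APP_SHM_ID:-0}" -e 0xFFFF)    # @29: shm id + trace mask
      NVMF_APP+=("${NO_HUGE[@]}")                         # @31: optional no-huge flags
      if [ "${TEST_INTERRUPT_MODE:-0}" -eq 1 ]; then      # @33: variable name assumed
          NVMF_APP+=(--interrupt-mode)                    # @34
      fi
  }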
00:32:13.310   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit
00:32:13.310   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:32:13.310   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:32:13.310   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs
00:32:13.310   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no
00:32:13.310   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns
00:32:13.310   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:32:13.310   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:32:13.310    19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:32:13.310   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:32:13.310   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:32:13.310   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:32:13.310   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:32:13.310   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:32:13.310   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@460 -- # nvmf_veth_init
00:32:13.310   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:32:13.310   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:32:13.310   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:32:13.310   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:32:13.310   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:32:13.310   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:32:13.310   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:32:13.310   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:32:13.310   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:32:13.310   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:32:13.310   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:32:13.310   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:32:13.310   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:32:13.310   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:32:13.310   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:32:13.310   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:32:13.310   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:32:13.310  Cannot find device "nvmf_init_br"
00:32:13.310   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true
00:32:13.310   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:32:13.310  Cannot find device "nvmf_init_br2"
00:32:13.310   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true
00:32:13.310   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:32:13.310  Cannot find device "nvmf_tgt_br"
00:32:13.310   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # true
00:32:13.310   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:32:13.310  Cannot find device "nvmf_tgt_br2"
00:32:13.310   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # true
00:32:13.310   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:32:13.310  Cannot find device "nvmf_init_br"
00:32:13.310   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # true
00:32:13.310   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:32:13.310  Cannot find device "nvmf_init_br2"
00:32:13.310   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # true
00:32:13.310   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:32:13.310  Cannot find device "nvmf_tgt_br"
00:32:13.310   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@168 -- # true
00:32:13.310   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:32:13.310  Cannot find device "nvmf_tgt_br2"
00:32:13.310   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # true
00:32:13.310   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:32:13.310  Cannot find device "nvmf_br"
00:32:13.310   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # true
00:32:13.310   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:32:13.576  Cannot find device "nvmf_init_if"
00:32:13.576   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # true
00:32:13.576   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:32:13.576  Cannot find device "nvmf_init_if2"
00:32:13.576   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # true
00:32:13.576   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:32:13.576  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:32:13.576   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # true
00:32:13.576   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:32:13.576  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:32:13.576   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # true
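The string of "Cannot find device" messages above is expected on a clean host: nvmf_veth_init first tries to tear down any leftovers from a previous run, and every cleanup command at @162-@174 is followed by a bare true so a missing interface or namespace never trips the script's errexit setting. The idiom is simply:

  ip link set nvmf_init_br nomaster || true                            # @162
  ip link delete nvmf_br type bridge || true                           # @170
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true    # @173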
00:32:13.576   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:32:13.576   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:32:13.576   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:32:13.577   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:32:13.577   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:32:13.577   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:32:13.577   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:32:13.577   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:32:13.577   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:32:13.577   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:32:13.577   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:32:13.577   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:32:13.577   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:32:13.577   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:32:13.577   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:32:13.577   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:32:13.577   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:32:13.577   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:32:13.577   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:32:13.577   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:32:13.577   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:32:13.577   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:32:13.577   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:32:13.577   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:32:13.577   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:32:13.577   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:32:13.577   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:32:13.577   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:32:13.577   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:32:13.577   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:32:13.577   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:32:13.577   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
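Lines @177-@219 then build the test network from scratch: a fresh nvmf_tgt_ns_spdk namespace, two veth pairs toward the initiator (10.0.0.1/.2) and two toward the target inside the namespace (10.0.0.3/.4), all host-side peers enslaved to an nvmf_br bridge, plus iptables ACCEPT rules for TCP port 4420 tagged with an SPDK_NVMF comment so the later iptr teardown can strip them again. A condensed version covering one initiator/target pair (interface names, addresses and the rule comment come from the trace):

  ip netns add nvmf_tgt_ns_spdk                                              # @177
  ip link add nvmf_init_if type veth peer name nvmf_init_br                  # @180
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br                   # @182
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                             # @186
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                   # @190
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if     # @192
  ip link set nvmf_init_if up && ip link set nvmf_init_br up                 # @196/@198
  ip link set nvmf_tgt_br up                                                 # @200
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up                  # @202
  ip link add nvmf_br type bridge && ip link set nvmf_br up                  # @207/@208
  ip link set nvmf_init_br master nvmf_br                                    # @211
  ip link set nvmf_tgt_br master nvmf_br                                     # @213
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'   # @217/@790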
00:32:13.577   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:32:13.577  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:32:13.577  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms
00:32:13.577  
00:32:13.577  --- 10.0.0.3 ping statistics ---
00:32:13.577  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:13.577  rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms
00:32:13.577   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:32:13.577  PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:32:13.577  64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.064 ms
00:32:13.577  
00:32:13.577  --- 10.0.0.4 ping statistics ---
00:32:13.577  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:13.577  rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms
00:32:13.577   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:32:13.577  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:32:13.577  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.057 ms
00:32:13.577  
00:32:13.577  --- 10.0.0.1 ping statistics ---
00:32:13.577  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:13.577  rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms
00:32:13.577   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:32:13.577  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:32:13.577  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms
00:32:13.577  
00:32:13.577  --- 10.0.0.2 ping statistics ---
00:32:13.577  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:13.577  rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms
00:32:13.577   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:32:13.577   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@461 -- # return 0
00:32:13.577   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:32:13.577   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:32:13.577   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:32:13.577   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:32:13.577   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:32:13.577   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:32:13.577   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp
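The four pings at @222-@225 gate the rest of the test: the host must reach both target addresses across the bridge, and the namespace must reach both initiator addresses back, before the kernel nvme-tcp module is loaded at @502. A compact equivalent of that connectivity check:

  for dst in 10.0.0.3 10.0.0.4; do ping -c 1 "$dst" > /dev/null; done                                 # @222/@223
  for dst in 10.0.0.1 10.0.0.2; do ip netns exec nvmf_tgt_ns_spdk ping -c 1 "$dst" > /dev/null; done  # @224/@225
  modprobe nvme-tcp                                                                                   # @502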
00:32:13.577   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3
00:32:13.577   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:32:13.577   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable
00:32:13.577   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:32:13.836   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=122810
00:32:13.836   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 122810
00:32:13.836   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3
00:32:13.836   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 122810 ']'
00:32:13.836   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:13.836   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100
00:32:13.836  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:32:13.836   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:32:13.836   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable
00:32:13.836   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:32:13.836  [2024-12-13 19:16:45.475992] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:32:13.836  [2024-12-13 19:16:45.477354] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:32:13.836  [2024-12-13 19:16:45.477437] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:32:13.836  [2024-12-13 19:16:45.622691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:32:13.836  [2024-12-13 19:16:45.652249] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:32:13.836  [2024-12-13 19:16:45.652323] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:32:13.836  [2024-12-13 19:16:45.652332] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:32:13.836  [2024-12-13 19:16:45.652340] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:32:13.836  [2024-12-13 19:16:45.652346] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:32:13.836  [2024-12-13 19:16:45.656255] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:32:14.095  [2024-12-13 19:16:45.656281] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:32:14.095  [2024-12-13 19:16:45.746031] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:32:14.095  [2024-12-13 19:16:45.746778] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:32:14.095  [2024-12-13 19:16:45.746914] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:32:14.095   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:32:14.095   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0
00:32:14.095   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:32:14.095   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable
00:32:14.095   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:32:14.095   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
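
At this point nvmf_tgt is running inside the nvmf_tgt_ns_spdk namespace with --interrupt-mode and core mask 0x3; the notices above confirm interrupt mode on both reactors and the poll groups. A rough manual equivalent of the launch plus readiness wait (the rpc_get_methods probe stands in for the harness's waitforlisten and is an assumption, not its exact implementation):

    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
    nvmfpid=$!
    # poll the RPC socket until the application answers; any cheap RPC works as a probe
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done
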
00:32:14.095   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:32:14.095   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:14.095   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:32:14.095  [2024-12-13 19:16:45.829072] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:32:14.095   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
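
The rpc_cmd above created the TCP transport with the options assembled earlier in nvmf/common.sh (-t tcp -o) plus -u 8192. Reading rpc.py, -u should be --io-unit-size and -o looks like the TCP-specific --c2h-success toggle; treat both glosses as my interpretation rather than something the log states. Stand-alone equivalent:

    ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192
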
00:32:14.095   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:32:14.095   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:14.095   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:32:14.095   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:14.095   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:32:14.095   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:14.095   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:32:14.095  [2024-12-13 19:16:45.853475] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:32:14.095   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
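
The subsystem nqn.2016-06.io.spdk:cnode1 is created open to any host (-a), with serial number SPDK00000000000001 (-s) and a cap of 10 namespaces (-m), then given a TCP listener on 10.0.0.3:4420, which the notice above confirms. Equivalent RPCs, with the flag meanings being my reading of rpc.py:

    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
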
00:32:14.095   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:32:14.095   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:14.095   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:32:14.095  NULL1
00:32:14.095   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:14.095   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:32:14.095   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:14.095   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:32:14.095  Delay0
00:32:14.095   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:14.095   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:14.095   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:14.095   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:32:14.095   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
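
Delay0 is a delay bdev stacked on NULL1, a 1000 MiB null bdev with 512-byte blocks. All four latency arguments are 1000000, which I read as roughly one second average and p99 latency for both reads and writes, so every I/O lingers in the bdev layer for about a second; that is what keeps commands in flight when the subsystem is deleted further down. Hedged equivalent of the three RPCs:

    ./scripts/rpc.py bdev_null_create NULL1 1000 512
    ./scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
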
00:32:14.095   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=122843
00:32:14.095   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2
00:32:14.095   19:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4
00:32:14.354  [2024-12-13 19:16:46.060971] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem.  This behavior is deprecated and will be removed in a future release.
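
spdk_nvme_perf now connects over TCP to 10.0.0.3:4420 and drives a 5-second, 70/30 random read/write workload of 512-byte I/Os at queue depth 128 on cores 2 and 3 (-c 0xC); -P 4 asks for extra I/O queue pairs per namespace, which is my reading of that option. The deprecation warning above only concerns the discovery subsystem listener and is harmless here. The script backgrounds perf and lets it run for two seconds before interfering, roughly:

    ./build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    sleep 2     # let I/O pile up inside the delay bdev before deleting the subsystem
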
00:32:16.257   19:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:32:16.257   19:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:16.257   19:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
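
Two seconds in, the test deletes the subsystem out from under the connected host. Everything still queued behind Delay0 then completes with sct=0, sc=8, which I read (per the NVMe base specification's generic status codes) as Command Aborted due to SQ Deletion, consistent with the target tearing its queue pairs down; the long run of "Read/Write completed with error" lines that follows is the intended outcome of this test, not a failure. The deletion itself is one RPC:

    ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
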
00:32:16.516  Read completed with error (sct=0, sc=8)
00:32:16.516  Read completed with error (sct=0, sc=8)
00:32:16.516  starting I/O failed: -6
00:32:16.516  Write completed with error (sct=0, sc=8)
00:32:16.516  Read completed with error (sct=0, sc=8)
00:32:16.516  Read completed with error (sct=0, sc=8)
00:32:16.516  Read completed with error (sct=0, sc=8)
00:32:16.516  starting I/O failed: -6
00:32:16.516  Write completed with error (sct=0, sc=8)
00:32:16.516  Write completed with error (sct=0, sc=8)
00:32:16.516  Read completed with error (sct=0, sc=8)
00:32:16.516  Read completed with error (sct=0, sc=8)
00:32:16.516  starting I/O failed: -6
00:32:16.516  Read completed with error (sct=0, sc=8)
00:32:16.516  Write completed with error (sct=0, sc=8)
00:32:16.516  Read completed with error (sct=0, sc=8)
00:32:16.516  Write completed with error (sct=0, sc=8)
00:32:16.516  starting I/O failed: -6
00:32:16.516  Read completed with error (sct=0, sc=8)
00:32:16.516  Read completed with error (sct=0, sc=8)
00:32:16.516  Write completed with error (sct=0, sc=8)
00:32:16.516  Read completed with error (sct=0, sc=8)
00:32:16.516  starting I/O failed: -6
00:32:16.516  Write completed with error (sct=0, sc=8)
00:32:16.516  Read completed with error (sct=0, sc=8)
00:32:16.516  Read completed with error (sct=0, sc=8)
00:32:16.516  Write completed with error (sct=0, sc=8)
00:32:16.516  starting I/O failed: -6
00:32:16.516  Read completed with error (sct=0, sc=8)
00:32:16.516  Read completed with error (sct=0, sc=8)
00:32:16.516  Read completed with error (sct=0, sc=8)
00:32:16.516  Write completed with error (sct=0, sc=8)
00:32:16.516  starting I/O failed: -6
00:32:16.516  Read completed with error (sct=0, sc=8)
00:32:16.516  Read completed with error (sct=0, sc=8)
00:32:16.516  Write completed with error (sct=0, sc=8)
00:32:16.516  Read completed with error (sct=0, sc=8)
00:32:16.516  starting I/O failed: -6
00:32:16.516  Write completed with error (sct=0, sc=8)
00:32:16.516  Write completed with error (sct=0, sc=8)
00:32:16.516  Read completed with error (sct=0, sc=8)
00:32:16.516  Write completed with error (sct=0, sc=8)
00:32:16.516  starting I/O failed: -6
00:32:16.516  Read completed with error (sct=0, sc=8)
00:32:16.516  Read completed with error (sct=0, sc=8)
00:32:16.516  Read completed with error (sct=0, sc=8)
00:32:16.516  Read completed with error (sct=0, sc=8)
00:32:16.516  starting I/O failed: -6
00:32:16.516  Write completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Write completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  starting I/O failed: -6
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Write completed with error (sct=0, sc=8)
00:32:16.517  [2024-12-13 19:16:48.102759] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb8470 is same with the state(6) to be set
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Write completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  starting I/O failed: -6
00:32:16.517  Write completed with error (sct=0, sc=8)
00:32:16.517  Write completed with error (sct=0, sc=8)
00:32:16.517  Write completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Write completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Write completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Write completed with error (sct=0, sc=8)
00:32:16.517  Write completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Write completed with error (sct=0, sc=8)
00:32:16.517  Write completed with error (sct=0, sc=8)
00:32:16.517  Write completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Write completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Write completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Write completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Write completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Write completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Write completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Write completed with error (sct=0, sc=8)
00:32:16.517  Write completed with error (sct=0, sc=8)
00:32:16.517  Write completed with error (sct=0, sc=8)
00:32:16.517  [2024-12-13 19:16:48.103501] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb62c0 is same with the state(6) to be set
00:32:16.517  Write completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  starting I/O failed: -6
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Write completed with error (sct=0, sc=8)
00:32:16.517  starting I/O failed: -6
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Write completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  starting I/O failed: -6
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Write completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  starting I/O failed: -6
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Write completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  starting I/O failed: -6
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Write completed with error (sct=0, sc=8)
00:32:16.517  starting I/O failed: -6
00:32:16.517  Write completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  starting I/O failed: -6
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Write completed with error (sct=0, sc=8)
00:32:16.517  starting I/O failed: -6
00:32:16.517  Write completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  starting I/O failed: -6
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Write completed with error (sct=0, sc=8)
00:32:16.517  starting I/O failed: -6
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Write completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  starting I/O failed: -6
00:32:16.517  Write completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  starting I/O failed: -6
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  starting I/O failed: -6
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  starting I/O failed: -6
00:32:16.517  Write completed with error (sct=0, sc=8)
00:32:16.517  Write completed with error (sct=0, sc=8)
00:32:16.517  starting I/O failed: -6
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  starting I/O failed: -6
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  starting I/O failed: -6
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  starting I/O failed: -6
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Write completed with error (sct=0, sc=8)
00:32:16.517  starting I/O failed: -6
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Write completed with error (sct=0, sc=8)
00:32:16.517  starting I/O failed: -6
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Write completed with error (sct=0, sc=8)
00:32:16.517  starting I/O failed: -6
00:32:16.517  Write completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  starting I/O failed: -6
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  starting I/O failed: -6
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  starting I/O failed: -6
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  starting I/O failed: -6
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  starting I/O failed: -6
00:32:16.517  Write completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  starting I/O failed: -6
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  starting I/O failed: -6
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  starting I/O failed: -6
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  starting I/O failed: -6
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  starting I/O failed: -6
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Write completed with error (sct=0, sc=8)
00:32:16.517  starting I/O failed: -6
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  starting I/O failed: -6
00:32:16.517  Write completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  starting I/O failed: -6
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  starting I/O failed: -6
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  Read completed with error (sct=0, sc=8)
00:32:16.517  starting I/O failed: -6
00:32:16.517  Write completed with error (sct=0, sc=8)
00:32:16.517  Write completed with error (sct=0, sc=8)
00:32:16.517  starting I/O failed: -6
00:32:16.517  Write completed with error (sct=0, sc=8)
00:32:16.517  Write completed with error (sct=0, sc=8)
00:32:16.517  starting I/O failed: -6
00:32:16.517  starting I/O failed: -6
00:32:16.517  starting I/O failed: -6
00:32:16.517  starting I/O failed: -6
00:32:16.517  starting I/O failed: -6
00:32:16.517  starting I/O failed: -6
00:32:16.517  starting I/O failed: -6
00:32:16.517  starting I/O failed: -6
00:32:17.454  [2024-12-13 19:16:49.076650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cd3ac0 is same with the state(6) to be set
00:32:17.454  Read completed with error (sct=0, sc=8)
00:32:17.454  Read completed with error (sct=0, sc=8)
00:32:17.454  Write completed with error (sct=0, sc=8)
00:32:17.454  Write completed with error (sct=0, sc=8)
00:32:17.454  Write completed with error (sct=0, sc=8)
00:32:17.454  Write completed with error (sct=0, sc=8)
00:32:17.454  Read completed with error (sct=0, sc=8)
00:32:17.454  Read completed with error (sct=0, sc=8)
00:32:17.454  Read completed with error (sct=0, sc=8)
00:32:17.454  Read completed with error (sct=0, sc=8)
00:32:17.454  Read completed with error (sct=0, sc=8)
00:32:17.454  Write completed with error (sct=0, sc=8)
00:32:17.454  Read completed with error (sct=0, sc=8)
00:32:17.454  Read completed with error (sct=0, sc=8)
00:32:17.454  Read completed with error (sct=0, sc=8)
00:32:17.454  Read completed with error (sct=0, sc=8)
00:32:17.454  Write completed with error (sct=0, sc=8)
00:32:17.454  Read completed with error (sct=0, sc=8)
00:32:17.454  Read completed with error (sct=0, sc=8)
00:32:17.454  Read completed with error (sct=0, sc=8)
00:32:17.454  Write completed with error (sct=0, sc=8)
00:32:17.454  Read completed with error (sct=0, sc=8)
00:32:17.454  Read completed with error (sct=0, sc=8)
00:32:17.454  Read completed with error (sct=0, sc=8)
00:32:17.454  Read completed with error (sct=0, sc=8)
00:32:17.454  [2024-12-13 19:16:49.100354] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb64a0 is same with the state(6) to be set
00:32:17.454  Write completed with error (sct=0, sc=8)
00:32:17.454  Write completed with error (sct=0, sc=8)
00:32:17.454  Write completed with error (sct=0, sc=8)
00:32:17.454  Write completed with error (sct=0, sc=8)
00:32:17.454  Write completed with error (sct=0, sc=8)
00:32:17.454  Write completed with error (sct=0, sc=8)
00:32:17.454  Write completed with error (sct=0, sc=8)
00:32:17.454  Read completed with error (sct=0, sc=8)
00:32:17.454  Write completed with error (sct=0, sc=8)
00:32:17.454  Read completed with error (sct=0, sc=8)
00:32:17.454  Read completed with error (sct=0, sc=8)
00:32:17.454  Read completed with error (sct=0, sc=8)
00:32:17.454  Read completed with error (sct=0, sc=8)
00:32:17.454  Read completed with error (sct=0, sc=8)
00:32:17.454  Read completed with error (sct=0, sc=8)
00:32:17.454  Write completed with error (sct=0, sc=8)
00:32:17.454  Read completed with error (sct=0, sc=8)
00:32:17.454  Read completed with error (sct=0, sc=8)
00:32:17.454  Write completed with error (sct=0, sc=8)
00:32:17.454  Read completed with error (sct=0, sc=8)
00:32:17.454  Read completed with error (sct=0, sc=8)
00:32:17.454  Read completed with error (sct=0, sc=8)
00:32:17.454  Write completed with error (sct=0, sc=8)
00:32:17.454  Write completed with error (sct=0, sc=8)
00:32:17.454  Read completed with error (sct=0, sc=8)
00:32:17.454  Read completed with error (sct=0, sc=8)
00:32:17.454  Read completed with error (sct=0, sc=8)
00:32:17.454  Read completed with error (sct=0, sc=8)
00:32:17.454  Read completed with error (sct=0, sc=8)
00:32:17.454  Write completed with error (sct=0, sc=8)
00:32:17.454  Write completed with error (sct=0, sc=8)
00:32:17.454  Write completed with error (sct=0, sc=8)
00:32:17.454  Read completed with error (sct=0, sc=8)
00:32:17.454  Read completed with error (sct=0, sc=8)
00:32:17.454  Read completed with error (sct=0, sc=8)
00:32:17.454  Read completed with error (sct=0, sc=8)
00:32:17.454  Read completed with error (sct=0, sc=8)
00:32:17.454  Write completed with error (sct=0, sc=8)
00:32:17.454  Read completed with error (sct=0, sc=8)
00:32:17.454  Read completed with error (sct=0, sc=8)
00:32:17.454  Write completed with error (sct=0, sc=8)
00:32:17.454  [2024-12-13 19:16:49.102854] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f1e8800d060 is same with the state(6) to be set
00:32:17.454  Write completed with error (sct=0, sc=8)
00:32:17.454  Read completed with error (sct=0, sc=8)
00:32:17.454  Read completed with error (sct=0, sc=8)
00:32:17.454  Read completed with error (sct=0, sc=8)
00:32:17.454  Write completed with error (sct=0, sc=8)
00:32:17.454  Read completed with error (sct=0, sc=8)
00:32:17.455  Write completed with error (sct=0, sc=8)
00:32:17.455  Write completed with error (sct=0, sc=8)
00:32:17.455  Write completed with error (sct=0, sc=8)
00:32:17.455  Read completed with error (sct=0, sc=8)
00:32:17.455  Read completed with error (sct=0, sc=8)
00:32:17.455  Write completed with error (sct=0, sc=8)
00:32:17.455  Read completed with error (sct=0, sc=8)
00:32:17.455  Read completed with error (sct=0, sc=8)
00:32:17.455  Write completed with error (sct=0, sc=8)
00:32:17.455  Read completed with error (sct=0, sc=8)
00:32:17.455  Read completed with error (sct=0, sc=8)
00:32:17.455  Read completed with error (sct=0, sc=8)
00:32:17.455  Read completed with error (sct=0, sc=8)
00:32:17.455  Read completed with error (sct=0, sc=8)
00:32:17.455  Read completed with error (sct=0, sc=8)
00:32:17.455  Read completed with error (sct=0, sc=8)
00:32:17.455  Read completed with error (sct=0, sc=8)
00:32:17.455  Read completed with error (sct=0, sc=8)
00:32:17.455  Read completed with error (sct=0, sc=8)
00:32:17.455  Read completed with error (sct=0, sc=8)
00:32:17.455  Read completed with error (sct=0, sc=8)
00:32:17.455  Read completed with error (sct=0, sc=8)
00:32:17.455  Read completed with error (sct=0, sc=8)
00:32:17.455  Read completed with error (sct=0, sc=8)
00:32:17.455  Read completed with error (sct=0, sc=8)
00:32:17.455  Read completed with error (sct=0, sc=8)
00:32:17.455  Read completed with error (sct=0, sc=8)
00:32:17.455  Read completed with error (sct=0, sc=8)
00:32:17.455  Read completed with error (sct=0, sc=8)
00:32:17.455  Write completed with error (sct=0, sc=8)
00:32:17.455  Read completed with error (sct=0, sc=8)
00:32:17.455  Write completed with error (sct=0, sc=8)
00:32:17.455  Write completed with error (sct=0, sc=8)
00:32:17.455  Read completed with error (sct=0, sc=8)
00:32:17.455  Read completed with error (sct=0, sc=8)
00:32:17.455  [2024-12-13 19:16:49.103567] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f1e8800d6c0 is same with the state(6) to be set
00:32:17.455  Read completed with error (sct=0, sc=8)
00:32:17.455  Read completed with error (sct=0, sc=8)
00:32:17.455  Read completed with error (sct=0, sc=8)
00:32:17.455  Read completed with error (sct=0, sc=8)
00:32:17.455  Write completed with error (sct=0, sc=8)
00:32:17.455  Read completed with error (sct=0, sc=8)
00:32:17.455  Read completed with error (sct=0, sc=8)
00:32:17.455  Read completed with error (sct=0, sc=8)
00:32:17.455  Read completed with error (sct=0, sc=8)
00:32:17.455  Write completed with error (sct=0, sc=8)
00:32:17.455  Read completed with error (sct=0, sc=8)
00:32:17.455  Read completed with error (sct=0, sc=8)
00:32:17.455  Read completed with error (sct=0, sc=8)
00:32:17.455  Read completed with error (sct=0, sc=8)
00:32:17.455  Write completed with error (sct=0, sc=8)
00:32:17.455  Read completed with error (sct=0, sc=8)
00:32:17.455  Write completed with error (sct=0, sc=8)
00:32:17.455  Read completed with error (sct=0, sc=8)
00:32:17.455  Write completed with error (sct=0, sc=8)
00:32:17.455  Write completed with error (sct=0, sc=8)
00:32:17.455  Read completed with error (sct=0, sc=8)
00:32:17.455  Write completed with error (sct=0, sc=8)
00:32:17.455  Read completed with error (sct=0, sc=8)
00:32:17.455  Read completed with error (sct=0, sc=8)
00:32:17.455  [2024-12-13 19:16:49.104540] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb87a0 is same with the state(6) to be set
00:32:17.455  Initializing NVMe Controllers
00:32:17.455  Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:32:17.455  Controller IO queue size 128, less than required.
00:32:17.455  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:32:17.455  Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:32:17.455  Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:32:17.455  Initialization complete. Launching workers.
00:32:17.455  ========================================================
00:32:17.455                                                                                                               Latency(us)
00:32:17.455  Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:32:17.455  TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  2:     169.71       0.08  896991.21     754.48 1017522.48
00:32:17.455  TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  3:     185.54       0.09  908719.01    1723.73 1017575.15
00:32:17.455  ========================================================
00:32:17.455  Total                                                                    :     355.24       0.17  903116.46     754.48 1017575.15
00:32:17.455  
00:32:17.455  [2024-12-13 19:16:49.105664] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cd3ac0 (9): Bad file descriptor
00:32:17.455  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred
00:32:17.455   19:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:17.455   19:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:32:17.455   19:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 122843
00:32:17.455   19:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:32:18.023   19:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:32:18.023   19:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 122843
00:32:18.023  /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (122843) - No such process
00:32:18.023   19:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 122843
00:32:18.023   19:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:32:18.023   19:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 122843
00:32:18.023   19:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait
00:32:18.023   19:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:32:18.023    19:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait
00:32:18.023   19:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:32:18.023   19:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 122843
00:32:18.023   19:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1
00:32:18.023   19:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:32:18.023   19:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:32:18.023   19:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 ))
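
The first perf run exits with "errors occurred", as expected: its outstanding I/O was aborted, and the per-core latency averages of roughly 897 ms and 909 ms are the ~1 s delay bdev seen through a run that was cut short. The script then waits for the perf process to disappear and asserts that reaping it reports failure, as traced above; a paraphrase of that logic (not the verbatim delete_subsystem.sh source):

    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        (( delay++ > 30 )) && exit 1    # give up after ~15 s of half-second polls
        sleep 0.5
    done
    ! wait "$perf_pid"                  # the aborted perf must have exited non-zero
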
00:32:18.023   19:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:32:18.023   19:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:18.023   19:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:32:18.023   19:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:18.023   19:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:32:18.023   19:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:18.023   19:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:32:18.023  [2024-12-13 19:16:49.629338] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:32:18.023   19:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:18.023   19:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:18.023   19:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:18.023   19:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:32:18.023   19:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:18.023   19:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=122888
00:32:18.023   19:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0
00:32:18.023   19:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4
00:32:18.023   19:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 122888
00:32:18.023   19:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:32:18.023  [2024-12-13 19:16:49.797447] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem.  This behavior is deprecated and will be removed in a future release.
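
For the second round the subsystem, listener and Delay0 namespace are recreated exactly as before, and perf is launched again, this time for only 3 seconds and with nothing deleting the subsystem underneath it. The script simply polls until perf finishes (up to 20 half-second checks, traced below), and this run must complete cleanly:

    ./build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
        -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
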
00:32:18.591   19:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:32:18.591   19:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 122888
00:32:18.591   19:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:32:18.849   19:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:32:18.849   19:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 122888
00:32:18.849   19:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:32:19.417   19:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:32:19.417   19:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 122888
00:32:19.417   19:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:32:19.984   19:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:32:19.984   19:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 122888
00:32:19.984   19:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:32:20.551   19:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:32:20.551   19:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 122888
00:32:20.551   19:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:32:21.120   19:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:32:21.120   19:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 122888
00:32:21.120   19:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:32:21.120  Initializing NVMe Controllers
00:32:21.120  Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:32:21.120  Controller IO queue size 128, less than required.
00:32:21.120  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:32:21.120  Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:32:21.120  Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:32:21.120  Initialization complete. Launching workers.
00:32:21.120  ========================================================
00:32:21.120                                                                                                               Latency(us)
00:32:21.120  Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:32:21.120  TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  2:     128.00       0.06 1004520.43 1000171.96 1041697.66
00:32:21.120  TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  3:     128.00       0.06 1007032.30 1000260.85 1018458.47
00:32:21.120  ========================================================
00:32:21.120  Total                                                                    :     256.00       0.12 1005776.36 1000171.96 1041697.66
00:32:21.120  
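
The clean run's numbers are self-consistent: with queue depth 128 per core and an average completion time of about 1.006 s imposed by Delay0, Little's law gives roughly 128 / 1.006 ≈ 127 IOPS per core, matching the reported 128.00 IOPS (and about 0.06 MiB/s at 512-byte I/Os) on cores 2 and 3. A back-of-the-envelope check:

    # expected per-core IOPS ~= queue_depth / avg_latency_in_seconds (Little's law)
    awk 'BEGIN { print 128 / 1.0058 }'    # ~127.3
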
00:32:21.380   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:32:21.380   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 122888
00:32:21.380  /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (122888) - No such process
00:32:21.380   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 122888
00:32:21.380   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:32:21.380   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:32:21.380   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:32:21.380   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:32:21.639   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:32:21.639   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:32:21.639   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:32:21.639   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:32:21.639  rmmod nvme_tcp
00:32:21.639  rmmod nvme_fabrics
00:32:21.639  rmmod nvme_keyring
00:32:21.639   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:32:21.639   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:32:21.639   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
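
Cleanup begins by unloading the host-side NVMe modules; the rmmod lines above show nvme_tcp dragging nvme_fabrics and nvme_keyring out with it, leaving the separate nvme-fabrics removal with nothing left to do. The retry structure below is my paraphrase of the traced loop, with only the 20-iteration bound taken from the log:

    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break
    done
    modprobe -v -r nvme-fabrics
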
00:32:21.639   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 122810 ']'
00:32:21.639   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 122810
00:32:21.639   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 122810 ']'
00:32:21.639   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 122810
00:32:21.639    19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname
00:32:21.639   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:32:21.639    19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 122810
00:32:21.639  killing process with pid 122810
00:32:21.639   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:32:21.639   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:32:21.639   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 122810'
00:32:21.639   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 122810
00:32:21.639   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 122810
00:32:21.899   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:32:21.899   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:32:21.899   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:32:21.899   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr
00:32:21.899   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save
00:32:21.899   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore
00:32:21.899   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:32:21.899   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:32:21.899   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:32:21.899   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:32:21.899   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:32:21.899   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:32:21.899   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:32:21.899   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:32:21.899   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:32:21.899   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:32:21.899   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:32:21.899   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:32:21.899   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:32:21.899   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:32:21.899   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:32:21.899   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
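
nvmf_veth_fini unwinds the virtual topology in the reverse order of setup: detach the four veth endpoints from the bridge and bring them down, delete the nvmf_br bridge and the host-side interfaces, delete the target-side interfaces from inside the namespace, and finally remove_spdk_ns drops the namespace itself (presumably via ip netns delete; its body is not shown in this trace). Condensed sketch:

    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster && ip link set "$dev" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if && ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk    # assumed body of remove_spdk_ns
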
00:32:21.899   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@246 -- # remove_spdk_ns
00:32:21.899   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:32:21.899   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:32:21.899    19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:32:22.159  ************************************
00:32:22.159  END TEST nvmf_delete_subsystem
00:32:22.159  ************************************
00:32:22.159   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@300 -- # return 0
00:32:22.159  
00:32:22.159  real	0m8.912s
00:32:22.159  user	0m24.760s
00:32:22.159  sys	0m1.759s
00:32:22.159   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable
00:32:22.159   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:32:22.159   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode
00:32:22.159   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:32:22.159   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:32:22.159   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:32:22.159  ************************************
00:32:22.159  START TEST nvmf_host_management
00:32:22.159  ************************************
00:32:22.159   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode
00:32:22.159  * Looking for test storage...
00:32:22.159  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:32:22.159    19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:32:22.159     19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version
00:32:22.159     19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:32:22.159    19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:32:22.159    19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:32:22.159    19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l
00:32:22.159    19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l
00:32:22.159    19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-:
00:32:22.159    19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1
00:32:22.159    19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-:
00:32:22.159    19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2
00:32:22.159    19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<'
00:32:22.159    19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2
00:32:22.159    19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1
00:32:22.159    19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:32:22.159    19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in
00:32:22.160    19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1
00:32:22.160    19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 ))
00:32:22.160    19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:32:22.160     19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1
00:32:22.160     19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1
00:32:22.160     19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:32:22.160     19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1
00:32:22.160    19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1
00:32:22.160     19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2
00:32:22.160     19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2
00:32:22.160     19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:32:22.160     19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2
00:32:22.160    19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2
00:32:22.160    19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:32:22.160    19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:32:22.160    19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0
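
The block above is scripts/common.sh deciding whether the installed lcov version is below 2: both version strings are split on ".", "-" and ":" and compared element by element. A simplified stand-alone sketch of that comparison (hypothetical helper name, not the verbatim common.sh source):

    lt_ver() {                               # returns 0 when version $1 < version $2
        local IFS='.-:' i
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1
    }
    lt_ver 1.15 2 && echo "lcov is older than 2.x"
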
00:32:22.160    19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:32:22.160    19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:32:22.160  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:22.160  		--rc genhtml_branch_coverage=1
00:32:22.160  		--rc genhtml_function_coverage=1
00:32:22.160  		--rc genhtml_legend=1
00:32:22.160  		--rc geninfo_all_blocks=1
00:32:22.160  		--rc geninfo_unexecuted_blocks=1
00:32:22.160  		
00:32:22.160  		'
00:32:22.160    19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:32:22.160  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:22.160  		--rc genhtml_branch_coverage=1
00:32:22.160  		--rc genhtml_function_coverage=1
00:32:22.160  		--rc genhtml_legend=1
00:32:22.160  		--rc geninfo_all_blocks=1
00:32:22.160  		--rc geninfo_unexecuted_blocks=1
00:32:22.160  		
00:32:22.160  		'
00:32:22.160    19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:32:22.160  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:22.160  		--rc genhtml_branch_coverage=1
00:32:22.160  		--rc genhtml_function_coverage=1
00:32:22.160  		--rc genhtml_legend=1
00:32:22.160  		--rc geninfo_all_blocks=1
00:32:22.160  		--rc geninfo_unexecuted_blocks=1
00:32:22.160  		
00:32:22.160  		'
00:32:22.160    19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:32:22.160  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:22.160  		--rc genhtml_branch_coverage=1
00:32:22.160  		--rc genhtml_function_coverage=1
00:32:22.160  		--rc genhtml_legend=1
00:32:22.160  		--rc geninfo_all_blocks=1
00:32:22.160  		--rc geninfo_unexecuted_blocks=1
00:32:22.160  		
00:32:22.160  		'
00:32:22.160   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:32:22.160     19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s
00:32:22.160    19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:32:22.160    19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:32:22.160    19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:32:22.160    19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:32:22.160    19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:32:22.160    19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:32:22.160    19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:32:22.160    19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:32:22.160    19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:32:22.160     19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:32:22.160    19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:32:22.160    19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:32:22.160    19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:32:22.160    19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:32:22.160    19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:32:22.160    19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:32:22.160    19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:32:22.160     19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob
00:32:22.160     19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:32:22.160     19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:32:22.160     19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:32:22.160      19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:22.160      19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:22.160      19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:22.160      19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH
00:32:22.160      19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:22.160    19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0
00:32:22.160    19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:32:22.160    19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:32:22.160    19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:32:22.160    19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:32:22.160    19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:32:22.160    19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:32:22.160    19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:32:22.160    19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:32:22.160    19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:32:22.160    19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0
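At this point nvmf/common.sh@51-55 has finished assembling the target's argument list: interrupt-mode testing is enabled in this run (the '[' 1 -eq 1 ']' branch), so build_nvmf_app_args appends -i $NVMF_APP_SHM_ID -e 0xFFFF and --interrupt-mode to the NVMF_APP array and skips the remaining branches. A minimal sketch of that array-building pattern; the gating variable name below is an assumption, the trace only shows the already-evaluated test:

    # Sketch of build_nvmf_app_args as traced above (simplified; the real helper has more branches).
    NVMF_APP=(nvmf_tgt)                               # placeholder base command
    build_nvmf_app_args() {
        NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # shm id + tracepoint mask
        NVMF_APP+=("${NO_HUGE[@]}")                   # empty in this run
        if [[ ${INTERRUPT_MODE:-0} -eq 1 ]]; then     # assumed flag name; the trace only shows '[' 1 -eq 1 ']'
            NVMF_APP+=(--interrupt-mode)
        fi
    }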
00:32:22.160   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64
00:32:22.160   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:32:22.161   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit
00:32:22.161   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:32:22.161   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:32:22.161   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs
00:32:22.161   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no
00:32:22.161   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns
00:32:22.161   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:32:22.161   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:32:22.161    19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:32:22.161   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:32:22.161   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:32:22.161   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:32:22.161   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:32:22.161   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:32:22.161   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init
00:32:22.161   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:32:22.161   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:32:22.161   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:32:22.161   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:32:22.161   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:32:22.161   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:32:22.161   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:32:22.161   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:32:22.161   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:32:22.161   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:32:22.161   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:32:22.161   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:32:22.161   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:32:22.161   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:32:22.161   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:32:22.161   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:32:22.161   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:32:22.420  Cannot find device "nvmf_init_br"
00:32:22.420   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@162 -- # true
00:32:22.420   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:32:22.420  Cannot find device "nvmf_init_br2"
00:32:22.420   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@163 -- # true
00:32:22.420   19:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:32:22.420  Cannot find device "nvmf_tgt_br"
00:32:22.421   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@164 -- # true
00:32:22.421   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:32:22.421  Cannot find device "nvmf_tgt_br2"
00:32:22.421   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@165 -- # true
00:32:22.421   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:32:22.421  Cannot find device "nvmf_init_br"
00:32:22.421   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@166 -- # true
00:32:22.421   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:32:22.421  Cannot find device "nvmf_init_br2"
00:32:22.421   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@167 -- # true
00:32:22.421   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:32:22.421  Cannot find device "nvmf_tgt_br"
00:32:22.421   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@168 -- # true
00:32:22.421   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:32:22.421  Cannot find device "nvmf_tgt_br2"
00:32:22.421   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@169 -- # true
00:32:22.421   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:32:22.421  Cannot find device "nvmf_br"
00:32:22.421   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@170 -- # true
00:32:22.421   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:32:22.421  Cannot find device "nvmf_init_if"
00:32:22.421   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@171 -- # true
00:32:22.421   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:32:22.421  Cannot find device "nvmf_init_if2"
00:32:22.421   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@172 -- # true
00:32:22.421   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:32:22.421  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:32:22.421   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@173 -- # true
00:32:22.421   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:32:22.421  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:32:22.421   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@174 -- # true
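Every teardown command above fails ("Cannot find device ...", missing namespace) because this is a fresh run with nothing left over from a previous one; the matching "-- # true" trace line after each failure is the usual '|| true' idiom that keeps the cleanup idempotent under the harness's error handling. A condensed sketch of that pattern, not a line-for-line copy of common.sh@162-174:

    # Idempotent cleanup sketch: every step may fail harmlessly on a fresh machine.
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster || true   # detach from the bridge if it exists
        ip link set "$dev" down     || true
    done
    ip link delete nvmf_br type bridge || true
    ip link delete nvmf_init_if  || true
    ip link delete nvmf_init_if2 || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if  || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 || true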
00:32:22.421   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:32:22.421   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:32:22.421   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:32:22.421   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:32:22.421   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:32:22.421   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:32:22.421   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:32:22.421   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:32:22.421   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:32:22.421   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:32:22.421   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:32:22.421   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:32:22.421   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:32:22.421   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:32:22.421   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:32:22.421   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:32:22.421   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:32:22.421   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:32:22.681   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:32:22.681   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:32:22.681   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:32:22.681   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:32:22.681   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:32:22.681   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:32:22.681   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:32:22.681   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:32:22.681   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:32:22.681   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:32:22.681   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:32:22.681   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:32:22.681   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:32:22.681   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
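Taken together, common.sh@177-219 builds the test topology: the nvmf_tgt_ns_spdk namespace holds the target ends of two veth pairs (nvmf_tgt_if/if2 at 10.0.0.3 and 10.0.0.4), the initiator ends (nvmf_init_if/if2 at 10.0.0.1 and 10.0.0.2) stay in the root namespace, the four bridge-side peers are enslaved to nvmf_br, and the ipts wrapper inserts iptables ACCEPT rules for TCP port 4420 tagged with an SPDK_NVMF comment, presumably so teardown can later find and remove exactly these rules. A condensed single-pair sketch of the same topology (the run above does this for both pairs and also brings every link up):

    # Condensed topology sketch: one veth pair bridged into the target namespace.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end stays in the root ns
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target end moves into the ns
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'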
00:32:22.681   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:32:22.681  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:32:22.681  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.107 ms
00:32:22.681  
00:32:22.681  --- 10.0.0.3 ping statistics ---
00:32:22.681  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:22.681  rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms
00:32:22.681   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:32:22.681  PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:32:22.681  64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.094 ms
00:32:22.681  
00:32:22.681  --- 10.0.0.4 ping statistics ---
00:32:22.681  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:22.681  rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms
00:32:22.681   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:32:22.681  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:32:22.681  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms
00:32:22.681  
00:32:22.681  --- 10.0.0.1 ping statistics ---
00:32:22.681  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:22.681  rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms
00:32:22.681   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:32:22.681  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:32:22.681  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms
00:32:22.681  
00:32:22.681  --- 10.0.0.2 ping statistics ---
00:32:22.681  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:22.681  rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms
00:32:22.681   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:32:22.681   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@461 -- # return 0
00:32:22.681   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:32:22.681   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:32:22.681   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:32:22.681   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:32:22.681   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:32:22.681   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:32:22.681   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:32:22.681   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management
00:32:22.681   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget
00:32:22.681   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E
00:32:22.681   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:32:22.681   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable
00:32:22.681   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:32:22.681   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=123178
00:32:22.681   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 123178
00:32:22.681   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E
00:32:22.681   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 123178 ']'
00:32:22.681   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:22.681   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100
00:32:22.681   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:32:22.681  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:32:22.681   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable
00:32:22.681   19:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:32:22.681  [2024-12-13 19:16:54.457985] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:32:22.681  [2024-12-13 19:16:54.459298] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:32:22.681  [2024-12-13 19:16:54.459367] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:32:22.941  [2024-12-13 19:16:54.613894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:32:22.941  [2024-12-13 19:16:54.668477] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:32:22.941  [2024-12-13 19:16:54.668526] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:32:22.941  [2024-12-13 19:16:54.668548] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:32:22.941  [2024-12-13 19:16:54.668564] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:32:22.941  [2024-12-13 19:16:54.668592] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:32:22.941  [2024-12-13 19:16:54.669998] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:32:22.941  [2024-12-13 19:16:54.672279] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:32:22.941  [2024-12-13 19:16:54.672368] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4
00:32:22.941  [2024-12-13 19:16:54.672380] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:32:23.199  [2024-12-13 19:16:54.764120] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:32:23.199  [2024-12-13 19:16:54.764213] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:32:23.199  [2024-12-13 19:16:54.764421] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
00:32:23.199  [2024-12-13 19:16:54.764622] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:32:23.199  [2024-12-13 19:16:54.764910] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode.
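nvmfappstart then launches the target inside the namespace with the arguments assembled earlier (common.sh@508: ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E) and waitforlisten blocks until pid 123178 is serving /var/tmp/spdk.sock. The NOTICE lines above confirm what the flags asked for: interrupt mode enabled, four reactors on cores 1-4 (mask 0x1E), and the app_thread plus every nvmf poll-group thread switched to interrupt mode. A sketch of the launch-and-wait step; the polling loop below is a simplified stand-in for the real waitforlisten helper in autotest_common.sh:

    # Launch sketch: start the target in the test namespace and wait for its RPC socket.
    ip netns exec nvmf_tgt_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &
    nvmfpid=$!
    for _ in $(seq 1 100); do                     # simplified wait; the real helper also checks the pid is still alive
        [[ -S /var/tmp/spdk.sock ]] && break
        sleep 0.1
    done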
00:32:23.766   19:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:32:23.766   19:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0
00:32:23.766   19:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:32:23.766   19:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable
00:32:23.766   19:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:32:23.766   19:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:32:23.766   19:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:32:23.766   19:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:23.766   19:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:32:23.766  [2024-12-13 19:16:55.549574] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:32:23.766   19:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:23.766   19:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem
00:32:23.766   19:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable
00:32:23.766   19:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:32:23.766   19:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt
00:32:23.766   19:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat
00:32:24.026   19:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd
00:32:24.026   19:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:24.026   19:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:32:24.026  Malloc0
00:32:24.026  [2024-12-13 19:16:55.641606] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:32:24.026   19:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:24.026   19:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems
00:32:24.026   19:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable
00:32:24.026   19:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:32:24.026   19:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=123250
00:32:24.026   19:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 123250 /var/tmp/bdevperf.sock
00:32:24.026   19:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 123250 ']'
00:32:24.026   19:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:32:24.026    19:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0
00:32:24.026   19:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10
00:32:24.026   19:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100
00:32:24.026    19:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=()
00:32:24.026   19:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:32:24.026  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:32:24.026    19:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config
00:32:24.026   19:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable
00:32:24.026    19:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:32:24.026    19:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:32:24.026  {
00:32:24.026    "params": {
00:32:24.026      "name": "Nvme$subsystem",
00:32:24.026      "trtype": "$TEST_TRANSPORT",
00:32:24.026      "traddr": "$NVMF_FIRST_TARGET_IP",
00:32:24.026      "adrfam": "ipv4",
00:32:24.026      "trsvcid": "$NVMF_PORT",
00:32:24.026      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:32:24.026      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:32:24.026      "hdgst": ${hdgst:-false},
00:32:24.026      "ddgst": ${ddgst:-false}
00:32:24.026    },
00:32:24.026    "method": "bdev_nvme_attach_controller"
00:32:24.026  }
00:32:24.026  EOF
00:32:24.026  )")
00:32:24.026   19:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:32:24.026     19:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat
00:32:24.026    19:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq .
00:32:24.026     19:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=,
00:32:24.026     19:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:32:24.026    "params": {
00:32:24.026      "name": "Nvme0",
00:32:24.026      "trtype": "tcp",
00:32:24.026      "traddr": "10.0.0.3",
00:32:24.026      "adrfam": "ipv4",
00:32:24.026      "trsvcid": "4420",
00:32:24.026      "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:32:24.026      "hostnqn": "nqn.2016-06.io.spdk:host0",
00:32:24.026      "hdgst": false,
00:32:24.026      "ddgst": false
00:32:24.026    },
00:32:24.026    "method": "bdev_nvme_attach_controller"
00:32:24.026  }'
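host_management.sh@72 builds bdevperf's configuration on the fly: gen_nvmf_target_json expands the heredoc template into the concrete bdev_nvme_attach_controller call printed above (Nvme0 over TCP to 10.0.0.3:4420, nqn.2016-06.io.spdk:cnode0 / host0, digests off), assembles the final JSON (the jq . step), and the script feeds it to bdevperf through process substitution, which is why the traced command line reads --json /dev/fd/63. A sketch of the same hand-off, writing the generated config to a file instead of a pipe:

    # Hand-off sketch: generate the NVMe-oF attach config and run the traced bdevperf job against it.
    gen_nvmf_target_json 0 > /tmp/bdevperf.json      # helper from nvmf/common.sh, expanded in the trace above
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json /tmp/bdevperf.json -q 64 -o 65536 -w verify -t 10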
00:32:24.026  [2024-12-13 19:16:55.756775] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:32:24.026  [2024-12-13 19:16:55.756866] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123250 ]
00:32:24.285  [2024-12-13 19:16:55.914835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:24.285  [2024-12-13 19:16:55.951798] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:32:24.544  Running I/O for 10 seconds...
00:32:24.544   19:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:32:24.544   19:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0
00:32:24.544   19:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:32:24.544   19:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:24.544   19:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:32:24.544   19:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:24.544   19:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:32:24.544   19:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1
00:32:24.544   19:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:32:24.544   19:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']'
00:32:24.544   19:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1
00:32:24.544   19:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i
00:32:24.544   19:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 ))
00:32:24.544   19:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 ))
00:32:24.544    19:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1
00:32:24.544    19:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops'
00:32:24.544    19:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:24.544    19:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:32:24.544    19:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:24.544   19:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67
00:32:24.544   19:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']'
00:32:24.544   19:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25
00:32:24.805   19:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- ))
00:32:24.805   19:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 ))
00:32:24.805    19:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1
00:32:24.805    19:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:24.805    19:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops'
00:32:24.805    19:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:32:24.805    19:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:24.805   19:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579
00:32:24.805   19:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']'
00:32:24.805   19:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0
00:32:24.805   19:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break
00:32:24.805   19:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0
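waitforio (host_management.sh@45-64) is the gate before the host-management steps: it polls bdev_get_iostat over the bdevperf RPC socket up to ten times, 0.25 s apart, and succeeds once Nvme0n1 reports at least 100 completed reads. Above, the first sample is 67 ops and the second 579, so the loop breaks and returns 0. A sketch of that loop; scripts/rpc.py stands in for the repo's rpc_cmd wrapper used in the trace:

    # Polling-loop sketch: wait until the bdev has served enough read I/O (threshold 100, 10 tries).
    waitforio() {
        local sock=$1 bdev=$2 ret=1 i reads
        for ((i = 10; i != 0; i--)); do
            reads=$(scripts/rpc.py -s "$sock" bdev_get_iostat -b "$bdev" \
                | jq -r '.bdevs[0].num_read_ops')
            if [[ $reads -ge 100 ]]; then
                ret=0
                break
            fi
            sleep 0.25
        done
        return "$ret"
    }
    waitforio /var/tmp/bdevperf.sock Nvme0n1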
00:32:24.805   19:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:32:24.805   19:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:24.805   19:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:32:24.805  [2024-12-13 19:16:56.561389] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x713020 is same with the state(6) to be set
00:32:24.805  [2024-12-13 19:16:56.561443] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x713020 is same with the state(6) to be set
00:32:24.805  [2024-12-13 19:16:56.561454] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x713020 is same with the state(6) to be set
00:32:24.805   19:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:24.805   19:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:32:24.805   19:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:24.805   19:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:32:24.805  [2024-12-13 19:16:56.569007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:32:24.806  [2024-12-13 19:16:56.569067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:24.806  [2024-12-13 19:16:56.569098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:32:24.806  [2024-12-13 19:16:56.569108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:24.806  [2024-12-13 19:16:56.569117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:32:24.806  [2024-12-13 19:16:56.569127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:24.806  [2024-12-13 19:16:56.569136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:32:24.806  [2024-12-13 19:16:56.569145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:24.806  [2024-12-13 19:16:56.569154] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa862d0 is same with the state(6) to be set
00:32:24.806   19:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:24.806   19:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:32:24.806  [2024-12-13 19:16:56.579827] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa862d0 (9): Bad file descriptor
00:32:24.806  [2024-12-13 19:16:56.579956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:24.806  [2024-12-13 19:16:56.579973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:24.806  [2024-12-13 19:16:56.579990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:90240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:24.806  [2024-12-13 19:16:56.580000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:24.806  [2024-12-13 19:16:56.580011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:90368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:24.806  [2024-12-13 19:16:56.580020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:24.806  [2024-12-13 19:16:56.580030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:90496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:24.806  [2024-12-13 19:16:56.580038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:24.806  [2024-12-13 19:16:56.580049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:90624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:24.806  [2024-12-13 19:16:56.580057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:24.806  [2024-12-13 19:16:56.580067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:90752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:24.806  [2024-12-13 19:16:56.580075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:24.806  [2024-12-13 19:16:56.580102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:90880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:24.806  [2024-12-13 19:16:56.580126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:24.806  [2024-12-13 19:16:56.580137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:91008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:24.806  [2024-12-13 19:16:56.580146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:24.806  [2024-12-13 19:16:56.580157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:91136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:24.806  [2024-12-13 19:16:56.580166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:24.806  [2024-12-13 19:16:56.580177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:91264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:24.806  [2024-12-13 19:16:56.580186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:24.806  [2024-12-13 19:16:56.580197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:91392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:24.806  [2024-12-13 19:16:56.580205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:24.806  [2024-12-13 19:16:56.580216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:91520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:24.806  [2024-12-13 19:16:56.580225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:24.806  [2024-12-13 19:16:56.580235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:91648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:24.806  [2024-12-13 19:16:56.580244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:24.806  [2024-12-13 19:16:56.580255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:91776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:24.806  [2024-12-13 19:16:56.580275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:24.806  [2024-12-13 19:16:56.580289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:91904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:24.806  [2024-12-13 19:16:56.580298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:24.806  [2024-12-13 19:16:56.580309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:92032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:24.806  [2024-12-13 19:16:56.580319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:24.806  [2024-12-13 19:16:56.580330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:92160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:24.806  [2024-12-13 19:16:56.580340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:24.806  [2024-12-13 19:16:56.580351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:92288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:24.806  [2024-12-13 19:16:56.580360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:24.806  [2024-12-13 19:16:56.580371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:92416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:24.806  [2024-12-13 19:16:56.580381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:24.806  [2024-12-13 19:16:56.580392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:92544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:24.806  [2024-12-13 19:16:56.580401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:24.806  [2024-12-13 19:16:56.580412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:92672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:24.806  [2024-12-13 19:16:56.580421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:24.806  [2024-12-13 19:16:56.580432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:92800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:24.806  [2024-12-13 19:16:56.580441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:24.806  [2024-12-13 19:16:56.580452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:92928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:24.806  [2024-12-13 19:16:56.580461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:24.806  [2024-12-13 19:16:56.580472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:93056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:24.806  [2024-12-13 19:16:56.580481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:24.806  [2024-12-13 19:16:56.580491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:93184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:24.806  [2024-12-13 19:16:56.580500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:24.806  [2024-12-13 19:16:56.580511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:93312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:24.806  [2024-12-13 19:16:56.580521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:24.806  [2024-12-13 19:16:56.580532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:93440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:24.806  [2024-12-13 19:16:56.580541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:24.806  [2024-12-13 19:16:56.580552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:93568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:24.806  [2024-12-13 19:16:56.580561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:24.806  [2024-12-13 19:16:56.580572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:93696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:24.806  [2024-12-13 19:16:56.580582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:24.806  [2024-12-13 19:16:56.580593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:93824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:24.806  [2024-12-13 19:16:56.580601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:24.806  [2024-12-13 19:16:56.580612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:93952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:24.807  [2024-12-13 19:16:56.580621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:24.807  [2024-12-13 19:16:56.580632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:94080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:24.807  [2024-12-13 19:16:56.580641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:24.807  [2024-12-13 19:16:56.580652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:94208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:24.807  [2024-12-13 19:16:56.580661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:24.807  [2024-12-13 19:16:56.580672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:94336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:24.807  [2024-12-13 19:16:56.580681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:24.807  [2024-12-13 19:16:56.580692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:94464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:24.807  [2024-12-13 19:16:56.580701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:24.807  [2024-12-13 19:16:56.580712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:94592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:24.807  [2024-12-13 19:16:56.580721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:24.807  [2024-12-13 19:16:56.580732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:94720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:24.807  [2024-12-13 19:16:56.580741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:24.807  [2024-12-13 19:16:56.580752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:94848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:24.807  [2024-12-13 19:16:56.580761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:24.807  [2024-12-13 19:16:56.580772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:94976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:24.807  [2024-12-13 19:16:56.580781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:24.807  [2024-12-13 19:16:56.580794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:95104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:24.807  [2024-12-13 19:16:56.580803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:24.807  [2024-12-13 19:16:56.580813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:95232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:24.807  [2024-12-13 19:16:56.580822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:24.807  [2024-12-13 19:16:56.580833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:95360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:24.807  [2024-12-13 19:16:56.580842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:24.807  [2024-12-13 19:16:56.580853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:95488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:24.807  [2024-12-13 19:16:56.580863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:24.807  [2024-12-13 19:16:56.580874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:95616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:24.807  [2024-12-13 19:16:56.580883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:24.807  [2024-12-13 19:16:56.580893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:95744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:24.807  [2024-12-13 19:16:56.580902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:24.807  [2024-12-13 19:16:56.580913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:95872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:24.807  [2024-12-13 19:16:56.580923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:24.807  [2024-12-13 19:16:56.580933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:96000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:24.807  [2024-12-13 19:16:56.580943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:24.807  [2024-12-13 19:16:56.580953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:96128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:24.807  [2024-12-13 19:16:56.580963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:24.807  [2024-12-13 19:16:56.580973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:96256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:24.807  [2024-12-13 19:16:56.580982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:24.807  [2024-12-13 19:16:56.580993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:96384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:24.807  [2024-12-13 19:16:56.581002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:24.807  [2024-12-13 19:16:56.581014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:96512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:24.807  [2024-12-13 19:16:56.581023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:24.807  [2024-12-13 19:16:56.581034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:96640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:24.807  [2024-12-13 19:16:56.581043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:24.807  [2024-12-13 19:16:56.581054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:96768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:24.807  [2024-12-13 19:16:56.581063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:24.807  [2024-12-13 19:16:56.581074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:96896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:24.807  [2024-12-13 19:16:56.581083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:24.807  [2024-12-13 19:16:56.581095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:97024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:24.807  [2024-12-13 19:16:56.581104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:24.807  [2024-12-13 19:16:56.581115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:97152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:24.807  [2024-12-13 19:16:56.581124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:24.807  [2024-12-13 19:16:56.581134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:97280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:24.807  [2024-12-13 19:16:56.581143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:24.807  [2024-12-13 19:16:56.581154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:97408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:24.807  [2024-12-13 19:16:56.581163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:24.807  [2024-12-13 19:16:56.581174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:97536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:24.807  [2024-12-13 19:16:56.581183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:24.807  [2024-12-13 19:16:56.581194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:97664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:24.807  [2024-12-13 19:16:56.581204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:24.807  [2024-12-13 19:16:56.581214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:97792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:24.807  [2024-12-13 19:16:56.581234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:24.807  [2024-12-13 19:16:56.581246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:97920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:24.807  [2024-12-13 19:16:56.581256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:24.807  [2024-12-13 19:16:56.581267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:98048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:24.807  [2024-12-13 19:16:56.581276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:24.807  [2024-12-13 19:16:56.581287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:98176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:24.807  [2024-12-13 19:16:56.581296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:24.807  [2024-12-13 19:16:56.582513] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:32:24.807  task offset: 90112 on job bdev=Nvme0n1 fails
00:32:24.807  
00:32:24.807                                                                                                  Latency(us)
00:32:24.807  
[2024-12-13T19:16:56.631Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:32:24.807  Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:32:24.807  Job: Nvme0n1 ended in about 0.46 seconds with error
00:32:24.808  	 Verification LBA range: start 0x0 length 0x400
00:32:24.808  	 Nvme0n1             :       0.46    1543.56      96.47     140.32     0.00   36780.52    1861.82   36700.16
00:32:24.808  
[2024-12-13T19:16:56.632Z]  ===================================================================================================================
00:32:24.808  
[2024-12-13T19:16:56.632Z]  Total                       :               1543.56      96.47     140.32     0.00   36780.52    1861.82   36700.16
00:32:24.808  [2024-12-13 19:16:56.584462] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:32:24.808  [2024-12-13 19:16:56.587423] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful.
00:32:26.186   19:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 123250
00:32:26.186  /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (123250) - No such process
00:32:26.186   19:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true
00:32:26.186   19:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:32:26.186    19:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:32:26.186   19:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:32:26.186    19:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=()
00:32:26.186    19:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config
00:32:26.186    19:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:32:26.186    19:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:32:26.186  {
00:32:26.186    "params": {
00:32:26.186      "name": "Nvme$subsystem",
00:32:26.186      "trtype": "$TEST_TRANSPORT",
00:32:26.186      "traddr": "$NVMF_FIRST_TARGET_IP",
00:32:26.186      "adrfam": "ipv4",
00:32:26.186      "trsvcid": "$NVMF_PORT",
00:32:26.186      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:32:26.186      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:32:26.186      "hdgst": ${hdgst:-false},
00:32:26.186      "ddgst": ${ddgst:-false}
00:32:26.186    },
00:32:26.186    "method": "bdev_nvme_attach_controller"
00:32:26.186  }
00:32:26.186  EOF
00:32:26.186  )")
00:32:26.186     19:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat
00:32:26.186    19:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq .
00:32:26.186     19:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=,
00:32:26.186     19:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:32:26.186    "params": {
00:32:26.186      "name": "Nvme0",
00:32:26.186      "trtype": "tcp",
00:32:26.186      "traddr": "10.0.0.3",
00:32:26.186      "adrfam": "ipv4",
00:32:26.186      "trsvcid": "4420",
00:32:26.186      "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:32:26.186      "hostnqn": "nqn.2016-06.io.spdk:host0",
00:32:26.186      "hdgst": false,
00:32:26.186      "ddgst": false
00:32:26.186    },
00:32:26.186    "method": "bdev_nvme_attach_controller"
00:32:26.186  }'
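The trace above is SPDK's gen_nvmf_target_json helper expanding its per-controller heredoc template and handing the result to bdevperf on an anonymous fd (shown as --json /dev/fd/62). A minimal sketch of the template-expansion step follows, assuming TEST_TRANSPORT, NVMF_FIRST_TARGET_IP and NVMF_PORT are already exported; how the helper assembles the final document consumed by bdevperf is only partially visible in the xtrace, so only the entry generation is reproduced here.

# Sketch only: field names match the template printed above; assembling this
# entry into the complete config that bdevperf reads is left to the helper.
gen_attach_entry() {
    local subsystem=${1:-0}
    cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

# Pretty-print/validate with jq, as the helper does before handing the JSON
# over; in the command line above the process substitution appears as
# "--json /dev/fd/62 -q 64 -o 65536 -w verify -t 1".
gen_attach_entry 0 | jq .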
00:32:26.186  [2024-12-13 19:16:57.642786] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:32:26.186  [2024-12-13 19:16:57.642888] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123295 ]
00:32:26.186  [2024-12-13 19:16:57.791899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:26.186  [2024-12-13 19:16:57.826939] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:32:26.186  Running I/O for 1 seconds...
00:32:27.565       1664.00 IOPS,   104.00 MiB/s
00:32:27.565                                                                                                  Latency(us)
00:32:27.565  
[2024-12-13T19:16:59.389Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:32:27.565  Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:32:27.565  	 Verification LBA range: start 0x0 length 0x400
00:32:27.565  	 Nvme0n1             :       1.01    1704.12     106.51       0.00     0.00   36862.08    4974.78   33602.09
00:32:27.565  
[2024-12-13T19:16:59.389Z]  ===================================================================================================================
00:32:27.565  
[2024-12-13T19:16:59.389Z]  Total                       :               1704.12     106.51       0.00     0.00   36862.08    4974.78   33602.09
00:32:27.565   19:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:32:27.565   19:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:32:27.565   19:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf
00:32:27.565   19:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt
00:32:27.565   19:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:32:27.565   19:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
00:32:27.565   19:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:32:27.565   19:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:32:27.565   19:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:32:27.565   19:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:32:27.565   19:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:32:27.565  rmmod nvme_tcp
00:32:27.565  rmmod nvme_fabrics
00:32:27.565  rmmod nvme_keyring
00:32:27.565   19:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:32:27.565   19:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e
00:32:27.565   19:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0
00:32:27.565   19:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 123178 ']'
00:32:27.565   19:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 123178
00:32:27.565   19:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 123178 ']'
00:32:27.565   19:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 123178
00:32:27.565    19:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname
00:32:27.565   19:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:32:27.565    19:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 123178
00:32:27.565  killing process with pid 123178
00:32:27.565   19:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:32:27.565   19:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:32:27.565   19:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 123178'
00:32:27.565   19:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 123178
00:32:27.565   19:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 123178
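The kill sequence above follows the killprocess pattern from autotest_common.sh: confirm a pid was recorded, probe it with kill -0, log its command name from ps, then kill it and wait for it to be reaped. A minimal sketch, assuming the target is a child of the current shell (so wait applies) and skipping the uname/sudo special-casing visible in the trace:

killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1                  # no pid recorded, nothing to kill
    kill -0 "$pid" 2>/dev/null || return 0     # process already gone
    local name
    name=$(ps --no-headers -o comm= "$pid")
    echo "killing process with pid $pid ($name)"
    kill "$pid"
    wait "$pid" 2>/dev/null || true            # reap it; ignore its exit status
}

# e.g. killprocess "$nvmfpid"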
00:32:27.824  [2024-12-13 19:16:59.525447] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2
00:32:27.824   19:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:32:27.824   19:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:32:27.824   19:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:32:27.824   19:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr
00:32:27.824   19:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save
00:32:27.824   19:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:32:27.824   19:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore
00:32:27.824   19:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:32:27.824   19:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:32:27.824   19:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:32:27.824   19:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:32:27.824   19:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:32:27.824   19:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:32:27.824   19:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:32:27.824   19:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:32:27.824   19:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:32:27.824   19:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:32:27.825   19:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:32:28.096   19:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:32:28.096   19:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:32:28.096   19:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:32:28.096   19:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:32:28.096   19:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns
00:32:28.096   19:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:32:28.096   19:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:32:28.097    19:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:32:28.097   19:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@300 -- # return 0
00:32:28.097   19:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT
00:32:28.097  
00:32:28.097  real	0m6.025s
00:32:28.097  user	0m16.970s
00:32:28.097  sys	0m2.577s
00:32:28.097   19:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable
00:32:28.097   19:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:32:28.097  ************************************
00:32:28.097  END TEST nvmf_host_management
00:32:28.097  ************************************
00:32:28.097   19:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode
00:32:28.097   19:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:32:28.097   19:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:32:28.097   19:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:32:28.097  ************************************
00:32:28.097  START TEST nvmf_lvol
00:32:28.097  ************************************
00:32:28.097   19:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode
00:32:28.371  * Looking for test storage...
00:32:28.371  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:32:28.371    19:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:32:28.371     19:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version
00:32:28.371     19:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:32:28.371    19:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:32:28.371    19:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:32:28.371    19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l
00:32:28.371    19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l
00:32:28.371    19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-:
00:32:28.371    19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1
00:32:28.371    19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-:
00:32:28.371    19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2
00:32:28.371    19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<'
00:32:28.371    19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2
00:32:28.371    19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1
00:32:28.371    19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:32:28.371    19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in
00:32:28.371    19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1
00:32:28.371    19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 ))
00:32:28.371    19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:32:28.371     19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1
00:32:28.371     19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1
00:32:28.371     19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:32:28.371     19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1
00:32:28.371    19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1
00:32:28.371     19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2
00:32:28.371     19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2
00:32:28.371     19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:32:28.371     19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2
00:32:28.371    19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2
00:32:28.371    19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:32:28.371    19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:32:28.371    19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0
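The xtrace above is scripts/common.sh deciding whether the installed lcov predates version 2: lt splits both version strings on '.', '-' and ':' into arrays, walks them element by element up to the longer length, and returns as soon as one component orders the two. A minimal sketch of that comparison, assuming purely numeric components (the real script routes each component through its decimal helper first):

lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {
    local IFS=.-:                 # split on dots, dashes and colons, as in the trace
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    local op=$2
    read -ra ver2 <<< "$3"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}            # missing components count as 0
        (( a > b )) && { [[ $op == '>' ]]; return; }     # first difference decides
        (( a < b )) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == '==' || $op == '<=' || $op == '>=' ]]      # all components equal
}

lt 1.15 2 && echo "lcov 1.15 is older than 2"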
00:32:28.371    19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:32:28.371    19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:32:28.371  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:28.371  		--rc genhtml_branch_coverage=1
00:32:28.371  		--rc genhtml_function_coverage=1
00:32:28.371  		--rc genhtml_legend=1
00:32:28.371  		--rc geninfo_all_blocks=1
00:32:28.371  		--rc geninfo_unexecuted_blocks=1
00:32:28.371  		
00:32:28.371  		'
00:32:28.371    19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:32:28.371  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:28.371  		--rc genhtml_branch_coverage=1
00:32:28.371  		--rc genhtml_function_coverage=1
00:32:28.371  		--rc genhtml_legend=1
00:32:28.371  		--rc geninfo_all_blocks=1
00:32:28.371  		--rc geninfo_unexecuted_blocks=1
00:32:28.371  		
00:32:28.371  		'
00:32:28.371    19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:32:28.371  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:28.371  		--rc genhtml_branch_coverage=1
00:32:28.371  		--rc genhtml_function_coverage=1
00:32:28.371  		--rc genhtml_legend=1
00:32:28.371  		--rc geninfo_all_blocks=1
00:32:28.371  		--rc geninfo_unexecuted_blocks=1
00:32:28.371  		
00:32:28.371  		'
00:32:28.371    19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:32:28.371  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:28.371  		--rc genhtml_branch_coverage=1
00:32:28.371  		--rc genhtml_function_coverage=1
00:32:28.371  		--rc genhtml_legend=1
00:32:28.371  		--rc geninfo_all_blocks=1
00:32:28.371  		--rc geninfo_unexecuted_blocks=1
00:32:28.371  		
00:32:28.371  		'
00:32:28.371   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:32:28.371     19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s
00:32:28.371    19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:32:28.372    19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:32:28.372    19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:32:28.372    19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:32:28.372    19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:32:28.372    19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:32:28.372    19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:32:28.372    19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:32:28.372    19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:32:28.372     19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:32:28.372    19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:32:28.372    19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:32:28.372    19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:32:28.372    19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:32:28.372    19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:32:28.372    19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:32:28.372    19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:32:28.372     19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob
00:32:28.372     19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:32:28.372     19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:32:28.372     19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:32:28.372      19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:28.372      19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:28.372      19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:28.372      19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH
00:32:28.372      19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:28.372    19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0
00:32:28.372    19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:32:28.372    19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:32:28.372    19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:32:28.372    19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:32:28.372    19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:32:28.372    19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:32:28.372    19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:32:28.372    19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:32:28.372    19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:32:28.372    19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0
00:32:28.372   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64
00:32:28.372   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:32:28.372   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20
00:32:28.372   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30
00:32:28.372   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:32:28.372   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit
00:32:28.372   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:32:28.372   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:32:28.372   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs
00:32:28.372   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no
00:32:28.372   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns
00:32:28.372   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:32:28.372   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:32:28.372    19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:32:28.372   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:32:28.372   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:32:28.372   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:32:28.372   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:32:28.372   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:32:28.372   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init
00:32:28.372   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:32:28.372   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:32:28.372   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:32:28.372   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:32:28.372   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:32:28.372   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:32:28.372   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:32:28.372   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:32:28.372   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:32:28.372   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:32:28.372   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:32:28.372   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:32:28.372   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:32:28.372   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:32:28.372   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:32:28.372   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:32:28.372   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:32:28.372  Cannot find device "nvmf_init_br"
00:32:28.372   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@162 -- # true
00:32:28.372   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:32:28.372  Cannot find device "nvmf_init_br2"
00:32:28.372   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@163 -- # true
00:32:28.372   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:32:28.372  Cannot find device "nvmf_tgt_br"
00:32:28.372   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@164 -- # true
00:32:28.373   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:32:28.373  Cannot find device "nvmf_tgt_br2"
00:32:28.373   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@165 -- # true
00:32:28.373   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:32:28.373  Cannot find device "nvmf_init_br"
00:32:28.373   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@166 -- # true
00:32:28.373   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:32:28.373  Cannot find device "nvmf_init_br2"
00:32:28.373   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@167 -- # true
00:32:28.373   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:32:28.373  Cannot find device "nvmf_tgt_br"
00:32:28.373   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@168 -- # true
00:32:28.373   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:32:28.373  Cannot find device "nvmf_tgt_br2"
00:32:28.373   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@169 -- # true
00:32:28.373   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:32:28.373  Cannot find device "nvmf_br"
00:32:28.373   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@170 -- # true
00:32:28.373   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:32:28.373  Cannot find device "nvmf_init_if"
00:32:28.373   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@171 -- # true
00:32:28.373   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:32:28.373  Cannot find device "nvmf_init_if2"
00:32:28.373   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@172 -- # true
00:32:28.373   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:32:28.373  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:32:28.373   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@173 -- # true
00:32:28.373   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:32:28.373  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:32:28.373   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@174 -- # true
00:32:28.373   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:32:28.633   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:32:28.633   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:32:28.633   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:32:28.633   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:32:28.633   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:32:28.633   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:32:28.633   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:32:28.633   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:32:28.633   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:32:28.633   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:32:28.633   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:32:28.633   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:32:28.633   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:32:28.633   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:32:28.633   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:32:28.633   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:32:28.633   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:32:28.633   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:32:28.633   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:32:28.633   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:32:28.633   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:32:28.633   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:32:28.633   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:32:28.633   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:32:28.633   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:32:28.633   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:32:28.633   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:32:28.633   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:32:28.633   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:32:28.633   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:32:28.633   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
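Everything from "ip netns add" down to the iptables rules above is nvmf_veth_init building the virtual test network: two host-side initiator interfaces (10.0.0.1/.2), two target interfaces moved into the nvmf_tgt_ns_spdk namespace (10.0.0.3/.4), their veth peers bridged together over nvmf_br, and ACCEPT rules for the 4420 listener. A condensed sketch of the same topology, assuming root on a clean host (the SPDK_NVMF comment tagging and error handling from the trace are dropped):

ip netns add nvmf_tgt_ns_spdk

for i in "" 2; do
    ip link add "nvmf_init_if$i" type veth peer name "nvmf_init_br$i"   # initiator side
    ip link add "nvmf_tgt_if$i"  type veth peer name "nvmf_tgt_br$i"    # target side
    ip link set "nvmf_tgt_if$i" netns nvmf_tgt_ns_spdk                  # target end lives in the namespace
done

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

ip link add nvmf_br type bridge && ip link set nvmf_br up
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br          # bridge the host-side veth peers together
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Let NVMe/TCP traffic reach the 4420 listener, as in the ipts calls above.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT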
00:32:28.633   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:32:28.633  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:32:28.633  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms
00:32:28.633  
00:32:28.633  --- 10.0.0.3 ping statistics ---
00:32:28.633  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:28.633  rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms
00:32:28.633   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:32:28.633  PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:32:28.633  64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.031 ms
00:32:28.633  
00:32:28.633  --- 10.0.0.4 ping statistics ---
00:32:28.633  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:28.633  rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms
00:32:28.633   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:32:28.633  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:32:28.633  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms
00:32:28.633  
00:32:28.633  --- 10.0.0.1 ping statistics ---
00:32:28.633  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:28.633  rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms
00:32:28.633   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:32:28.633  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:32:28.633  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms
00:32:28.633  
00:32:28.633  --- 10.0.0.2 ping statistics ---
00:32:28.633  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:28.633  rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms
00:32:28.633   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:32:28.633   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@461 -- # return 0
00:32:28.633   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:32:28.633   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:32:28.633   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:32:28.633   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:32:28.633   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:32:28.633   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:32:28.633   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp
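[annotation] common.sh@227 rewraps NVMF_APP so every later nvmf_tgt launch is prefixed with "ip netns exec nvmf_tgt_ns_spdk", and @482-@493 pin the transport options to "-t tcp -o" before nvme-tcp is loaded on the host side. The effective launch command that results is the one shown a few lines below at common.sh@508, roughly:

    # effect of NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7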
00:32:28.633   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7
00:32:28.633   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:32:28.633   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable
00:32:28.633   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:32:28.633   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=123557
00:32:28.633   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7
00:32:28.633   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 123557
00:32:28.633   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 123557 ']'
00:32:28.633   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:28.633   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100
00:32:28.633  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:32:28.633   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:32:28.633   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable
00:32:28.633   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
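[annotation] waitforlisten blocks until the freshly started nvmf_tgt (pid 123557) answers on /var/tmp/spdk.sock; the trace shows max_retries=100. A hedged sketch of that polling loop -- the use of "rpc.py rpc_get_methods" as the probe is an assumption, not something shown in this log:

    # poll the RPC socket until the target responds (sketch; probe command is assumed)
    for ((i = 0; i < 100; i++)); do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.5
    done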
00:32:28.891  [2024-12-13 19:17:00.517401] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:32:28.891  [2024-12-13 19:17:00.518730] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:32:28.892  [2024-12-13 19:17:00.518804] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:32:28.892  [2024-12-13 19:17:00.672321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:32:28.892  [2024-12-13 19:17:00.709108] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:32:28.892  [2024-12-13 19:17:00.709179] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:32:28.892  [2024-12-13 19:17:00.709204] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:32:28.892  [2024-12-13 19:17:00.709216] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:32:28.892  [2024-12-13 19:17:00.709239] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:32:28.892  [2024-12-13 19:17:00.710430] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:32:28.892  [2024-12-13 19:17:00.710569] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:32:28.892  [2024-12-13 19:17:00.710579] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:32:29.150  [2024-12-13 19:17:00.806714] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:32:29.150  [2024-12-13 19:17:00.806957] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:32:29.150  [2024-12-13 19:17:00.807273] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:32:29.150  [2024-12-13 19:17:00.807360] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
00:32:29.150   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:32:29.150   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0
00:32:29.150   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:32:29.150   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable
00:32:29.150   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:32:29.150   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:32:29.150   19:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:32:29.409  [2024-12-13 19:17:01.175556] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:32:29.409    19:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:32:29.668   19:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 '
00:32:29.668    19:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:32:29.927   19:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1
00:32:29.927   19:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
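[annotation] The three RPCs above build the backing device for the lvol store: two 64 MiB malloc bdevs with 512-byte blocks, striped into a RAID-0 named raid0. Restated as a standalone sequence (sizes and names are taken from the trace; the flag meanings follow the usual rpc.py conventions and are noted as such):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 64 512     # 64 MiB bdev, 512 B block size -> Malloc0
    $rpc bdev_malloc_create 64 512     # second 64 MiB bdev            -> Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'   # -z strip size (KiB), -r RAID level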
00:32:30.186    19:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs
00:32:30.445   19:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=04f23610-b9c3-4a04-8516-c80e187c7714
00:32:30.445    19:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 04f23610-b9c3-4a04-8516-c80e187c7714 lvol 20
00:32:31.014   19:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=02a8b161-d56a-44ec-ab2b-2755edd1fa23
00:32:31.014   19:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:32:31.014   19:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 02a8b161-d56a-44ec-ab2b-2755edd1fa23
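[annotation] With the raid0 in place, the script layers an lvol store on top, carves out a 20 MiB lvol (the "20" argument, in MiB by the usual rpc.py convention), and exposes it as namespace 1 of a new NVMe-oF subsystem. Condensed, with the UUIDs returned above:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_lvol_create_lvstore raid0 lvs                                   # -> 04f23610-...
    $rpc bdev_lvol_create -u 04f23610-b9c3-4a04-8516-c80e187c7714 lvol 20     # 20 MiB lvol -> 02a8b161-...
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0         # -a allow any host, -s serial
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 02a8b161-d56a-44ec-ab2b-2755edd1fa23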
00:32:31.272   19:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
00:32:31.531  [2024-12-13 19:17:03.275709] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:32:31.531   19:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
00:32:31.791   19:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=123687
00:32:31.791   19:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18
00:32:31.791   19:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1
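[annotation] spdk_nvme_perf is launched in the background against the 10.0.0.3:4420 listener: 4 KiB writes (-o 4096), random write workload (-w randwrite), queue depth 128 (-q 128), 10 seconds (-t 10), pinned to cores 3 and 4 (-c 0x18). The "sleep 1" simply gives the initiator time to connect before the lvol operations below are issued against the live volume.

    # core mask arithmetic: 0x18 = 24 = (1<<4) + (1<<3) -> cores 3 and 4, matching the lcore lines in the perf output
    printf '%d\n' 0x18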
00:32:32.729    19:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 02a8b161-d56a-44ec-ab2b-2755edd1fa23 MY_SNAPSHOT
00:32:33.298   19:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=d790b98b-0923-443c-b014-bd7f3456bc29
00:32:33.298   19:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 02a8b161-d56a-44ec-ab2b-2755edd1fa23 30
00:32:33.556    19:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone d790b98b-0923-443c-b014-bd7f3456bc29 MY_CLONE
00:32:33.814   19:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=f110e4e9-bdf4-40c6-a0ae-2e7ed399b654
00:32:33.814   19:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate f110e4e9-bdf4-40c6-a0ae-2e7ed399b654
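[annotation] While the perf job keeps writing, the script exercises the thin-provisioning paths on the live lvol: snapshot it, grow the original from 20 MiB to 30 MiB, clone the snapshot, then inflate the clone so it no longer depends on the snapshot. Condensed, with the UUIDs reported above:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_lvol_snapshot 02a8b161-d56a-44ec-ab2b-2755edd1fa23 MY_SNAPSHOT   # -> d790b98b-...
    $rpc bdev_lvol_resize   02a8b161-d56a-44ec-ab2b-2755edd1fa23 30            # grow lvol to 30 MiB
    $rpc bdev_lvol_clone    d790b98b-0923-443c-b014-bd7f3456bc29 MY_CLONE      # -> f110e4e9-...
    $rpc bdev_lvol_inflate  f110e4e9-bdf4-40c6-a0ae-2e7ed399b654               # allocate all clusters, decouple clone from snapshot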
00:32:34.381   19:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 123687
00:32:42.502  Initializing NVMe Controllers
00:32:42.502  Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0
00:32:42.502  Controller IO queue size 128, less than required.
00:32:42.502  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:32:42.502  Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3
00:32:42.502  Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4
00:32:42.502  Initialization complete. Launching workers.
00:32:42.502  ========================================================
00:32:42.502  Device Information                                                       :       IOPS      MiB/s  Average(us)    min(us)    max(us)
00:32:42.502  TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core  3:   11584.70      45.25   11050.88    2450.68   79426.49
00:32:42.502  TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core  4:   11631.70      45.44   11004.92     635.76   56817.64
00:32:42.502  ========================================================
00:32:42.502  Total                                                                    :   23216.40      90.69   11027.86     635.76   79426.49
00:32:42.502  
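[annotation] The summary is self-consistent: the two per-core rows sum to the Total row, and 23216.40 IOPS of 4096-byte writes is about 90.7 MiB/s, matching the MiB/s column. Quick check:

    # sanity-check the perf summary: IOPS * io_size in MiB/s
    python3 -c 'print(23216.40 * 4096 / 2**20)'    # ~90.69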
00:32:42.502   19:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:32:42.502   19:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 02a8b161-d56a-44ec-ab2b-2755edd1fa23
00:32:42.761   19:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 04f23610-b9c3-4a04-8516-c80e187c7714
00:32:42.761   19:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:32:42.761   19:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:32:42.761   19:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:32:42.761   19:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup
00:32:42.761   19:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync
00:32:43.021   19:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:32:43.021   19:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e
00:32:43.021   19:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20}
00:32:43.021   19:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:32:43.021  rmmod nvme_tcp
00:32:43.021  rmmod nvme_fabrics
00:32:43.021  rmmod nvme_keyring
00:32:43.021   19:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:32:43.021   19:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e
00:32:43.021   19:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0
00:32:43.021   19:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 123557 ']'
00:32:43.021   19:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 123557
00:32:43.021   19:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 123557 ']'
00:32:43.021   19:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 123557
00:32:43.021    19:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname
00:32:43.021   19:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:32:43.021    19:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 123557
00:32:43.021   19:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:32:43.021   19:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:32:43.021  killing process with pid 123557
00:32:43.021   19:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 123557'
00:32:43.021   19:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 123557
00:32:43.021   19:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 123557
00:32:43.280   19:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:32:43.280   19:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:32:43.280   19:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:32:43.280   19:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr
00:32:43.280   19:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:32:43.280   19:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save
00:32:43.280   19:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore
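[annotation] iptr (common.sh@297/@791) undoes the firewall changes by round-tripping the ruleset through iptables-save, dropping every line carrying the SPDK_NVMF comment tag that ipts added during setup, and feeding the result back to iptables-restore, so only the rules this test installed are removed. Equivalent one-liner:

    iptables-save | grep -v SPDK_NVMF | iptables-restore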
00:32:43.280   19:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:32:43.280   19:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:32:43.280   19:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:32:43.280   19:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:32:43.280   19:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:32:43.280   19:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:32:43.280   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:32:43.280   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:32:43.280   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:32:43.280   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:32:43.280   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:32:43.280   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:32:43.539   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:32:43.539   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:32:43.539   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:32:43.539   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns
00:32:43.539   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:32:43.539   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:32:43.539    19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:32:43.539  ************************************
00:32:43.539  END TEST nvmf_lvol
00:32:43.539  ************************************
00:32:43.539   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@300 -- # return 0
00:32:43.539  
00:32:43.539  real	0m15.341s
00:32:43.539  user	0m55.205s
00:32:43.539  sys	0m5.910s
00:32:43.539   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable
00:32:43.539   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:32:43.539   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode
00:32:43.539   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:32:43.539   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:32:43.539   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:32:43.539  ************************************
00:32:43.539  START TEST nvmf_lvs_grow
00:32:43.539  ************************************
00:32:43.539   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode
00:32:43.539  * Looking for test storage...
00:32:43.539  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:32:43.539    19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:32:43.539     19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version
00:32:43.539     19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:32:43.800    19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:32:43.800    19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:32:43.800    19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l
00:32:43.800    19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l
00:32:43.800    19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-:
00:32:43.800    19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1
00:32:43.800    19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-:
00:32:43.800    19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2
00:32:43.800    19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<'
00:32:43.800    19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2
00:32:43.800    19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1
00:32:43.800    19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:32:43.800    19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in
00:32:43.800    19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1
00:32:43.800    19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 ))
00:32:43.800    19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:32:43.800     19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1
00:32:43.800     19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1
00:32:43.800     19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:32:43.800     19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1
00:32:43.800    19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1
00:32:43.800     19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2
00:32:43.800     19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2
00:32:43.800     19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:32:43.800     19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2
00:32:43.800    19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2
00:32:43.800    19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:32:43.800    19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:32:43.800    19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0
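[annotation] The scripts/common.sh trace above is the "lt 1.15 2" / cmp_versions helper deciding whether the installed lcov is older than 2.x: both version strings are split on ".", "-" and ":", then compared field by field numerically, with missing fields treated as 0. A hedged sketch of that comparison (the function name and exact structure here are illustrative, not the verbatim helper):

    # returns success if $1 < $2, comparing dot/dash/colon separated numeric fields (sketch)
    version_lt() {
        local IFS='.-:' a b i
        read -ra a <<< "$1"; read -ra b <<< "$2"
        for ((i = 0; i < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); i++)); do
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        done
        return 1   # equal is not less-than
    }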
00:32:43.800    19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:32:43.800    19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:32:43.800  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:43.800  		--rc genhtml_branch_coverage=1
00:32:43.800  		--rc genhtml_function_coverage=1
00:32:43.800  		--rc genhtml_legend=1
00:32:43.800  		--rc geninfo_all_blocks=1
00:32:43.800  		--rc geninfo_unexecuted_blocks=1
00:32:43.800  		
00:32:43.800  		'
00:32:43.800    19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:32:43.800  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:43.800  		--rc genhtml_branch_coverage=1
00:32:43.800  		--rc genhtml_function_coverage=1
00:32:43.800  		--rc genhtml_legend=1
00:32:43.800  		--rc geninfo_all_blocks=1
00:32:43.800  		--rc geninfo_unexecuted_blocks=1
00:32:43.800  		
00:32:43.800  		'
00:32:43.800    19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:32:43.800  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:43.800  		--rc genhtml_branch_coverage=1
00:32:43.800  		--rc genhtml_function_coverage=1
00:32:43.800  		--rc genhtml_legend=1
00:32:43.800  		--rc geninfo_all_blocks=1
00:32:43.800  		--rc geninfo_unexecuted_blocks=1
00:32:43.800  		
00:32:43.800  		'
00:32:43.800    19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:32:43.800  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:43.800  		--rc genhtml_branch_coverage=1
00:32:43.800  		--rc genhtml_function_coverage=1
00:32:43.800  		--rc genhtml_legend=1
00:32:43.800  		--rc geninfo_all_blocks=1
00:32:43.800  		--rc geninfo_unexecuted_blocks=1
00:32:43.800  		
00:32:43.800  		'
00:32:43.800   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:32:43.800     19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s
00:32:43.800    19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:32:43.800    19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:32:43.800    19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:32:43.800    19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:32:43.800    19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:32:43.800    19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:32:43.800    19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:32:43.800    19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:32:43.800    19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:32:43.800     19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:32:43.800    19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:32:43.800    19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:32:43.800    19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:32:43.800    19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:32:43.800    19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:32:43.800    19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:32:43.800    19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:32:43.800     19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob
00:32:43.800     19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:32:43.800     19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:32:43.800     19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:32:43.801      19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:43.801      19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:43.801      19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:43.801      19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH
00:32:43.801      19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:43.801    19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0
00:32:43.801    19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:32:43.801    19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:32:43.801    19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:32:43.801    19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:32:43.801    19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:32:43.801    19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:32:43.801    19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:32:43.801    19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:32:43.801    19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:32:43.801    19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0
00:32:43.801   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:32:43.801   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:32:43.801   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit
00:32:43.801   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:32:43.801   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:32:43.801   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs
00:32:43.801   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no
00:32:43.801   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns
00:32:43.801   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:32:43.801   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:32:43.801    19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:32:43.801   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:32:43.801   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:32:43.801   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:32:43.801   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:32:43.801   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:32:43.801   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init
00:32:43.801   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:32:43.801   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:32:43.801   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:32:43.801   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:32:43.801   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:32:43.801   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:32:43.801   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:32:43.801   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:32:43.801   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:32:43.801   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:32:43.801   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:32:43.801   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:32:43.801   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:32:43.801   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:32:43.801   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:32:43.801   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:32:43.801   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:32:43.801  Cannot find device "nvmf_init_br"
00:32:43.801   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true
00:32:43.801   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:32:43.801  Cannot find device "nvmf_init_br2"
00:32:43.801   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true
00:32:43.801   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:32:43.801  Cannot find device "nvmf_tgt_br"
00:32:43.801   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true
00:32:43.801   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:32:43.801  Cannot find device "nvmf_tgt_br2"
00:32:43.801   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true
00:32:43.801   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:32:43.801  Cannot find device "nvmf_init_br"
00:32:43.801   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true
00:32:43.801   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:32:43.801  Cannot find device "nvmf_init_br2"
00:32:43.801   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true
00:32:43.801   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:32:43.801  Cannot find device "nvmf_tgt_br"
00:32:43.801   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true
00:32:43.801   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:32:43.801  Cannot find device "nvmf_tgt_br2"
00:32:43.801   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true
00:32:43.801   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:32:43.801  Cannot find device "nvmf_br"
00:32:43.801   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true
00:32:43.801   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:32:43.801  Cannot find device "nvmf_init_if"
00:32:43.801   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true
00:32:43.801   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:32:43.801  Cannot find device "nvmf_init_if2"
00:32:43.801   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true
00:32:43.802   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:32:43.802  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:32:43.802   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true
00:32:43.802   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:32:43.802  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:32:43.802   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true
00:32:43.802   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:32:43.802   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:32:43.802   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:32:43.802   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:32:44.070   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:32:44.070   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:32:44.070   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:32:44.070   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:32:44.070   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:32:44.070   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:32:44.070   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:32:44.070   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:32:44.070   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:32:44.070   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:32:44.070   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:32:44.070   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:32:44.070   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:32:44.070   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:32:44.070   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:32:44.070   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:32:44.070   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:32:44.070   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:32:44.070   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:32:44.070   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:32:44.070   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:32:44.070   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
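[annotation] For nvmf_lvs_grow the virtual topology is rebuilt from scratch: a fresh nvmf_tgt_ns_spdk namespace, veth pairs whose target ends move into the namespace (10.0.0.3, 10.0.0.4) while the initiator ends stay on the host (10.0.0.1, 10.0.0.2), and all bridge-side peers enslaved to nvmf_br. A condensed view of the commands above, one of the two symmetric pairs shown and the link-up steps omitted:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator side stays on the host
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target side goes into the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br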
00:32:44.070   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:32:44.070   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:32:44.070   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:32:44.070   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:32:44.070   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:32:44.070   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:32:44.070   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:32:44.070  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:32:44.070  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms
00:32:44.070  
00:32:44.070  --- 10.0.0.3 ping statistics ---
00:32:44.070  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:44.070  rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms
00:32:44.070   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:32:44.070  PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:32:44.070  64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.071 ms
00:32:44.070  
00:32:44.070  --- 10.0.0.4 ping statistics ---
00:32:44.070  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:44.070  rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms
00:32:44.070   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:32:44.070  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:32:44.070  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms
00:32:44.070  
00:32:44.070  --- 10.0.0.1 ping statistics ---
00:32:44.070  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:44.070  rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms
00:32:44.070   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:32:44.070  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:32:44.070  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms
00:32:44.070  
00:32:44.070  --- 10.0.0.2 ping statistics ---
00:32:44.070  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:44.070  rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms
00:32:44.070   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:32:44.070   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0
00:32:44.070   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:32:44.070   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:32:44.070   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:32:44.070   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:32:44.070   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:32:44.070   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:32:44.070   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:32:44.070   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1
00:32:44.070   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:32:44.070   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable
00:32:44.070   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:32:44.070   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=124111
00:32:44.070   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1
00:32:44.070   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 124111
00:32:44.070   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 124111 ']'
00:32:44.070   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:44.070   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100
00:32:44.070  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:32:44.070   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:32:44.070   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable
00:32:44.071   19:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:32:44.343  [2024-12-13 19:17:15.935899] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:32:44.343  [2024-12-13 19:17:15.937138] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:32:44.343  [2024-12-13 19:17:15.937205] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:32:44.343  [2024-12-13 19:17:16.088324] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:44.343  [2024-12-13 19:17:16.124015] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:32:44.343  [2024-12-13 19:17:16.124074] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:32:44.343  [2024-12-13 19:17:16.124089] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:32:44.343  [2024-12-13 19:17:16.124099] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:32:44.343  [2024-12-13 19:17:16.124109] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:32:44.343  [2024-12-13 19:17:16.124547] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:32:44.602  [2024-12-13 19:17:16.212717] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:32:44.602  [2024-12-13 19:17:16.212960] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:32:44.602   19:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:32:44.602   19:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0
00:32:44.602   19:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:32:44.602   19:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable
00:32:44.602   19:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:32:44.602   19:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:32:44.602   19:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:32:44.862  [2024-12-13 19:17:16.505452] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:32:44.862   19:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow
00:32:44.862   19:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:32:44.862   19:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable
00:32:44.862   19:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:32:44.862  ************************************
00:32:44.862  START TEST lvs_grow_clean
00:32:44.862  ************************************
00:32:44.862   19:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow
00:32:44.862   19:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol
00:32:44.862   19:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters
00:32:44.862   19:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid
00:32:44.862   19:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200
00:32:44.862   19:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400
00:32:44.862   19:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150
00:32:44.862   19:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
00:32:44.862   19:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
00:32:44.862    19:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:32:45.198   19:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev
00:32:45.198    19:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
00:32:45.456   19:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=7057e903-e767-45a6-a6a3-823a5b315d9e
00:32:45.456    19:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7057e903-e767-45a6-a6a3-823a5b315d9e
00:32:45.456    19:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters'
00:32:45.715   19:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49
00:32:45.715   19:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 ))
00:32:45.715    19:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 7057e903-e767-45a6-a6a3-823a5b315d9e lvol 150
00:32:45.974   19:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=a05969d1-ac49-43bb-8aa7-d2a7351635dd
00:32:45.974   19:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
00:32:45.974   19:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev
00:32:46.232  [2024-12-13 19:17:17.981349] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400
00:32:46.232  [2024-12-13 19:17:17.981505] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1
00:32:46.232  true
00:32:46.232    19:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7057e903-e767-45a6-a6a3-823a5b315d9e
00:32:46.232    19:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters'
00:32:46.491   19:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 ))
00:32:46.491   19:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:32:47.058   19:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a05969d1-ac49-43bb-8aa7-d2a7351635dd
00:32:47.058   19:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
00:32:47.317  [2024-12-13 19:17:19.017831] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:32:47.317   19:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
00:32:47.575   19:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=124254
00:32:47.575   19:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z
00:32:47.575   19:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:32:47.575   19:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 124254 /var/tmp/bdevperf.sock
00:32:47.575   19:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 124254 ']'
00:32:47.575   19:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:32:47.575   19:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100
00:32:47.575  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:32:47.575   19:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:32:47.575   19:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable
00:32:47.575   19:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x
00:32:47.575  [2024-12-13 19:17:19.316987] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:32:47.575  [2024-12-13 19:17:19.317100] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124254 ]
00:32:47.834  [2024-12-13 19:17:19.471241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:47.834  [2024-12-13 19:17:19.517232] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:32:48.771   19:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:32:48.771   19:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0
00:32:48.771   19:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
00:32:48.771  Nvme0n1
00:32:48.771   19:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000
00:32:49.031  [
00:32:49.031    {
00:32:49.031      "aliases": [
00:32:49.031        "a05969d1-ac49-43bb-8aa7-d2a7351635dd"
00:32:49.031      ],
00:32:49.031      "assigned_rate_limits": {
00:32:49.031        "r_mbytes_per_sec": 0,
00:32:49.031        "rw_ios_per_sec": 0,
00:32:49.031        "rw_mbytes_per_sec": 0,
00:32:49.031        "w_mbytes_per_sec": 0
00:32:49.031      },
00:32:49.031      "block_size": 4096,
00:32:49.031      "claimed": false,
00:32:49.031      "driver_specific": {
00:32:49.031        "mp_policy": "active_passive",
00:32:49.031        "nvme": [
00:32:49.031          {
00:32:49.031            "ctrlr_data": {
00:32:49.031              "ana_reporting": false,
00:32:49.031              "cntlid": 1,
00:32:49.031              "firmware_revision": "25.01",
00:32:49.031              "model_number": "SPDK bdev Controller",
00:32:49.031              "multi_ctrlr": true,
00:32:49.031              "oacs": {
00:32:49.031                "firmware": 0,
00:32:49.031                "format": 0,
00:32:49.031                "ns_manage": 0,
00:32:49.031                "security": 0
00:32:49.031              },
00:32:49.031              "serial_number": "SPDK0",
00:32:49.031              "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:32:49.031              "vendor_id": "0x8086"
00:32:49.031            },
00:32:49.031            "ns_data": {
00:32:49.031              "can_share": true,
00:32:49.031              "id": 1
00:32:49.031            },
00:32:49.031            "trid": {
00:32:49.031              "adrfam": "IPv4",
00:32:49.031              "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:32:49.031              "traddr": "10.0.0.3",
00:32:49.031              "trsvcid": "4420",
00:32:49.031              "trtype": "TCP"
00:32:49.031            },
00:32:49.031            "vs": {
00:32:49.031              "nvme_version": "1.3"
00:32:49.031            }
00:32:49.031          }
00:32:49.031        ]
00:32:49.031      },
00:32:49.031      "memory_domains": [
00:32:49.031        {
00:32:49.031          "dma_device_id": "system",
00:32:49.031          "dma_device_type": 1
00:32:49.031        }
00:32:49.031      ],
00:32:49.031      "name": "Nvme0n1",
00:32:49.031      "num_blocks": 38912,
00:32:49.031      "numa_id": -1,
00:32:49.031      "product_name": "NVMe disk",
00:32:49.031      "supported_io_types": {
00:32:49.031        "abort": true,
00:32:49.031        "compare": true,
00:32:49.031        "compare_and_write": true,
00:32:49.031        "copy": true,
00:32:49.031        "flush": true,
00:32:49.031        "get_zone_info": false,
00:32:49.031        "nvme_admin": true,
00:32:49.031        "nvme_io": true,
00:32:49.031        "nvme_io_md": false,
00:32:49.031        "nvme_iov_md": false,
00:32:49.031        "read": true,
00:32:49.031        "reset": true,
00:32:49.031        "seek_data": false,
00:32:49.031        "seek_hole": false,
00:32:49.031        "unmap": true,
00:32:49.031        "write": true,
00:32:49.031        "write_zeroes": true,
00:32:49.031        "zcopy": false,
00:32:49.031        "zone_append": false,
00:32:49.031        "zone_management": false
00:32:49.031      },
00:32:49.031      "uuid": "a05969d1-ac49-43bb-8aa7-d2a7351635dd",
00:32:49.031      "zoned": false
00:32:49.031    }
00:32:49.031  ]
00:32:49.031   19:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:32:49.031   19:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=124302
00:32:49.031   19:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2
00:32:49.290  Running I/O for 10 seconds...
00:32:50.225                                                                                                  Latency(us)
00:32:50.225  
[2024-12-13T19:17:22.049Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:32:50.225  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:32:50.225  	 Nvme0n1             :       1.00    6951.00      27.15       0.00     0.00       0.00       0.00       0.00
00:32:50.225  
[2024-12-13T19:17:22.049Z]  ===================================================================================================================
00:32:50.225  
[2024-12-13T19:17:22.049Z]  Total                       :               6951.00      27.15       0.00     0.00       0.00       0.00       0.00
00:32:50.225  
00:32:51.164   19:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 7057e903-e767-45a6-a6a3-823a5b315d9e
00:32:51.164  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:32:51.164  	 Nvme0n1             :       2.00    7062.00      27.59       0.00     0.00       0.00       0.00       0.00
00:32:51.164  
[2024-12-13T19:17:22.988Z]  ===================================================================================================================
00:32:51.164  
[2024-12-13T19:17:22.988Z]  Total                       :               7062.00      27.59       0.00     0.00       0.00       0.00       0.00
00:32:51.164  
00:32:51.423  true
00:32:51.423    19:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7057e903-e767-45a6-a6a3-823a5b315d9e
00:32:51.423    19:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters'
00:32:51.991   19:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99
00:32:51.991   19:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 ))
00:32:51.991   19:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 124302
00:32:52.251  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:32:52.251  	 Nvme0n1             :       3.00    7103.67      27.75       0.00     0.00       0.00       0.00       0.00
00:32:52.251  
[2024-12-13T19:17:24.075Z]  ===================================================================================================================
00:32:52.251  
[2024-12-13T19:17:24.075Z]  Total                       :               7103.67      27.75       0.00     0.00       0.00       0.00       0.00
00:32:52.251  
00:32:53.185  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:32:53.185  	 Nvme0n1             :       4.00    7144.75      27.91       0.00     0.00       0.00       0.00       0.00
00:32:53.185  
[2024-12-13T19:17:25.009Z]  ===================================================================================================================
00:32:53.185  
[2024-12-13T19:17:25.009Z]  Total                       :               7144.75      27.91       0.00     0.00       0.00       0.00       0.00
00:32:53.185  
00:32:54.122  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:32:54.122  	 Nvme0n1             :       5.00    7183.20      28.06       0.00     0.00       0.00       0.00       0.00
00:32:54.122  
[2024-12-13T19:17:25.946Z]  ===================================================================================================================
00:32:54.122  
[2024-12-13T19:17:25.946Z]  Total                       :               7183.20      28.06       0.00     0.00       0.00       0.00       0.00
00:32:54.122  
00:32:55.500  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:32:55.500  	 Nvme0n1             :       6.00    7179.17      28.04       0.00     0.00       0.00       0.00       0.00
00:32:55.500  
[2024-12-13T19:17:27.324Z]  ===================================================================================================================
00:32:55.500  
[2024-12-13T19:17:27.324Z]  Total                       :               7179.17      28.04       0.00     0.00       0.00       0.00       0.00
00:32:55.500  
00:32:56.438  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:32:56.438  	 Nvme0n1             :       7.00    7158.86      27.96       0.00     0.00       0.00       0.00       0.00
00:32:56.438  
[2024-12-13T19:17:28.262Z]  ===================================================================================================================
00:32:56.438  
[2024-12-13T19:17:28.262Z]  Total                       :               7158.86      27.96       0.00     0.00       0.00       0.00       0.00
00:32:56.438  
00:32:57.375  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:32:57.375  	 Nvme0n1             :       8.00    7138.12      27.88       0.00     0.00       0.00       0.00       0.00
00:32:57.375  
[2024-12-13T19:17:29.199Z]  ===================================================================================================================
00:32:57.375  
[2024-12-13T19:17:29.199Z]  Total                       :               7138.12      27.88       0.00     0.00       0.00       0.00       0.00
00:32:57.375  
00:32:58.312  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:32:58.312  	 Nvme0n1             :       9.00    7119.56      27.81       0.00     0.00       0.00       0.00       0.00
00:32:58.312  
[2024-12-13T19:17:30.136Z]  ===================================================================================================================
00:32:58.312  
[2024-12-13T19:17:30.136Z]  Total                       :               7119.56      27.81       0.00     0.00       0.00       0.00       0.00
00:32:58.312  
00:32:59.250  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:32:59.250  	 Nvme0n1             :      10.00    7121.60      27.82       0.00     0.00       0.00       0.00       0.00
00:32:59.250  
[2024-12-13T19:17:31.074Z]  ===================================================================================================================
00:32:59.250  
[2024-12-13T19:17:31.074Z]  Total                       :               7121.60      27.82       0.00     0.00       0.00       0.00       0.00
00:32:59.250  
00:32:59.250  
00:32:59.250                                                                                                  Latency(us)
00:32:59.250  
[2024-12-13T19:17:31.074Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:32:59.250  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:32:59.250  	 Nvme0n1             :      10.01    7126.26      27.84       0.00     0.00   17950.92    8579.26   66727.56
00:32:59.250  
[2024-12-13T19:17:31.074Z]  ===================================================================================================================
00:32:59.250  
[2024-12-13T19:17:31.074Z]  Total                       :               7126.26      27.84       0.00     0.00   17950.92    8579.26   66727.56
00:32:59.250  {
00:32:59.250    "results": [
00:32:59.250      {
00:32:59.250        "job": "Nvme0n1",
00:32:59.250        "core_mask": "0x2",
00:32:59.250        "workload": "randwrite",
00:32:59.250        "status": "finished",
00:32:59.250        "queue_depth": 128,
00:32:59.250        "io_size": 4096,
00:32:59.250        "runtime": 10.011418,
00:32:59.250        "iops": 7126.26323264097,
00:32:59.250        "mibps": 27.83696575250379,
00:32:59.250        "io_failed": 0,
00:32:59.250        "io_timeout": 0,
00:32:59.250        "avg_latency_us": 17950.92188138392,
00:32:59.250        "min_latency_us": 8579.258181818182,
00:32:59.250        "max_latency_us": 66727.56363636363
00:32:59.250      }
00:32:59.250    ],
00:32:59.250    "core_count": 1
00:32:59.250  }
00:32:59.250   19:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 124254
00:32:59.250   19:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 124254 ']'
00:32:59.250   19:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 124254
00:32:59.250    19:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname
00:32:59.250   19:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:32:59.250    19:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 124254
00:32:59.250   19:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:32:59.250   19:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:32:59.250  killing process with pid 124254
00:32:59.250   19:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 124254'
00:32:59.250   19:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 124254
00:32:59.250  Received shutdown signal, test time was about 10.000000 seconds
00:32:59.250  
00:32:59.250                                                                                                  Latency(us)
00:32:59.251  
[2024-12-13T19:17:31.075Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:32:59.251  
[2024-12-13T19:17:31.075Z]  ===================================================================================================================
00:32:59.251  
[2024-12-13T19:17:31.075Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:32:59.251   19:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 124254
00:32:59.510   19:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420
00:32:59.769   19:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:33:00.033    19:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters'
00:33:00.033    19:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7057e903-e767-45a6-a6a3-823a5b315d9e
00:33:00.299   19:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61
00:33:00.299   19:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]]
00:33:00.299   19:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:33:00.299  [2024-12-13 19:17:32.097405] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs
00:33:00.558   19:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7057e903-e767-45a6-a6a3-823a5b315d9e
00:33:00.558   19:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0
00:33:00.558   19:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7057e903-e767-45a6-a6a3-823a5b315d9e
00:33:00.558   19:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:33:00.558   19:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:33:00.558    19:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:33:00.558   19:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:33:00.558    19:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:33:00.558   19:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:33:00.559   19:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:33:00.559   19:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]]
00:33:00.559   19:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7057e903-e767-45a6-a6a3-823a5b315d9e
00:33:00.559  2024/12/13 19:17:32 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:7057e903-e767-45a6-a6a3-823a5b315d9e], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device
00:33:00.559  request:
00:33:00.559  {
00:33:00.559    "method": "bdev_lvol_get_lvstores",
00:33:00.559    "params": {
00:33:00.559      "uuid": "7057e903-e767-45a6-a6a3-823a5b315d9e"
00:33:00.559    }
00:33:00.559  }
00:33:00.559  Got JSON-RPC error response
00:33:00.559  GoRPCClient: error on JSON-RPC call
00:33:00.559   19:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1
00:33:00.559   19:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:33:00.559   19:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:33:00.559   19:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:33:00.559   19:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:33:01.127  aio_bdev
00:33:01.127   19:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev a05969d1-ac49-43bb-8aa7-d2a7351635dd
00:33:01.127   19:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=a05969d1-ac49-43bb-8aa7-d2a7351635dd
00:33:01.127   19:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:33:01.127   19:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i
00:33:01.127   19:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:33:01.127   19:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:33:01.127   19:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine
00:33:01.127   19:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a05969d1-ac49-43bb-8aa7-d2a7351635dd -t 2000
00:33:01.385  [
00:33:01.385    {
00:33:01.385      "aliases": [
00:33:01.385        "lvs/lvol"
00:33:01.385      ],
00:33:01.385      "assigned_rate_limits": {
00:33:01.385        "r_mbytes_per_sec": 0,
00:33:01.385        "rw_ios_per_sec": 0,
00:33:01.385        "rw_mbytes_per_sec": 0,
00:33:01.385        "w_mbytes_per_sec": 0
00:33:01.385      },
00:33:01.385      "block_size": 4096,
00:33:01.385      "claimed": false,
00:33:01.385      "driver_specific": {
00:33:01.385        "lvol": {
00:33:01.386          "base_bdev": "aio_bdev",
00:33:01.386          "clone": false,
00:33:01.386          "esnap_clone": false,
00:33:01.386          "lvol_store_uuid": "7057e903-e767-45a6-a6a3-823a5b315d9e",
00:33:01.386          "num_allocated_clusters": 38,
00:33:01.386          "snapshot": false,
00:33:01.386          "thin_provision": false
00:33:01.386        }
00:33:01.386      },
00:33:01.386      "name": "a05969d1-ac49-43bb-8aa7-d2a7351635dd",
00:33:01.386      "num_blocks": 38912,
00:33:01.386      "product_name": "Logical Volume",
00:33:01.386      "supported_io_types": {
00:33:01.386        "abort": false,
00:33:01.386        "compare": false,
00:33:01.386        "compare_and_write": false,
00:33:01.386        "copy": false,
00:33:01.386        "flush": false,
00:33:01.386        "get_zone_info": false,
00:33:01.386        "nvme_admin": false,
00:33:01.386        "nvme_io": false,
00:33:01.386        "nvme_io_md": false,
00:33:01.386        "nvme_iov_md": false,
00:33:01.386        "read": true,
00:33:01.386        "reset": true,
00:33:01.386        "seek_data": true,
00:33:01.386        "seek_hole": true,
00:33:01.386        "unmap": true,
00:33:01.386        "write": true,
00:33:01.386        "write_zeroes": true,
00:33:01.386        "zcopy": false,
00:33:01.386        "zone_append": false,
00:33:01.386        "zone_management": false
00:33:01.386      },
00:33:01.386      "uuid": "a05969d1-ac49-43bb-8aa7-d2a7351635dd",
00:33:01.386      "zoned": false
00:33:01.386    }
00:33:01.386  ]
00:33:01.386   19:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0
00:33:01.386    19:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7057e903-e767-45a6-a6a3-823a5b315d9e
00:33:01.386    19:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters'
00:33:01.644   19:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 ))
00:33:01.644    19:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7057e903-e767-45a6-a6a3-823a5b315d9e
00:33:01.644    19:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters'
00:33:01.902   19:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 ))
00:33:01.902   19:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete a05969d1-ac49-43bb-8aa7-d2a7351635dd
00:33:02.161   19:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7057e903-e767-45a6-a6a3-823a5b315d9e
00:33:02.420   19:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:33:02.678   19:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
00:33:02.937  
00:33:02.937  real	0m18.145s
00:33:02.937  user	0m17.500s
00:33:02.937  sys	0m2.192s
00:33:02.937   19:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable
00:33:02.937   19:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x
00:33:02.937  ************************************
00:33:02.937  END TEST lvs_grow_clean
00:33:02.937  ************************************
00:33:02.937   19:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty
00:33:02.937   19:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:33:02.937   19:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable
00:33:02.937   19:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:33:02.937  ************************************
00:33:02.937  START TEST lvs_grow_dirty
00:33:02.937  ************************************
00:33:02.937   19:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty
00:33:02.937   19:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol
00:33:02.937   19:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters
00:33:02.937   19:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid
00:33:02.937   19:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200
00:33:02.937   19:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400
00:33:02.937   19:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150
00:33:02.937   19:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
00:33:02.937   19:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
00:33:02.937    19:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:33:03.505   19:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev
00:33:03.505    19:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
00:33:03.763   19:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=507b6116-ab92-4575-a9cf-73de4214b659
00:33:03.763    19:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters'
00:33:03.763    19:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 507b6116-ab92-4575-a9cf-73de4214b659
00:33:03.763   19:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49
00:33:03.763   19:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 ))
00:33:03.763    19:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 507b6116-ab92-4575-a9cf-73de4214b659 lvol 150
00:33:04.331   19:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=8b67ec55-88c2-4561-ad0c-26eeab5e8228
00:33:04.331   19:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
00:33:04.331   19:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev
00:33:04.331  [2024-12-13 19:17:36.105197] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400
00:33:04.331  [2024-12-13 19:17:36.106151] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1
00:33:04.331  true
00:33:04.331    19:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 507b6116-ab92-4575-a9cf-73de4214b659
00:33:04.331    19:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters'
00:33:04.589   19:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 ))
00:33:04.589   19:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:33:04.848   19:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8b67ec55-88c2-4561-ad0c-26eeab5e8228
00:33:05.108   19:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
00:33:05.367  [2024-12-13 19:17:37.093683] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:33:05.367   19:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
00:33:05.626   19:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=124687
00:33:05.626   19:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:33:05.626   19:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z
00:33:05.626   19:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 124687 /var/tmp/bdevperf.sock
00:33:05.626   19:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 124687 ']'
00:33:05.626   19:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:33:05.626   19:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100
00:33:05.626  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:33:05.626   19:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:33:05.626   19:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable
00:33:05.626   19:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:33:05.884  [2024-12-13 19:17:37.460197] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:33:05.884  [2024-12-13 19:17:37.460291] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124687 ]
00:33:05.884  [2024-12-13 19:17:37.609960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:05.884  [2024-12-13 19:17:37.660566] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:33:06.821   19:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:33:06.821   19:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0
00:33:06.821   19:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
00:33:07.080  Nvme0n1
00:33:07.080   19:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000
00:33:07.080  [
00:33:07.080    {
00:33:07.080      "aliases": [
00:33:07.080        "8b67ec55-88c2-4561-ad0c-26eeab5e8228"
00:33:07.080      ],
00:33:07.080      "assigned_rate_limits": {
00:33:07.080        "r_mbytes_per_sec": 0,
00:33:07.080        "rw_ios_per_sec": 0,
00:33:07.080        "rw_mbytes_per_sec": 0,
00:33:07.080        "w_mbytes_per_sec": 0
00:33:07.080      },
00:33:07.080      "block_size": 4096,
00:33:07.080      "claimed": false,
00:33:07.080      "driver_specific": {
00:33:07.080        "mp_policy": "active_passive",
00:33:07.080        "nvme": [
00:33:07.080          {
00:33:07.080            "ctrlr_data": {
00:33:07.080              "ana_reporting": false,
00:33:07.080              "cntlid": 1,
00:33:07.080              "firmware_revision": "25.01",
00:33:07.080              "model_number": "SPDK bdev Controller",
00:33:07.080              "multi_ctrlr": true,
00:33:07.080              "oacs": {
00:33:07.080                "firmware": 0,
00:33:07.080                "format": 0,
00:33:07.080                "ns_manage": 0,
00:33:07.080                "security": 0
00:33:07.080              },
00:33:07.080              "serial_number": "SPDK0",
00:33:07.080              "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:33:07.080              "vendor_id": "0x8086"
00:33:07.080            },
00:33:07.080            "ns_data": {
00:33:07.080              "can_share": true,
00:33:07.080              "id": 1
00:33:07.080            },
00:33:07.080            "trid": {
00:33:07.080              "adrfam": "IPv4",
00:33:07.080              "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:33:07.080              "traddr": "10.0.0.3",
00:33:07.080              "trsvcid": "4420",
00:33:07.080              "trtype": "TCP"
00:33:07.080            },
00:33:07.080            "vs": {
00:33:07.080              "nvme_version": "1.3"
00:33:07.080            }
00:33:07.080          }
00:33:07.080        ]
00:33:07.080      },
00:33:07.080      "memory_domains": [
00:33:07.080        {
00:33:07.080          "dma_device_id": "system",
00:33:07.080          "dma_device_type": 1
00:33:07.080        }
00:33:07.080      ],
00:33:07.080      "name": "Nvme0n1",
00:33:07.080      "num_blocks": 38912,
00:33:07.080      "numa_id": -1,
00:33:07.080      "product_name": "NVMe disk",
00:33:07.080      "supported_io_types": {
00:33:07.080        "abort": true,
00:33:07.080        "compare": true,
00:33:07.080        "compare_and_write": true,
00:33:07.080        "copy": true,
00:33:07.080        "flush": true,
00:33:07.080        "get_zone_info": false,
00:33:07.080        "nvme_admin": true,
00:33:07.080        "nvme_io": true,
00:33:07.080        "nvme_io_md": false,
00:33:07.080        "nvme_iov_md": false,
00:33:07.080        "read": true,
00:33:07.080        "reset": true,
00:33:07.080        "seek_data": false,
00:33:07.080        "seek_hole": false,
00:33:07.080        "unmap": true,
00:33:07.080        "write": true,
00:33:07.080        "write_zeroes": true,
00:33:07.080        "zcopy": false,
00:33:07.080        "zone_append": false,
00:33:07.080        "zone_management": false
00:33:07.080      },
00:33:07.080      "uuid": "8b67ec55-88c2-4561-ad0c-26eeab5e8228",
00:33:07.080      "zoned": false
00:33:07.080    }
00:33:07.080  ]
00:33:07.339   19:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:33:07.339   19:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=124729
00:33:07.339   19:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2
00:33:07.340  Running I/O for 10 seconds...
00:33:08.277                                                                                                  Latency(us)
00:33:08.277  
[2024-12-13T19:17:40.101Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:33:08.277  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:08.277  	 Nvme0n1             :       1.00    7310.00      28.55       0.00     0.00       0.00       0.00       0.00
00:33:08.277  
[2024-12-13T19:17:40.101Z]  ===================================================================================================================
00:33:08.277  
[2024-12-13T19:17:40.101Z]  Total                       :               7310.00      28.55       0.00     0.00       0.00       0.00       0.00
00:33:08.277  
00:33:09.212   19:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 507b6116-ab92-4575-a9cf-73de4214b659
00:33:09.212  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:09.212  	 Nvme0n1             :       2.00    7297.50      28.51       0.00     0.00       0.00       0.00       0.00
00:33:09.212  
[2024-12-13T19:17:41.036Z]  ===================================================================================================================
00:33:09.212  
[2024-12-13T19:17:41.036Z]  Total                       :               7297.50      28.51       0.00     0.00       0.00       0.00       0.00
00:33:09.212  
00:33:09.471  true
00:33:09.471    19:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 507b6116-ab92-4575-a9cf-73de4214b659
00:33:09.471    19:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters'
00:33:10.038   19:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99
00:33:10.038   19:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 ))
00:33:10.038   19:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 124729
00:33:10.297  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:10.297  	 Nvme0n1             :       3.00    7294.33      28.49       0.00     0.00       0.00       0.00       0.00
00:33:10.297  
[2024-12-13T19:17:42.121Z]  ===================================================================================================================
00:33:10.297  
[2024-12-13T19:17:42.121Z]  Total                       :               7294.33      28.49       0.00     0.00       0.00       0.00       0.00
00:33:10.297  
00:33:11.233  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:11.233  	 Nvme0n1             :       4.00    7326.50      28.62       0.00     0.00       0.00       0.00       0.00
00:33:11.233  
[2024-12-13T19:17:43.057Z]  ===================================================================================================================
00:33:11.233  
[2024-12-13T19:17:43.057Z]  Total                       :               7326.50      28.62       0.00     0.00       0.00       0.00       0.00
00:33:11.233  
00:33:12.609  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:12.609  	 Nvme0n1             :       5.00    7342.80      28.68       0.00     0.00       0.00       0.00       0.00
00:33:12.609  
[2024-12-13T19:17:44.433Z]  ===================================================================================================================
00:33:12.609  
[2024-12-13T19:17:44.433Z]  Total                       :               7342.80      28.68       0.00     0.00       0.00       0.00       0.00
00:33:12.609  
00:33:13.545  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:13.545  	 Nvme0n1             :       6.00    7345.50      28.69       0.00     0.00       0.00       0.00       0.00
00:33:13.545  
[2024-12-13T19:17:45.370Z]  ===================================================================================================================
00:33:13.546  
[2024-12-13T19:17:45.370Z]  Total                       :               7345.50      28.69       0.00     0.00       0.00       0.00       0.00
00:33:13.546  
00:33:14.483  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:14.483  	 Nvme0n1             :       7.00    7335.57      28.65       0.00     0.00       0.00       0.00       0.00
00:33:14.483  
[2024-12-13T19:17:46.307Z]  ===================================================================================================================
00:33:14.483  
[2024-12-13T19:17:46.307Z]  Total                       :               7335.57      28.65       0.00     0.00       0.00       0.00       0.00
00:33:14.483  
00:33:15.449  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:15.449  	 Nvme0n1             :       8.00    7034.38      27.48       0.00     0.00       0.00       0.00       0.00
00:33:15.449  
[2024-12-13T19:17:47.273Z]  ===================================================================================================================
00:33:15.449  
[2024-12-13T19:17:47.273Z]  Total                       :               7034.38      27.48       0.00     0.00       0.00       0.00       0.00
00:33:15.449  
00:33:16.385  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:16.385  	 Nvme0n1             :       9.00    7010.78      27.39       0.00     0.00       0.00       0.00       0.00
00:33:16.385  
[2024-12-13T19:17:48.209Z]  ===================================================================================================================
00:33:16.385  
[2024-12-13T19:17:48.209Z]  Total                       :               7010.78      27.39       0.00     0.00       0.00       0.00       0.00
00:33:16.385  
00:33:17.320  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:17.320  	 Nvme0n1             :      10.00    6984.30      27.28       0.00     0.00       0.00       0.00       0.00
00:33:17.320  
[2024-12-13T19:17:49.144Z]  ===================================================================================================================
00:33:17.320  
[2024-12-13T19:17:49.144Z]  Total                       :               6984.30      27.28       0.00     0.00       0.00       0.00       0.00
00:33:17.320  
00:33:17.320  
00:33:17.320                                                                                                  Latency(us)
00:33:17.320  
[2024-12-13T19:17:49.144Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:33:17.320  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:17.320  	 Nvme0n1             :      10.01    6986.83      27.29       0.00     0.00   18314.31    8340.95  318385.80
00:33:17.320  
[2024-12-13T19:17:49.144Z]  ===================================================================================================================
00:33:17.320  
[2024-12-13T19:17:49.144Z]  Total                       :               6986.83      27.29       0.00     0.00   18314.31    8340.95  318385.80
00:33:17.320  {
00:33:17.320    "results": [
00:33:17.320      {
00:33:17.320        "job": "Nvme0n1",
00:33:17.320        "core_mask": "0x2",
00:33:17.320        "workload": "randwrite",
00:33:17.320        "status": "finished",
00:33:17.320        "queue_depth": 128,
00:33:17.320        "io_size": 4096,
00:33:17.320        "runtime": 10.014702,
00:33:17.321        "iops": 6986.827965525085,
00:33:17.321        "mibps": 27.292296740332365,
00:33:17.321        "io_failed": 0,
00:33:17.321        "io_timeout": 0,
00:33:17.321        "avg_latency_us": 18314.31401798927,
00:33:17.321        "min_latency_us": 8340.945454545454,
00:33:17.321        "max_latency_us": 318385.80363636365
00:33:17.321      }
00:33:17.321    ],
00:33:17.321    "core_count": 1
00:33:17.321  }
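A quick sanity check on the bdevperf summary above: the reported throughput and latency are internally consistent with the 4096-byte random-write workload at queue depth 128.

    6986.83 IOPS x 4096 B       = 28,618,048 B/s ÷ 1,048,576 ≈ 27.29 MiB/s   (matches "mibps")
    6986.83 IOPS x 18,314.31 us ≈ 128 outstanding I/Os                       (matches queue_depth 128, per Little's law)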
00:33:17.321   19:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 124687
00:33:17.321   19:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 124687 ']'
00:33:17.321   19:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 124687
00:33:17.321    19:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname
00:33:17.321   19:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:33:17.321    19:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 124687
00:33:17.321   19:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:33:17.321  killing process with pid 124687
00:33:17.321   19:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:33:17.321   19:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 124687'
00:33:17.321  Received shutdown signal, test time was about 10.000000 seconds
00:33:17.321  
00:33:17.321                                                                                                  Latency(us)
00:33:17.321  
[2024-12-13T19:17:49.145Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:33:17.321  
[2024-12-13T19:17:49.145Z]  ===================================================================================================================
00:33:17.321  
[2024-12-13T19:17:49.145Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:33:17.321   19:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 124687
00:33:17.321   19:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 124687
00:33:17.579   19:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420
00:33:17.837   19:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:33:18.094    19:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 507b6116-ab92-4575-a9cf-73de4214b659
00:33:18.095    19:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters'
00:33:18.353   19:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61
00:33:18.353   19:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]]
00:33:18.353   19:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 124111
00:33:18.353   19:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 124111
00:33:18.353  /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 124111 Killed                  "${NVMF_APP[@]}" "$@"
00:33:18.353   19:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true
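This is the dirty-shutdown step that gives lvs_grow_dirty its name: the nvmf target holding the lvstore is killed with SIGKILL, so the blobstore backing the logical volume store is never unloaded cleanly, and a fresh target is started in its place. A minimal sketch of the sequence traced above, using a placeholder $nvmfpid for the PID shown in the log (124111):

    kill -9 "$nvmfpid"        # SIGKILL: the lvstore superblock is left marked dirty
    wait "$nvmfpid" || true   # reap the killed target; a non-zero status is expected here
    nvmfappstart -m 0x1       # start a new target; re-attaching the AIO bdev will trigger blobstore recovery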
00:33:18.353   19:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1
00:33:18.353   19:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:33:18.353   19:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable
00:33:18.353   19:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:33:18.353   19:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=124878
00:33:18.353   19:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1
00:33:18.353   19:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 124878
00:33:18.353   19:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 124878 ']'
00:33:18.353   19:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:33:18.353   19:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100
00:33:18.353  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:33:18.353   19:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:33:18.353   19:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable
00:33:18.353   19:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:33:18.353  [2024-12-13 19:17:50.092289] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:33:18.353  [2024-12-13 19:17:50.093283] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:33:18.353  [2024-12-13 19:17:50.093384] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:33:18.612  [2024-12-13 19:17:50.244550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:18.612  [2024-12-13 19:17:50.284427] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:33:18.612  [2024-12-13 19:17:50.284725] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:33:18.612  [2024-12-13 19:17:50.284844] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:33:18.612  [2024-12-13 19:17:50.284942] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:33:18.612  [2024-12-13 19:17:50.285032] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:33:18.612  [2024-12-13 19:17:50.285581] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:33:18.612  [2024-12-13 19:17:50.383319] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:33:18.612  [2024-12-13 19:17:50.383983] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:33:18.612   19:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:33:18.612   19:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0
00:33:18.612   19:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:33:18.612   19:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable
00:33:18.612   19:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:33:18.870   19:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:33:18.870    19:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:33:19.129  [2024-12-13 19:17:50.747000] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore
00:33:19.129  [2024-12-13 19:17:50.750326] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0
00:33:19.129  [2024-12-13 19:17:50.750771] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1
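The three NOTICE lines above are the recovery path being exercised: because the previous target was killed rather than shut down, the blobstore is not marked clean, so on load SPDK replays the on-disk metadata and re-discovers each blob (0x0 and 0x1) instead of taking the clean-load path. The checks that follow confirm the recovered lvstore still reports the expected cluster accounting; roughly (rpc.py shorthand for the full script path used in the trace):

    rpc.py bdev_lvol_get_lvstores -u 507b6116-ab92-4575-a9cf-73de4214b659 | jq -r '.[0].free_clusters'        # expected 61
    rpc.py bdev_lvol_get_lvstores -u 507b6116-ab92-4575-a9cf-73de4214b659 | jq -r '.[0].total_data_clusters'  # expected 99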
00:33:19.129   19:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev
00:33:19.129   19:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 8b67ec55-88c2-4561-ad0c-26eeab5e8228
00:33:19.129   19:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=8b67ec55-88c2-4561-ad0c-26eeab5e8228
00:33:19.129   19:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:33:19.129   19:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i
00:33:19.129   19:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:33:19.129   19:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:33:19.129   19:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine
00:33:19.388   19:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8b67ec55-88c2-4561-ad0c-26eeab5e8228 -t 2000
00:33:19.646  [
00:33:19.646    {
00:33:19.646      "aliases": [
00:33:19.646        "lvs/lvol"
00:33:19.646      ],
00:33:19.646      "assigned_rate_limits": {
00:33:19.646        "r_mbytes_per_sec": 0,
00:33:19.646        "rw_ios_per_sec": 0,
00:33:19.646        "rw_mbytes_per_sec": 0,
00:33:19.646        "w_mbytes_per_sec": 0
00:33:19.646      },
00:33:19.646      "block_size": 4096,
00:33:19.646      "claimed": false,
00:33:19.646      "driver_specific": {
00:33:19.646        "lvol": {
00:33:19.646          "base_bdev": "aio_bdev",
00:33:19.646          "clone": false,
00:33:19.646          "esnap_clone": false,
00:33:19.646          "lvol_store_uuid": "507b6116-ab92-4575-a9cf-73de4214b659",
00:33:19.646          "num_allocated_clusters": 38,
00:33:19.646          "snapshot": false,
00:33:19.646          "thin_provision": false
00:33:19.646        }
00:33:19.646      },
00:33:19.646      "name": "8b67ec55-88c2-4561-ad0c-26eeab5e8228",
00:33:19.646      "num_blocks": 38912,
00:33:19.646      "product_name": "Logical Volume",
00:33:19.646      "supported_io_types": {
00:33:19.646        "abort": false,
00:33:19.646        "compare": false,
00:33:19.646        "compare_and_write": false,
00:33:19.646        "copy": false,
00:33:19.646        "flush": false,
00:33:19.646        "get_zone_info": false,
00:33:19.646        "nvme_admin": false,
00:33:19.646        "nvme_io": false,
00:33:19.646        "nvme_io_md": false,
00:33:19.646        "nvme_iov_md": false,
00:33:19.646        "read": true,
00:33:19.646        "reset": true,
00:33:19.646        "seek_data": true,
00:33:19.646        "seek_hole": true,
00:33:19.646        "unmap": true,
00:33:19.646        "write": true,
00:33:19.646        "write_zeroes": true,
00:33:19.646        "zcopy": false,
00:33:19.646        "zone_append": false,
00:33:19.646        "zone_management": false
00:33:19.646      },
00:33:19.646      "uuid": "8b67ec55-88c2-4561-ad0c-26eeab5e8228",
00:33:19.646      "zoned": false
00:33:19.646    }
00:33:19.646  ]
00:33:19.646   19:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0
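The recovered lvol's metadata is self-consistent with the lvstore queries around it: the 38 allocated clusters plus the 61 free clusters checked below account for all 99 data clusters, and 38,912 blocks of 4,096 bytes spread over 38 clusters works out to a 4 MiB cluster size (the SPDK default).

    38 allocated + 61 free        = 99 total_data_clusters
    38,912 blocks x 4,096 B / 38  = 4,194,304 B  (4 MiB per cluster)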
00:33:19.647    19:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters'
00:33:19.647    19:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 507b6116-ab92-4575-a9cf-73de4214b659
00:33:19.905   19:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 ))
00:33:19.905    19:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 507b6116-ab92-4575-a9cf-73de4214b659
00:33:19.905    19:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters'
00:33:20.164   19:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 ))
00:33:20.164   19:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:33:20.423  [2024-12-13 19:17:52.078535] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs
00:33:20.423   19:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 507b6116-ab92-4575-a9cf-73de4214b659
00:33:20.423   19:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0
00:33:20.423   19:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 507b6116-ab92-4575-a9cf-73de4214b659
00:33:20.423   19:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:33:20.423   19:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:33:20.423    19:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:33:20.423   19:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:33:20.423    19:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:33:20.423   19:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:33:20.423   19:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:33:20.423   19:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]]
00:33:20.423   19:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 507b6116-ab92-4575-a9cf-73de4214b659
00:33:20.681  2024/12/13 19:17:52 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:507b6116-ab92-4575-a9cf-73de4214b659], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device
00:33:20.682  request:
00:33:20.682  {
00:33:20.682    "method": "bdev_lvol_get_lvstores",
00:33:20.682    "params": {
00:33:20.682      "uuid": "507b6116-ab92-4575-a9cf-73de4214b659"
00:33:20.682    }
00:33:20.682  }
00:33:20.682  Got JSON-RPC error response
00:33:20.682  GoRPCClient: error on JSON-RPC call
00:33:20.682   19:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1
00:33:20.682   19:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:33:20.682   19:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:33:20.682   19:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 ))
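This block is a negative test: the backing aio_bdev was just deleted (nvmf_lvs_grow.sh@84), which hot-removes the lvstore (see the vbdev_lvs_hotremove_cb NOTICE), so bdev_lvol_get_lvstores is now expected to fail with -19 (No such device). The NOT wrapper from autotest_common.sh runs the command, records its exit status in es, and only returns success when the command failed. A minimal sketch of the idea, not the actual helper (which also validates the executable and special-cases statuses above 128):

    NOT() { ! "$@"; }                                                            # pass only when the wrapped command fails
    NOT rpc.py bdev_lvol_get_lvstores -u 507b6116-ab92-4575-a9cf-73de4214b659    # lvstore is gone: failure expected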
00:33:20.682   19:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:33:20.940  aio_bdev
00:33:20.940   19:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 8b67ec55-88c2-4561-ad0c-26eeab5e8228
00:33:20.940   19:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=8b67ec55-88c2-4561-ad0c-26eeab5e8228
00:33:20.940   19:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:33:20.940   19:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i
00:33:20.940   19:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:33:20.940   19:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:33:20.940   19:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine
00:33:21.199   19:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8b67ec55-88c2-4561-ad0c-26eeab5e8228 -t 2000
00:33:21.458  [
00:33:21.458    {
00:33:21.458      "aliases": [
00:33:21.458        "lvs/lvol"
00:33:21.458      ],
00:33:21.458      "assigned_rate_limits": {
00:33:21.458        "r_mbytes_per_sec": 0,
00:33:21.458        "rw_ios_per_sec": 0,
00:33:21.458        "rw_mbytes_per_sec": 0,
00:33:21.458        "w_mbytes_per_sec": 0
00:33:21.458      },
00:33:21.458      "block_size": 4096,
00:33:21.458      "claimed": false,
00:33:21.458      "driver_specific": {
00:33:21.458        "lvol": {
00:33:21.458          "base_bdev": "aio_bdev",
00:33:21.458          "clone": false,
00:33:21.458          "esnap_clone": false,
00:33:21.458          "lvol_store_uuid": "507b6116-ab92-4575-a9cf-73de4214b659",
00:33:21.458          "num_allocated_clusters": 38,
00:33:21.458          "snapshot": false,
00:33:21.458          "thin_provision": false
00:33:21.458        }
00:33:21.458      },
00:33:21.458      "name": "8b67ec55-88c2-4561-ad0c-26eeab5e8228",
00:33:21.458      "num_blocks": 38912,
00:33:21.458      "product_name": "Logical Volume",
00:33:21.458      "supported_io_types": {
00:33:21.458        "abort": false,
00:33:21.458        "compare": false,
00:33:21.458        "compare_and_write": false,
00:33:21.458        "copy": false,
00:33:21.458        "flush": false,
00:33:21.458        "get_zone_info": false,
00:33:21.458        "nvme_admin": false,
00:33:21.458        "nvme_io": false,
00:33:21.458        "nvme_io_md": false,
00:33:21.458        "nvme_iov_md": false,
00:33:21.458        "read": true,
00:33:21.458        "reset": true,
00:33:21.458        "seek_data": true,
00:33:21.458        "seek_hole": true,
00:33:21.458        "unmap": true,
00:33:21.458        "write": true,
00:33:21.458        "write_zeroes": true,
00:33:21.458        "zcopy": false,
00:33:21.458        "zone_append": false,
00:33:21.458        "zone_management": false
00:33:21.458      },
00:33:21.458      "uuid": "8b67ec55-88c2-4561-ad0c-26eeab5e8228",
00:33:21.458      "zoned": false
00:33:21.458    }
00:33:21.458  ]
00:33:21.458   19:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0
00:33:21.458    19:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters'
00:33:21.458    19:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 507b6116-ab92-4575-a9cf-73de4214b659
00:33:21.717   19:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 ))
00:33:21.717    19:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 507b6116-ab92-4575-a9cf-73de4214b659
00:33:21.717    19:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters'
00:33:21.975   19:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 ))
00:33:21.975   19:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 8b67ec55-88c2-4561-ad0c-26eeab5e8228
00:33:22.234   19:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 507b6116-ab92-4575-a9cf-73de4214b659
00:33:22.234   19:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:33:22.493   19:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
00:33:23.060  
00:33:23.060  real	0m19.897s
00:33:23.060  user	0m26.273s
00:33:23.060  sys	0m9.496s
00:33:23.060   19:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable
00:33:23.060   19:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:33:23.060  ************************************
00:33:23.060  END TEST lvs_grow_dirty
00:33:23.060  ************************************
00:33:23.060   19:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0
00:33:23.060   19:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id
00:33:23.060   19:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0
00:33:23.060   19:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']'
00:33:23.060    19:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:33:23.060   19:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0
00:33:23.060   19:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]]
00:33:23.060   19:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files
00:33:23.060   19:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:33:23.060  nvmf_trace.0
00:33:23.060   19:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0
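process_shm archives the SPDK trace buffer (/dev/shm/nvmf_trace.0) into the job's output directory so the tracepoints enabled with -e 0xFFFF can be inspected after the run. As the earlier NOTICE suggests, the raw file can be copied for offline analysis; a hypothetical way to get at the archived copy later would be:

    tar -xzf nvmf_trace.0_shm.tar.gz -C /tmp
    # /tmp/nvmf_trace.0 is the raw trace buffer; copy it back to /dev/shm (or feed it to spdk_trace) for offline analysis/debug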
00:33:23.060   19:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini
00:33:23.060   19:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup
00:33:23.060   19:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync
00:33:23.995   19:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:33:23.995   19:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e
00:33:23.995   19:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20}
00:33:23.995   19:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:33:23.995  rmmod nvme_tcp
00:33:23.995  rmmod nvme_fabrics
00:33:23.995  rmmod nvme_keyring
00:33:23.995   19:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:33:23.995   19:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e
00:33:23.995   19:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0
00:33:23.995   19:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 124878 ']'
00:33:23.995   19:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 124878
00:33:23.995   19:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 124878 ']'
00:33:23.995   19:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 124878
00:33:23.995    19:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname
00:33:23.995   19:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:33:23.995    19:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 124878
00:33:23.995   19:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:33:23.995   19:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:33:23.995  killing process with pid 124878
00:33:23.995   19:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 124878'
00:33:23.995   19:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 124878
00:33:23.995   19:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 124878
00:33:24.254   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:33:24.254   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:33:24.254   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:33:24.254   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr
00:33:24.254   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save
00:33:24.254   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:33:24.254   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore
00:33:24.254   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:33:24.254   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:33:24.254   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:33:24.254   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:33:24.254   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:33:24.254   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:33:24.513   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:33:24.513   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:33:24.513   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:33:24.513   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:33:24.513   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:33:24.513   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:33:24.513   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:33:24.513   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:33:24.513   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:33:24.513   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@246 -- # remove_spdk_ns
00:33:24.513   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:33:24.513   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:33:24.513    19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:33:24.513   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0
00:33:24.513  
00:33:24.513  real	0m41.016s
00:33:24.513  user	0m44.959s
00:33:24.513  sys	0m13.371s
00:33:24.513   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable
00:33:24.513  ************************************
00:33:24.513  END TEST nvmf_lvs_grow
00:33:24.513  ************************************
00:33:24.513   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:33:24.513   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode
00:33:24.513   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:33:24.513   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:33:24.513   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:33:24.513  ************************************
00:33:24.513  START TEST nvmf_bdev_io_wait
00:33:24.513  ************************************
00:33:24.513   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode
00:33:24.774  * Looking for test storage...
00:33:24.774  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:33:24.774    19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:33:24.774     19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version
00:33:24.774     19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:33:24.774    19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:33:24.774    19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:33:24.774    19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l
00:33:24.774    19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l
00:33:24.774    19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-:
00:33:24.774    19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1
00:33:24.774    19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-:
00:33:24.774    19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2
00:33:24.774    19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<'
00:33:24.774    19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2
00:33:24.774    19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1
00:33:24.774    19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:33:24.774    19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in
00:33:24.774    19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1
00:33:24.774    19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 ))
00:33:24.774    19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:33:24.774     19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1
00:33:24.774     19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1
00:33:24.774     19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:33:24.774     19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1
00:33:24.774    19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1
00:33:24.774     19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2
00:33:24.774     19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2
00:33:24.774     19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:33:24.774     19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2
00:33:24.774    19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2
00:33:24.774    19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:33:24.774    19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:33:24.774    19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0
00:33:24.774    19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:33:24.774    19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:33:24.774  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:24.774  		--rc genhtml_branch_coverage=1
00:33:24.774  		--rc genhtml_function_coverage=1
00:33:24.774  		--rc genhtml_legend=1
00:33:24.774  		--rc geninfo_all_blocks=1
00:33:24.774  		--rc geninfo_unexecuted_blocks=1
00:33:24.774  		
00:33:24.774  		'
00:33:24.774    19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:33:24.774  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:24.774  		--rc genhtml_branch_coverage=1
00:33:24.774  		--rc genhtml_function_coverage=1
00:33:24.774  		--rc genhtml_legend=1
00:33:24.774  		--rc geninfo_all_blocks=1
00:33:24.774  		--rc geninfo_unexecuted_blocks=1
00:33:24.774  		
00:33:24.774  		'
00:33:24.774    19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:33:24.774  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:24.774  		--rc genhtml_branch_coverage=1
00:33:24.774  		--rc genhtml_function_coverage=1
00:33:24.774  		--rc genhtml_legend=1
00:33:24.774  		--rc geninfo_all_blocks=1
00:33:24.774  		--rc geninfo_unexecuted_blocks=1
00:33:24.774  		
00:33:24.774  		'
00:33:24.774    19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:33:24.774  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:24.774  		--rc genhtml_branch_coverage=1
00:33:24.774  		--rc genhtml_function_coverage=1
00:33:24.774  		--rc genhtml_legend=1
00:33:24.774  		--rc geninfo_all_blocks=1
00:33:24.774  		--rc geninfo_unexecuted_blocks=1
00:33:24.774  		
00:33:24.774  		'
00:33:24.774   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:33:24.774     19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s
00:33:24.774    19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:33:24.774    19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:33:24.774    19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:33:24.774    19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:33:24.774    19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:33:24.774    19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:33:24.774    19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:33:24.774    19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:33:24.774    19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:33:24.774     19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:33:24.774    19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:33:24.774    19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:33:24.774    19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:33:24.774    19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:33:24.774    19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:33:24.774    19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:33:24.774    19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:33:24.774     19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob
00:33:24.774     19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:33:24.774     19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:33:24.774     19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:33:24.774      19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:24.774      19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:24.775      19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:24.775      19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH
00:33:24.775      19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:24.775    19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0
00:33:24.775    19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:33:24.775    19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:33:24.775    19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:33:24.775    19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:33:24.775    19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:33:24.775    19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:33:24.775    19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:33:24.775    19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:33:24.775    19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:33:24.775    19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0
00:33:24.775   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64
00:33:24.775   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:33:24.775   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit
00:33:24.775   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:33:24.775   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:33:24.775   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs
00:33:24.775   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no
00:33:24.775   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns
00:33:24.775   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:33:24.775   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:33:24.775    19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:33:24.775   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:33:24.775   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:33:24.775   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:33:24.775   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:33:24.775   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:33:24.775   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init
00:33:24.775   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:33:24.775   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:33:24.775   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:33:24.775   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:33:24.775   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:33:24.775   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:33:24.775   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:33:24.775   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:33:24.775   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:33:24.775   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:33:24.775   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:33:24.775   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:33:24.775   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:33:24.775   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:33:24.775   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:33:24.775   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:33:24.775   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:33:24.775  Cannot find device "nvmf_init_br"
00:33:24.775   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true
00:33:24.775   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:33:24.775  Cannot find device "nvmf_init_br2"
00:33:24.775   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true
00:33:24.775   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:33:24.775  Cannot find device "nvmf_tgt_br"
00:33:24.775   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true
00:33:24.775   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:33:24.775  Cannot find device "nvmf_tgt_br2"
00:33:24.775   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true
00:33:24.775   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:33:24.775  Cannot find device "nvmf_init_br"
00:33:24.775   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true
00:33:24.775   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:33:25.034  Cannot find device "nvmf_init_br2"
00:33:25.034   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true
00:33:25.034   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:33:25.034  Cannot find device "nvmf_tgt_br"
00:33:25.034   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true
00:33:25.034   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:33:25.034  Cannot find device "nvmf_tgt_br2"
00:33:25.034   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true
00:33:25.034   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:33:25.034  Cannot find device "nvmf_br"
00:33:25.034   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true
00:33:25.034   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:33:25.034  Cannot find device "nvmf_init_if"
00:33:25.034   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true
00:33:25.034   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:33:25.034  Cannot find device "nvmf_init_if2"
00:33:25.034   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true
00:33:25.034   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:33:25.034  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:33:25.034   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true
00:33:25.034   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:33:25.034  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:33:25.034   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true
00:33:25.035   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:33:25.035   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:33:25.035   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:33:25.035   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:33:25.035   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:33:25.035   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:33:25.035   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:33:25.035   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:33:25.035   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:33:25.035   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:33:25.035   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:33:25.035   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:33:25.035   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:33:25.035   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:33:25.035   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:33:25.035   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:33:25.035   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:33:25.035   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:33:25.035   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:33:25.035   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:33:25.035   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:33:25.035   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:33:25.035   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:33:25.035   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:33:25.035   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:33:25.294   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:33:25.294   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:33:25.294   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:33:25.294   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:33:25.294   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:33:25.294   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:33:25.294   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:33:25.294   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:33:25.294  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:33:25.294  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms
00:33:25.294  
00:33:25.294  --- 10.0.0.3 ping statistics ---
00:33:25.294  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:33:25.294  rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms
00:33:25.294   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:33:25.294  PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:33:25.294  64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms
00:33:25.294  
00:33:25.294  --- 10.0.0.4 ping statistics ---
00:33:25.294  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:33:25.294  rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms
00:33:25.294   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:33:25.294  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:33:25.294  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms
00:33:25.294  
00:33:25.294  --- 10.0.0.1 ping statistics ---
00:33:25.294  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:33:25.294  rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms
00:33:25.294   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:33:25.294  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:33:25.294  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms
00:33:25.294  
00:33:25.294  --- 10.0.0.2 ping statistics ---
00:33:25.294  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:33:25.294  rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms
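For reference, the veth/bridge topology that the trace above builds and then verifies boils down to the following sketch (device names, addresses, port, and the SPDK_NVMF rule tag are taken from the log; the real helper embeds the full rule text in the iptables comment, and the stale-device cleanup and error suppression are omitted):

# Target-side endpoints live in a dedicated namespace; initiator endpoints stay in the root namespace.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Initiator side gets 10.0.0.1/.2, target side gets 10.0.0.3/.4.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# Bring everything up, then tie the four *_br peers together with one bridge.
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

# Allow NVMe/TCP (port 4420) in and let traffic cross the bridge; rules are
# tagged with an SPDK_NVMF comment so teardown can strip exactly these again.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment SPDK_NVMF

# Sanity-check reachability in both directions.
ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2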
00:33:25.294   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:33:25.294   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0
00:33:25.294   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:33:25.294   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:33:25.294   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:33:25.294   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:33:25.294   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:33:25.294   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:33:25.294   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:33:25.294   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc
00:33:25.294   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:33:25.294   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable
00:33:25.294   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:33:25.294   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=125340
00:33:25.294   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 125340
00:33:25.294   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 125340 ']'
00:33:25.294   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc
00:33:25.294   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:33:25.294   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100
00:33:25.294   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:33:25.294  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:33:25.294   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable
00:33:25.294   19:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:33:25.294  [2024-12-13 19:17:57.019497] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:33:25.294  [2024-12-13 19:17:57.020791] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:33:25.294  [2024-12-13 19:17:57.020861] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:33:25.553  [2024-12-13 19:17:57.175811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:33:25.553  [2024-12-13 19:17:57.220569] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:33:25.553  [2024-12-13 19:17:57.220634] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:33:25.553  [2024-12-13 19:17:57.220648] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:33:25.553  [2024-12-13 19:17:57.220658] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:33:25.553  [2024-12-13 19:17:57.220668] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:33:25.553  [2024-12-13 19:17:57.221934] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:33:25.553  [2024-12-13 19:17:57.222080] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:33:25.553  [2024-12-13 19:17:57.222238] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:33:25.553  [2024-12-13 19:17:57.222242] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:33:25.553  [2024-12-13 19:17:57.223024] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:33:25.553   19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:33:25.553   19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0
00:33:25.553   19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:33:25.553   19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable
00:33:25.553   19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:33:25.553   19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
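nvmfappstart above comes down to launching nvmf_tgt inside the target namespace and blocking until its RPC socket answers. A minimal sketch; the polling loop is a hypothetical stand-in for waitforlisten, not the real helper, while the binary, flags, and socket path are the ones printed in the log:

# Shared-memory id 0, all tracepoint groups, interrupt mode, cores 0-3,
# and deferred subsystem init until released over RPC (--wait-for-rpc).
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc &
nvmfpid=$!

# Poll the UNIX-domain RPC socket until the app is up (stand-in for waitforlisten).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    sleep 0.1
done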
00:33:25.553   19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1
00:33:25.553   19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:25.553   19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:33:25.553   19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:25.553   19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init
00:33:25.553   19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:25.553   19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:33:25.812  [2024-12-13 19:17:57.404005] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:33:25.812  [2024-12-13 19:17:57.404247] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:33:25.812  [2024-12-13 19:17:57.405548] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
00:33:25.813  [2024-12-13 19:17:57.405902] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode.
00:33:25.813   19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:25.813   19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:33:25.813   19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:25.813   19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:33:25.813  [2024-12-13 19:17:57.415358] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:33:25.813   19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
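Because the target was started with --wait-for-rpc, the test can shrink the bdev_io pool before the framework initializes and only then create the TCP transport. The same three RPCs as a sketch (arguments copied from the trace; keeping only 5 bdev_io entries with a cache of 1 is what later pushes bdevperf onto the bdev_io_wait retry path this test exercises):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

"$rpc" bdev_set_options -p 5 -c 1                 # tiny bdev_io pool / cache
"$rpc" framework_start_init                       # release the deferred init
"$rpc" nvmf_create_transport -t tcp -o -u 8192    # NVMe/TCP transport with the options from the trace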
00:33:25.813   19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:33:25.813   19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:25.813   19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:33:25.813  Malloc0
00:33:25.813   19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:25.813   19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:33:25.813   19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:25.813   19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:33:25.813   19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:25.813   19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:33:25.813   19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:25.813   19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:33:25.813   19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:25.813   19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:33:25.813   19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:25.813   19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:33:25.813  [2024-12-13 19:17:57.479588] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:33:25.813   19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
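With the transport up, the test creates the malloc backing bdev and the subsystem the bdevperf jobs attach to. The equivalent RPC sequence (all values verbatim from the trace):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

"$rpc" bdev_malloc_create 64 512 -b Malloc0       # 64 MiB bdev, 512-byte blocks
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420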
00:33:25.813   19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=125379
00:33:25.813   19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256
00:33:25.813    19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json
00:33:25.813    19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=()
00:33:25.813   19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=125381
00:33:25.813    19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config
00:33:25.813    19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:33:25.813    19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:33:25.813  {
00:33:25.813    "params": {
00:33:25.813      "name": "Nvme$subsystem",
00:33:25.813      "trtype": "$TEST_TRANSPORT",
00:33:25.813      "traddr": "$NVMF_FIRST_TARGET_IP",
00:33:25.813      "adrfam": "ipv4",
00:33:25.813      "trsvcid": "$NVMF_PORT",
00:33:25.813      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:33:25.813      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:33:25.813      "hdgst": ${hdgst:-false},
00:33:25.813      "ddgst": ${ddgst:-false}
00:33:25.813    },
00:33:25.813    "method": "bdev_nvme_attach_controller"
00:33:25.813  }
00:33:25.813  EOF
00:33:25.813  )")
00:33:25.813    19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json
00:33:25.813    19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=()
00:33:25.813   19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256
00:33:25.813    19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config
00:33:25.813    19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:33:25.813    19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:33:25.813  {
00:33:25.813    "params": {
00:33:25.813      "name": "Nvme$subsystem",
00:33:25.813      "trtype": "$TEST_TRANSPORT",
00:33:25.813      "traddr": "$NVMF_FIRST_TARGET_IP",
00:33:25.813      "adrfam": "ipv4",
00:33:25.813      "trsvcid": "$NVMF_PORT",
00:33:25.813      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:33:25.813      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:33:25.813      "hdgst": ${hdgst:-false},
00:33:25.813      "ddgst": ${ddgst:-false}
00:33:25.813    },
00:33:25.813    "method": "bdev_nvme_attach_controller"
00:33:25.813  }
00:33:25.813  EOF
00:33:25.813  )")
00:33:25.813   19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=125383
00:33:25.813   19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256
00:33:25.813     19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat
00:33:25.813     19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat
00:33:25.813   19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=125387
00:33:25.813   19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync
00:33:25.813   19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256
00:33:25.813    19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json
00:33:25.813    19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=()
00:33:25.813    19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config
00:33:25.813    19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq .
00:33:25.813    19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:33:25.813    19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:33:25.813  {
00:33:25.813    "params": {
00:33:25.813      "name": "Nvme$subsystem",
00:33:25.813      "trtype": "$TEST_TRANSPORT",
00:33:25.813      "traddr": "$NVMF_FIRST_TARGET_IP",
00:33:25.813      "adrfam": "ipv4",
00:33:25.813      "trsvcid": "$NVMF_PORT",
00:33:25.813      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:33:25.813      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:33:25.813      "hdgst": ${hdgst:-false},
00:33:25.813      "ddgst": ${ddgst:-false}
00:33:25.813    },
00:33:25.813    "method": "bdev_nvme_attach_controller"
00:33:25.813  }
00:33:25.813  EOF
00:33:25.813  )")
00:33:25.813     19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat
00:33:25.813     19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=,
00:33:25.813     19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:33:25.813    "params": {
00:33:25.813      "name": "Nvme1",
00:33:25.813      "trtype": "tcp",
00:33:25.813      "traddr": "10.0.0.3",
00:33:25.813      "adrfam": "ipv4",
00:33:25.813      "trsvcid": "4420",
00:33:25.813      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:33:25.813      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:33:25.813      "hdgst": false,
00:33:25.813      "ddgst": false
00:33:25.813    },
00:33:25.813    "method": "bdev_nvme_attach_controller"
00:33:25.813  }'
00:33:25.813    19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq .
00:33:25.813    19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json
00:33:25.813    19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=()
00:33:25.813    19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config
00:33:25.813    19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:33:25.813     19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=,
00:33:25.813     19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:33:25.813    "params": {
00:33:25.813      "name": "Nvme1",
00:33:25.813      "trtype": "tcp",
00:33:25.814      "traddr": "10.0.0.3",
00:33:25.814      "adrfam": "ipv4",
00:33:25.814      "trsvcid": "4420",
00:33:25.814      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:33:25.814      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:33:25.814      "hdgst": false,
00:33:25.814      "ddgst": false
00:33:25.814    },
00:33:25.814    "method": "bdev_nvme_attach_controller"
00:33:25.814  }'
00:33:25.814    19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq .
00:33:25.814    19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:33:25.814  {
00:33:25.814    "params": {
00:33:25.814      "name": "Nvme$subsystem",
00:33:25.814      "trtype": "$TEST_TRANSPORT",
00:33:25.814      "traddr": "$NVMF_FIRST_TARGET_IP",
00:33:25.814      "adrfam": "ipv4",
00:33:25.814      "trsvcid": "$NVMF_PORT",
00:33:25.814      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:33:25.814      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:33:25.814      "hdgst": ${hdgst:-false},
00:33:25.814      "ddgst": ${ddgst:-false}
00:33:25.814    },
00:33:25.814    "method": "bdev_nvme_attach_controller"
00:33:25.814  }
00:33:25.814  EOF
00:33:25.814  )")
00:33:25.814     19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=,
00:33:25.814     19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:33:25.814    "params": {
00:33:25.814      "name": "Nvme1",
00:33:25.814      "trtype": "tcp",
00:33:25.814      "traddr": "10.0.0.3",
00:33:25.814      "adrfam": "ipv4",
00:33:25.814      "trsvcid": "4420",
00:33:25.814      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:33:25.814      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:33:25.814      "hdgst": false,
00:33:25.814      "ddgst": false
00:33:25.814    },
00:33:25.814    "method": "bdev_nvme_attach_controller"
00:33:25.814  }'
00:33:25.814     19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat
00:33:25.814    19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq .
00:33:25.814     19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=,
00:33:25.814     19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:33:25.814    "params": {
00:33:25.814      "name": "Nvme1",
00:33:25.814      "trtype": "tcp",
00:33:25.814      "traddr": "10.0.0.3",
00:33:25.814      "adrfam": "ipv4",
00:33:25.814      "trsvcid": "4420",
00:33:25.814      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:33:25.814      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:33:25.814      "hdgst": false,
00:33:25.814      "ddgst": false
00:33:25.814    },
00:33:25.814    "method": "bdev_nvme_attach_controller"
00:33:25.814  }'
00:33:25.814  [2024-12-13 19:17:57.549101] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:33:25.814  [2024-12-13 19:17:57.549189] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ]
00:33:25.814  [2024-12-13 19:17:57.550102] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:33:25.814  [2024-12-13 19:17:57.550337] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ]
00:33:25.814  [2024-12-13 19:17:57.553436] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:33:25.814  [2024-12-13 19:17:57.554211] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ]
00:33:25.814   19:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 125379
00:33:25.814  [2024-12-13 19:17:57.600419] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:33:25.814  [2024-12-13 19:17:57.600519] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ]
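The interleaved startup output above comes from four bdevperf instances launched in parallel, one per workload, each reading its bdev_nvme_attach_controller config from a process substitution (hence --json /dev/fd/63 in the trace). A condensed sketch, assuming gen_nvmf_target_json emits the Nvme1 config printed above:

bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
common=(-q 128 -o 4096 -t 1 -s 256)    # queue depth 128, 4 KiB IOs, 1 s runtime, 256 MiB memory

"$bdevperf" -m 0x10 -i 1 --json <(gen_nvmf_target_json) "${common[@]}" -w write & WRITE_PID=$!
"$bdevperf" -m 0x20 -i 2 --json <(gen_nvmf_target_json) "${common[@]}" -w read  & READ_PID=$!
"$bdevperf" -m 0x40 -i 3 --json <(gen_nvmf_target_json) "${common[@]}" -w flush & FLUSH_PID=$!
"$bdevperf" -m 0x80 -i 4 --json <(gen_nvmf_target_json) "${common[@]}" -w unmap & UNMAP_PID=$!

wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"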
00:33:26.073  [2024-12-13 19:17:57.775272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:26.073  [2024-12-13 19:17:57.813661] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5
00:33:26.073  [2024-12-13 19:17:57.848983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:26.073  [2024-12-13 19:17:57.888499] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4
00:33:26.331  [2024-12-13 19:17:57.923885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:26.331  [2024-12-13 19:17:57.961125] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6
00:33:26.331  [2024-12-13 19:17:58.004825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:26.331  Running I/O for 1 seconds...
00:33:26.331  Running I/O for 1 seconds...
00:33:26.331  [2024-12-13 19:17:58.042335] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7
00:33:26.331  Running I/O for 1 seconds...
00:33:26.589  Running I/O for 1 seconds...
00:33:27.524      10198.00 IOPS,    39.84 MiB/s
[2024-12-13T19:17:59.348Z]      8440.00 IOPS,    32.97 MiB/s
00:33:27.524                                                                                                  Latency(us)
00:33:27.524  
[2024-12-13T19:17:59.348Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:33:27.524  Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:33:27.524  	 Nvme1n1             :       1.01   10245.63      40.02       0.00     0.00   12437.95    4081.11   14060.45
00:33:27.524  
[2024-12-13T19:17:59.348Z]  ===================================================================================================================
00:33:27.524  
[2024-12-13T19:17:59.348Z]  Total                       :              10245.63      40.02       0.00     0.00   12437.95    4081.11   14060.45
00:33:27.524  
00:33:27.524                                                                                                  Latency(us)
00:33:27.524  
[2024-12-13T19:17:59.348Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:33:27.524  Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:33:27.524  	 Nvme1n1             :       1.01    8514.25      33.26       0.00     0.00   14973.09    5987.61   24069.59
00:33:27.524  
[2024-12-13T19:17:59.348Z]  ===================================================================================================================
00:33:27.524  
[2024-12-13T19:17:59.348Z]  Total                       :               8514.25      33.26       0.00     0.00   14973.09    5987.61   24069.59
00:33:27.524     191800.00 IOPS,   749.22 MiB/s
00:33:27.524                                                                                                  Latency(us)
00:33:27.524  
[2024-12-13T19:17:59.348Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:33:27.524  Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:33:27.524  	 Nvme1n1             :       1.00  191450.75     747.85       0.00     0.00     664.88     283.00    1802.24
00:33:27.524  
[2024-12-13T19:17:59.348Z]  ===================================================================================================================
00:33:27.524  
[2024-12-13T19:17:59.348Z]  Total                       :             191450.75     747.85       0.00     0.00     664.88     283.00    1802.24
00:33:27.524       8937.00 IOPS,    34.91 MiB/s
00:33:27.525                                                                                                  Latency(us)
00:33:27.525  
[2024-12-13T19:17:59.349Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:33:27.525  Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:33:27.525  	 Nvme1n1             :       1.01    9028.12      35.27       0.00     0.00   14131.21    2517.18   20733.21
00:33:27.525  
[2024-12-13T19:17:59.349Z]  ===================================================================================================================
00:33:27.525  
[2024-12-13T19:17:59.349Z]  Total                       :               9028.12      35.27       0.00     0.00   14131.21    2517.18   20733.21
00:33:27.525   19:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 125381
00:33:27.525   19:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 125383
00:33:27.525   19:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 125387
00:33:27.525   19:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:33:27.525   19:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:27.525   19:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:33:27.525   19:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:27.525   19:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:33:27.525   19:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini
00:33:27.525   19:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup
00:33:27.525   19:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync
00:33:27.783   19:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:33:27.783   19:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e
00:33:27.783   19:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20}
00:33:27.783   19:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:33:27.783  rmmod nvme_tcp
00:33:27.783  rmmod nvme_fabrics
00:33:27.783  rmmod nvme_keyring
00:33:27.783   19:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:33:27.783   19:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e
00:33:27.783   19:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0
00:33:27.783   19:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 125340 ']'
00:33:27.783   19:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 125340
00:33:27.783   19:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 125340 ']'
00:33:27.783   19:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 125340
00:33:27.783    19:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname
00:33:27.783   19:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:33:27.783    19:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 125340
00:33:27.783   19:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:33:27.783   19:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:33:27.783   19:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 125340'
00:33:27.783  killing process with pid 125340
00:33:27.783   19:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 125340
00:33:27.783   19:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 125340
00:33:28.042   19:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:33:28.042   19:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:33:28.042   19:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:33:28.042   19:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr
00:33:28.042   19:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save
00:33:28.042   19:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:33:28.042   19:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore
00:33:28.042   19:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:33:28.042   19:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:33:28.042   19:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:33:28.042   19:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:33:28.042   19:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:33:28.042   19:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:33:28.042   19:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:33:28.042   19:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:33:28.042   19:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:33:28.042   19:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:33:28.042   19:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:33:28.042   19:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:33:28.042   19:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:33:28.042   19:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:33:28.042   19:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:33:28.042   19:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns
00:33:28.042   19:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:33:28.042   19:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:33:28.042    19:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:33:28.042   19:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0
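Teardown mirrors the setup: unload the kernel NVMe/TCP initiator modules, strip only the SPDK_NVMF-tagged iptables rules, and dismantle the veth/bridge/namespace topology. Roughly, as a sketch of the nvmf_tcp_fini/nvmf_veth_fini steps traced above (without the retry and ignore-errors wrapping):

# Best-effort removal of the kernel initiator modules.
modprobe -v -r nvme-tcp || true
modprobe -v -r nvme-fabrics || true

# Drop every rule the test added (all carry an SPDK_NVMF comment), keep everything else.
iptables-save | grep -v SPDK_NVMF | iptables-restore

# Detach the bridge-side peers, bring them down, then delete links, bridge, and namespace.
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" nomaster
    ip link set "$dev" down
done
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
ip netns delete nvmf_tgt_ns_spdk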
00:33:28.042  
00:33:28.042  real	0m3.527s
00:33:28.042  user	0m12.404s
00:33:28.042  sys	0m2.588s
00:33:28.042   19:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable
00:33:28.042   19:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:33:28.042  ************************************
00:33:28.042  END TEST nvmf_bdev_io_wait
00:33:28.042  ************************************
00:33:28.302   19:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode
00:33:28.302   19:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:33:28.302   19:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:33:28.302   19:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:33:28.302  ************************************
00:33:28.302  START TEST nvmf_queue_depth
00:33:28.302  ************************************
00:33:28.302   19:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode
00:33:28.302  * Looking for test storage...
00:33:28.302  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:33:28.302    19:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:33:28.302     19:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:33:28.302     19:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version
00:33:28.302    19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:33:28.302    19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:33:28.302    19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l
00:33:28.302    19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l
00:33:28.302    19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-:
00:33:28.302    19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1
00:33:28.302    19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-:
00:33:28.302    19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2
00:33:28.302    19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<'
00:33:28.302    19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2
00:33:28.302    19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1
00:33:28.302    19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:33:28.302    19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in
00:33:28.302    19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1
00:33:28.302    19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 ))
00:33:28.302    19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:33:28.302     19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1
00:33:28.302     19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1
00:33:28.302     19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:33:28.302     19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1
00:33:28.302    19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1
00:33:28.302     19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2
00:33:28.302     19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2
00:33:28.302     19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:33:28.302     19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2
00:33:28.302    19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2
00:33:28.302    19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:33:28.302    19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:33:28.302    19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0
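The cmp_versions walk above is scripts/common.sh deciding whether the installed lcov is new enough (1.15 vs 2) before enabling the branch/function coverage flags. A condensed sketch of the same element-wise comparison, not the full helper:

# lt 1.15 2  -> returns 0 (true) because 1 < 2 in the first field.
lt() {
    local IFS=.-: i
    local -a ver1=($1) ver2=($2)
    for ((i = 0; i < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); i++)); do
        ((${ver1[i]:-0} < ${ver2[i]:-0})) && return 0
        ((${ver1[i]:-0} > ${ver2[i]:-0})) && return 1
    done
    return 1    # equal is not "less than"
}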
00:33:28.302    19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:33:28.302    19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:33:28.302  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:28.303  		--rc genhtml_branch_coverage=1
00:33:28.303  		--rc genhtml_function_coverage=1
00:33:28.303  		--rc genhtml_legend=1
00:33:28.303  		--rc geninfo_all_blocks=1
00:33:28.303  		--rc geninfo_unexecuted_blocks=1
00:33:28.303  		
00:33:28.303  		'
00:33:28.303    19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:33:28.303  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:28.303  		--rc genhtml_branch_coverage=1
00:33:28.303  		--rc genhtml_function_coverage=1
00:33:28.303  		--rc genhtml_legend=1
00:33:28.303  		--rc geninfo_all_blocks=1
00:33:28.303  		--rc geninfo_unexecuted_blocks=1
00:33:28.303  		
00:33:28.303  		'
00:33:28.303    19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:33:28.303  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:28.303  		--rc genhtml_branch_coverage=1
00:33:28.303  		--rc genhtml_function_coverage=1
00:33:28.303  		--rc genhtml_legend=1
00:33:28.303  		--rc geninfo_all_blocks=1
00:33:28.303  		--rc geninfo_unexecuted_blocks=1
00:33:28.303  		
00:33:28.303  		'
00:33:28.303    19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:33:28.303  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:28.303  		--rc genhtml_branch_coverage=1
00:33:28.303  		--rc genhtml_function_coverage=1
00:33:28.303  		--rc genhtml_legend=1
00:33:28.303  		--rc geninfo_all_blocks=1
00:33:28.303  		--rc geninfo_unexecuted_blocks=1
00:33:28.303  		
00:33:28.303  		'
00:33:28.303   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:33:28.303     19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s
00:33:28.303    19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:33:28.303    19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:33:28.303    19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:33:28.303    19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:33:28.303    19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:33:28.303    19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:33:28.303    19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:33:28.303    19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:33:28.303    19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:33:28.303     19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:33:28.303    19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:33:28.303    19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:33:28.303    19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:33:28.303    19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:33:28.303    19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:33:28.303    19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:33:28.303    19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:33:28.303     19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob
00:33:28.303     19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:33:28.303     19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:33:28.303     19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:33:28.303      19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:28.303      19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:28.303      19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:28.303      19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH
00:33:28.303      19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:28.303    19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0
00:33:28.303    19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:33:28.303    19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:33:28.303    19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:33:28.303    19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:33:28.303    19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:33:28.303    19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:33:28.303    19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:33:28.303    19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:33:28.303    19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:33:28.303    19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0
00:33:28.303   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64
00:33:28.303   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512
00:33:28.303   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:33:28.303   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit
00:33:28.303   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:33:28.303   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:33:28.303   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs
00:33:28.303   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no
00:33:28.303   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns
00:33:28.303   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:33:28.303   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:33:28.303    19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:33:28.303   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:33:28.303   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:33:28.303   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:33:28.303   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:33:28.303   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:33:28.303   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init
00:33:28.303   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:33:28.303   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:33:28.303   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:33:28.303   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:33:28.303   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:33:28.303   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:33:28.303   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:33:28.303   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:33:28.303   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:33:28.304   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:33:28.304   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:33:28.304   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:33:28.304   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:33:28.304   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:33:28.304   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:33:28.304   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:33:28.304   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:33:28.304  Cannot find device "nvmf_init_br"
00:33:28.304   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@162 -- # true
00:33:28.304   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:33:28.304  Cannot find device "nvmf_init_br2"
00:33:28.304   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@163 -- # true
00:33:28.304   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:33:28.562  Cannot find device "nvmf_tgt_br"
00:33:28.562   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@164 -- # true
00:33:28.562   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:33:28.562  Cannot find device "nvmf_tgt_br2"
00:33:28.562   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@165 -- # true
00:33:28.562   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:33:28.562  Cannot find device "nvmf_init_br"
00:33:28.562   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@166 -- # true
00:33:28.562   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:33:28.562  Cannot find device "nvmf_init_br2"
00:33:28.562   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@167 -- # true
00:33:28.562   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:33:28.562  Cannot find device "nvmf_tgt_br"
00:33:28.562   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@168 -- # true
00:33:28.562   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:33:28.562  Cannot find device "nvmf_tgt_br2"
00:33:28.562   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@169 -- # true
00:33:28.562   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:33:28.562  Cannot find device "nvmf_br"
00:33:28.562   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@170 -- # true
00:33:28.562   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:33:28.562  Cannot find device "nvmf_init_if"
00:33:28.562   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@171 -- # true
00:33:28.562   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:33:28.562  Cannot find device "nvmf_init_if2"
00:33:28.562   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@172 -- # true
00:33:28.563   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:33:28.563  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:33:28.563   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@173 -- # true
00:33:28.563   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:33:28.563  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:33:28.563   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@174 -- # true
00:33:28.563   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:33:28.563   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:33:28.563   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:33:28.563   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:33:28.563   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:33:28.563   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:33:28.563   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:33:28.563   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:33:28.563   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:33:28.563   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:33:28.563   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:33:28.563   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:33:28.563   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:33:28.563   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:33:28.563   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:33:28.563   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:33:28.563   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:33:28.563   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:33:28.563   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:33:28.563   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:33:28.563   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:33:28.563   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:33:28.563   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:33:28.563   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:33:28.822   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:33:28.822   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:33:28.822   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:33:28.822   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:33:28.822   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:33:28.822   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:33:28.822   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:33:28.822   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:33:28.822   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:33:28.822  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:33:28.822  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms
00:33:28.822  
00:33:28.822  --- 10.0.0.3 ping statistics ---
00:33:28.822  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:33:28.822  rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms
00:33:28.822   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:33:28.822  PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:33:28.822  64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.064 ms
00:33:28.822  
00:33:28.822  --- 10.0.0.4 ping statistics ---
00:33:28.822  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:33:28.822  rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms
00:33:28.822   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:33:28.822  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:33:28.822  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms
00:33:28.822  
00:33:28.822  --- 10.0.0.1 ping statistics ---
00:33:28.822  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:33:28.822  rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms
00:33:28.822   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:33:28.822  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:33:28.822  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms
00:33:28.822  
00:33:28.822  --- 10.0.0.2 ping statistics ---
00:33:28.822  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:33:28.822  rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms
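The nvmf_veth_init trace above builds an isolated NVMe/TCP test network: two veth pairs for the initiator side and two for the target side (the target ends moved into the nvmf_tgt_ns_spdk namespace), the peer ends enslaved to the nvmf_br bridge, iptables rules opening TCP port 4420, and ping checks in both directions. A condensed sketch of the same steps for one interface pair, with names and addresses taken from the trace (the real logic lives in nvmf/common.sh and also handles the second pair and the error paths):

    ip netns add nvmf_tgt_ns_spdk                                  # namespace that will host nvmf_tgt
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # move the target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if                       # NVMF_FIRST_INITIATOR_IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # NVMF_FIRST_TARGET_IP
    ip link add nvmf_br type bridge                                 # bridge joining the *_br peer ends
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # bring every interface up (as in the trace), then open the NVMe/TCP port and verify reachability
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3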
00:33:28.822   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:33:28.822   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0
00:33:28.822   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:33:28.822   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:33:28.822   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:33:28.822   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:33:28.822   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:33:28.822   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:33:28.822   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:33:28.822   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2
00:33:28.822   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:33:28.822   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable
00:33:28.822   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:33:28.822   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=125640
00:33:28.822   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2
00:33:28.822   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 125640
00:33:28.822   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 125640 ']'
00:33:28.822   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:33:28.822   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100
00:33:28.822  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:33:28.822   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:33:28.822   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable
00:33:28.822   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:33:28.822  [2024-12-13 19:18:00.538631] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:33:28.822  [2024-12-13 19:18:00.539654] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:33:28.822  [2024-12-13 19:18:00.539715] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:33:29.081  [2024-12-13 19:18:00.693417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:29.081  [2024-12-13 19:18:00.734685] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:33:29.081  [2024-12-13 19:18:00.734737] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:33:29.081  [2024-12-13 19:18:00.734750] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:33:29.081  [2024-12-13 19:18:00.734761] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:33:29.081  [2024-12-13 19:18:00.734771] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:33:29.081  [2024-12-13 19:18:00.735173] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:33:29.081  [2024-12-13 19:18:00.828925] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:33:29.081  [2024-12-13 19:18:00.829297] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:33:29.081   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:33:29.081   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0
00:33:29.081   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:33:29.081   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable
00:33:29.081   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:33:29.081   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
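nvmfappstart (traced above) launches the SPDK target inside the test namespace with the interrupt-mode flag that this test group exercises, records its pid, and waits for the RPC socket before any configuration RPCs are issued. A minimal sketch of the equivalent manual invocation, reusing the exact command line from the trace (waitforlisten is the test helper that polls /var/tmp/spdk.sock):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
    nvmfpid=$!
    # do not send RPCs until the target is up and listening on /var/tmp/spdk.sock
    waitforlisten "$nvmfpid"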
00:33:29.081   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:33:29.081   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:29.340   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:33:29.340  [2024-12-13 19:18:00.912023] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:33:29.340   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:29.340   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:33:29.340   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:29.340   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:33:29.340  Malloc0
00:33:29.340   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:29.340   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:33:29.340   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:29.340   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:33:29.340   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:29.340   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:33:29.340   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:29.340   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:33:29.340   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:29.340   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:33:29.340   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:29.340   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:33:29.340  [2024-12-13 19:18:00.976079] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:33:29.340   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
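The RPCs traced above build the target side of the queue-depth test: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks (MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE), a subsystem that allows any host, the bdev attached as a namespace, and a TCP listener on 10.0.0.3:4420. Issued directly through rpc.py the same sequence would look roughly like this (flags copied verbatim from the trace; rpc_cmd is a thin wrapper around this script):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0                      # 64 MiB bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0  # expose Malloc0 as a namespace
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420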
00:33:29.340   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=125677
00:33:29.340   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10
00:33:29.340   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:33:29.340   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 125677 /var/tmp/bdevperf.sock
00:33:29.340   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 125677 ']'
00:33:29.340   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:33:29.340   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100
00:33:29.340   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:33:29.340  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:33:29.340   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable
00:33:29.340   19:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:33:29.340  [2024-12-13 19:18:01.044420] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:33:29.340  [2024-12-13 19:18:01.044512] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125677 ]
00:33:29.599  [2024-12-13 19:18:01.198058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:29.599  [2024-12-13 19:18:01.240175] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:33:29.599   19:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:33:29.599   19:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0
00:33:29.599   19:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:33:29.599   19:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:29.599   19:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:33:29.858  NVMe0n1
00:33:29.858   19:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:29.858   19:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
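On the initiator side, bdevperf runs as a standalone application with its own RPC socket: it is started idle (-z), an NVMe-oF controller is attached to the subsystem exported above, and perform_tests triggers the actual 10-second verify run at queue depth 1024 with 4 KiB I/Os. Condensed from the trace, with paths abbreviated:

    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
    # progress samples and the final latency table/JSON are printed below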
00:33:29.858  Running I/O for 10 seconds...
00:33:32.168       9837.00 IOPS,    38.43 MiB/s
[2024-12-13T19:18:04.928Z]     10201.00 IOPS,    39.85 MiB/s
[2024-12-13T19:18:05.901Z]     10277.67 IOPS,    40.15 MiB/s
[2024-12-13T19:18:06.851Z]     10477.00 IOPS,    40.93 MiB/s
[2024-12-13T19:18:07.788Z]     10600.40 IOPS,    41.41 MiB/s
[2024-12-13T19:18:08.725Z]     10642.00 IOPS,    41.57 MiB/s
[2024-12-13T19:18:09.662Z]     10695.00 IOPS,    41.78 MiB/s
[2024-12-13T19:18:10.599Z]     10743.75 IOPS,    41.97 MiB/s
[2024-12-13T19:18:11.977Z]     10769.44 IOPS,    42.07 MiB/s
[2024-12-13T19:18:11.977Z]     10798.20 IOPS,    42.18 MiB/s
00:33:40.153                                                                                                  Latency(us)
00:33:40.153  
[2024-12-13T19:18:11.977Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:33:40.153  Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:33:40.153  	 Verification LBA range: start 0x0 length 0x4000
00:33:40.153  	 NVMe0n1             :      10.06   10833.54      42.32       0.00     0.00   94153.23   18350.08   65774.31
00:33:40.153  
[2024-12-13T19:18:11.977Z]  ===================================================================================================================
00:33:40.153  
[2024-12-13T19:18:11.977Z]  Total                       :              10833.54      42.32       0.00     0.00   94153.23   18350.08   65774.31
00:33:40.153  {
00:33:40.153    "results": [
00:33:40.153      {
00:33:40.153        "job": "NVMe0n1",
00:33:40.153        "core_mask": "0x1",
00:33:40.153        "workload": "verify",
00:33:40.153        "status": "finished",
00:33:40.153        "verify_range": {
00:33:40.153          "start": 0,
00:33:40.153          "length": 16384
00:33:40.153        },
00:33:40.153        "queue_depth": 1024,
00:33:40.153        "io_size": 4096,
00:33:40.153        "runtime": 10.060607,
00:33:40.153        "iops": 10833.541157109108,
00:33:40.153        "mibps": 42.31852014495745,
00:33:40.153        "io_failed": 0,
00:33:40.153        "io_timeout": 0,
00:33:40.153        "avg_latency_us": 94153.2272332248,
00:33:40.153        "min_latency_us": 18350.08,
00:33:40.153        "max_latency_us": 65774.31272727273
00:33:40.153      }
00:33:40.153    ],
00:33:40.153    "core_count": 1
00:33:40.153  }
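The JSON block above is bdevperf's machine-readable result for the run: roughly 10.8k IOPS / 42 MiB/s with an average latency near 94 ms at queue depth 1024. A hypothetical one-liner for pulling the headline numbers out of it, assuming the block were saved to a file named results.json (which this script does not do):

    jq '.results[0] | {iops, mibps, avg_latency_us}' results.json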
00:33:40.153   19:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 125677
00:33:40.153   19:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 125677 ']'
00:33:40.153   19:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 125677
00:33:40.153    19:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname
00:33:40.153   19:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:33:40.153    19:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 125677
00:33:40.153   19:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:33:40.153   19:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:33:40.153  killing process with pid 125677
00:33:40.153   19:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 125677'
00:33:40.153  Received shutdown signal, test time was about 10.000000 seconds
00:33:40.153  
00:33:40.153                                                                                                  Latency(us)
00:33:40.153  
[2024-12-13T19:18:11.977Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:33:40.153  
[2024-12-13T19:18:11.977Z]  ===================================================================================================================
00:33:40.153  
[2024-12-13T19:18:11.977Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:33:40.153   19:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 125677
00:33:40.153   19:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 125677
00:33:40.153   19:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:33:40.153   19:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini
00:33:40.153   19:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup
00:33:40.153   19:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync
00:33:40.153   19:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:33:40.153   19:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e
00:33:40.153   19:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20}
00:33:40.153   19:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:33:40.153  rmmod nvme_tcp
00:33:40.153  rmmod nvme_fabrics
00:33:40.153  rmmod nvme_keyring
00:33:40.153   19:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:33:40.153   19:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e
00:33:40.153   19:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0
00:33:40.153   19:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 125640 ']'
00:33:40.153   19:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 125640
00:33:40.153   19:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 125640 ']'
00:33:40.154   19:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 125640
00:33:40.154    19:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname
00:33:40.413   19:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:33:40.413    19:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 125640
00:33:40.413   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:33:40.413   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:33:40.413  killing process with pid 125640
00:33:40.413   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 125640'
00:33:40.413   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 125640
00:33:40.413   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 125640
00:33:40.413   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:33:40.413   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:33:40.413   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:33:40.413   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr
00:33:40.413   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save
00:33:40.413   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:33:40.413   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore
00:33:40.413   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:33:40.413   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:33:40.413   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:33:40.672   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:33:40.672   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:33:40.672   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:33:40.672   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:33:40.672   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:33:40.672   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:33:40.672   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:33:40.672   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:33:40.672   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:33:40.672   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:33:40.672   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:33:40.672   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:33:40.672   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns
00:33:40.672   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:33:40.672   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:33:40.672    19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:33:40.672   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0
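nvmftestfini (traced above) undoes the earlier setup: the SPDK_NVMF-tagged iptables rules are filtered out via iptables-save/iptables-restore, each interface is detached from the bridge and brought down, the bridge and veth pairs are deleted, and the nvmf_tgt_ns_spdk namespace is removed. A condensed sketch of the teardown for one interface pair, as traced (the real nvmf_veth_fini also handles the second pair):

    iptables-save | grep -v SPDK_NVMF | iptables-restore            # drop only the rules this test added
    ip link set nvmf_init_br nomaster
    ip link set nvmf_init_br down
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    # _remove_spdk_ns then deletes the nvmf_tgt_ns_spdk namespace itself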
00:33:40.672  
00:33:40.672  real	0m12.545s
00:33:40.672  user	0m20.892s
00:33:40.672  sys	0m2.251s
00:33:40.672   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable
00:33:40.672  ************************************
00:33:40.672  END TEST nvmf_queue_depth
00:33:40.672  ************************************
00:33:40.672   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:33:40.672   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode
00:33:40.672   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:33:40.672   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:33:40.672   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:33:40.672  ************************************
00:33:40.672  START TEST nvmf_target_multipath
00:33:40.672  ************************************
00:33:40.672   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode
00:33:40.933  * Looking for test storage...
00:33:40.933  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:33:40.933    19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:33:40.933     19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version
00:33:40.933     19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:33:40.933    19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:33:40.933    19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:33:40.933    19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l
00:33:40.933    19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l
00:33:40.933    19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-:
00:33:40.933    19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1
00:33:40.933    19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-:
00:33:40.933    19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2
00:33:40.933    19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<'
00:33:40.933    19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2
00:33:40.933    19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1
00:33:40.933    19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:33:40.933    19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in
00:33:40.933    19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1
00:33:40.933    19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 ))
00:33:40.933    19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:33:40.933     19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1
00:33:40.933     19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1
00:33:40.933     19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:33:40.933     19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1
00:33:40.933    19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1
00:33:40.933     19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2
00:33:40.933     19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2
00:33:40.933     19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:33:40.933     19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2
00:33:40.933    19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2
00:33:40.933    19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:33:40.933    19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:33:40.933    19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0
00:33:40.933    19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:33:40.933    19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:33:40.933  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:40.933  		--rc genhtml_branch_coverage=1
00:33:40.933  		--rc genhtml_function_coverage=1
00:33:40.933  		--rc genhtml_legend=1
00:33:40.933  		--rc geninfo_all_blocks=1
00:33:40.933  		--rc geninfo_unexecuted_blocks=1
00:33:40.933  		
00:33:40.933  		'
00:33:40.933    19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:33:40.933  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:40.933  		--rc genhtml_branch_coverage=1
00:33:40.933  		--rc genhtml_function_coverage=1
00:33:40.933  		--rc genhtml_legend=1
00:33:40.933  		--rc geninfo_all_blocks=1
00:33:40.933  		--rc geninfo_unexecuted_blocks=1
00:33:40.933  		
00:33:40.933  		'
00:33:40.933    19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:33:40.933  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:40.933  		--rc genhtml_branch_coverage=1
00:33:40.933  		--rc genhtml_function_coverage=1
00:33:40.933  		--rc genhtml_legend=1
00:33:40.933  		--rc geninfo_all_blocks=1
00:33:40.933  		--rc geninfo_unexecuted_blocks=1
00:33:40.933  		
00:33:40.933  		'
00:33:40.933    19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:33:40.933  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:40.933  		--rc genhtml_branch_coverage=1
00:33:40.933  		--rc genhtml_function_coverage=1
00:33:40.933  		--rc genhtml_legend=1
00:33:40.933  		--rc geninfo_all_blocks=1
00:33:40.933  		--rc geninfo_unexecuted_blocks=1
00:33:40.933  		
00:33:40.933  		'
00:33:40.933   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:33:40.933     19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s
00:33:40.933    19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:33:40.933    19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:33:40.933    19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:33:40.933    19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:33:40.933    19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:33:40.933    19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:33:40.933    19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:33:40.933    19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:33:40.933    19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:33:40.933     19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:33:40.933    19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:33:40.933    19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:33:40.933    19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:33:40.933    19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:33:40.933    19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:33:40.933    19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:33:40.933    19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:33:40.933     19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob
00:33:40.933     19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:33:40.933     19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:33:40.933     19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:33:40.934      19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:40.934      19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:40.934      19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:40.934      19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH
00:33:40.934      19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:40.934    19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0
00:33:40.934    19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:33:40.934    19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:33:40.934    19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:33:40.934    19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:33:40.934    19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:33:40.934    19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:33:40.934    19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:33:40.934    19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:33:40.934    19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:33:40.934    19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0
00:33:40.934   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64
00:33:40.934   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:33:40.934   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1
00:33:40.934   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:33:40.934   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit
00:33:40.934   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:33:40.934   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:33:40.934   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs
00:33:40.934   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no
00:33:40.934   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns
00:33:40.934   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:33:40.934   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:33:40.934    19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:33:40.934   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:33:40.934   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:33:40.934   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:33:40.934   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:33:40.934   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:33:40.934   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init
00:33:40.934   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:33:40.934   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:33:40.934   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:33:40.934   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:33:40.934   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:33:40.934   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:33:40.934   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:33:40.934   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:33:40.934   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:33:40.934   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:33:40.934   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:33:40.934   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:33:40.934   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:33:40.934   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:33:40.934   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:33:40.934   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:33:40.934   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:33:40.934  Cannot find device "nvmf_init_br"
00:33:40.934   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@162 -- # true
00:33:40.934   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:33:40.934  Cannot find device "nvmf_init_br2"
00:33:40.934   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@163 -- # true
00:33:40.934   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:33:40.934  Cannot find device "nvmf_tgt_br"
00:33:40.934   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@164 -- # true
00:33:40.934   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:33:40.934  Cannot find device "nvmf_tgt_br2"
00:33:40.934   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@165 -- # true
00:33:40.934   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:33:40.934  Cannot find device "nvmf_init_br"
00:33:40.934   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@166 -- # true
00:33:40.934   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:33:41.194  Cannot find device "nvmf_init_br2"
00:33:41.194   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@167 -- # true
00:33:41.194   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:33:41.194  Cannot find device "nvmf_tgt_br"
00:33:41.194   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@168 -- # true
00:33:41.194   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:33:41.194  Cannot find device "nvmf_tgt_br2"
00:33:41.194   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@169 -- # true
00:33:41.194   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:33:41.194  Cannot find device "nvmf_br"
00:33:41.194   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@170 -- # true
00:33:41.194   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:33:41.194  Cannot find device "nvmf_init_if"
00:33:41.194   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@171 -- # true
00:33:41.194   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:33:41.194  Cannot find device "nvmf_init_if2"
00:33:41.194   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@172 -- # true
00:33:41.194   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:33:41.194  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:33:41.194   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@173 -- # true
00:33:41.194   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:33:41.194  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:33:41.194   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@174 -- # true
00:33:41.194   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:33:41.194   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:33:41.194   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:33:41.194   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:33:41.194   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:33:41.194   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:33:41.194   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:33:41.194   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:33:41.194   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:33:41.194   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:33:41.194   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:33:41.194   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:33:41.194   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:33:41.194   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:33:41.194   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:33:41.194   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:33:41.194   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:33:41.194   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:33:41.194   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:33:41.194   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:33:41.194   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:33:41.194   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:33:41.194   19:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:33:41.194   19:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:33:41.453   19:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:33:41.453   19:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:33:41.453   19:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:33:41.453   19:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:33:41.453   19:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:33:41.453   19:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:33:41.453   19:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:33:41.453   19:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:33:41.453   19:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:33:41.453  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:33:41.453  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.097 ms
00:33:41.453  
00:33:41.453  --- 10.0.0.3 ping statistics ---
00:33:41.453  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:33:41.453  rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms
00:33:41.453   19:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:33:41.453  PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:33:41.453  64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms
00:33:41.453  
00:33:41.453  --- 10.0.0.4 ping statistics ---
00:33:41.453  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:33:41.453  rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms
00:33:41.453   19:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:33:41.453  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:33:41.453  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms
00:33:41.453  
00:33:41.453  --- 10.0.0.1 ping statistics ---
00:33:41.453  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:33:41.453  rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms
00:33:41.453   19:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:33:41.453  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:33:41.453  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms
00:33:41.453  
00:33:41.453  --- 10.0.0.2 ping statistics ---
00:33:41.453  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:33:41.453  rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms
00:33:41.453   19:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:33:41.453   19:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0
00:33:41.453   19:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:33:41.453   19:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:33:41.453   19:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:33:41.453   19:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:33:41.453   19:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:33:41.453   19:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:33:41.453   19:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:33:41.453   19:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']'
00:33:41.453   19:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']'
00:33:41.453   19:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF
00:33:41.453   19:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:33:41.453   19:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable
00:33:41.453   19:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x
00:33:41.453   19:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=126038
00:33:41.453   19:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF
00:33:41.454   19:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 126038
00:33:41.454   19:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 126038 ']'
00:33:41.454   19:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:33:41.454   19:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100
00:33:41.454  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:33:41.454   19:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:33:41.454   19:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable
00:33:41.454   19:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x
00:33:41.454  [2024-12-13 19:18:13.174854] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:33:41.454  [2024-12-13 19:18:13.176081] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:33:41.454  [2024-12-13 19:18:13.176151] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:33:41.713  [2024-12-13 19:18:13.326736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:33:41.713  [2024-12-13 19:18:13.370723] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:33:41.713  [2024-12-13 19:18:13.370782] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:33:41.713  [2024-12-13 19:18:13.370796] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:33:41.713  [2024-12-13 19:18:13.370807] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:33:41.713  [2024-12-13 19:18:13.370816] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:33:41.713  [2024-12-13 19:18:13.372120] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:33:41.713  [2024-12-13 19:18:13.372677] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:33:41.713  [2024-12-13 19:18:13.372754] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:33:41.713  [2024-12-13 19:18:13.372757] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:33:41.713  [2024-12-13 19:18:13.471809] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:33:41.713  [2024-12-13 19:18:13.472162] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:33:41.713  [2024-12-13 19:18:13.472802] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
00:33:41.713  [2024-12-13 19:18:13.473198] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:33:41.713  [2024-12-13 19:18:13.473637] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode.
00:33:41.713   19:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:33:41.713   19:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0
00:33:41.713   19:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:33:41.713   19:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable
00:33:41.713   19:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x
00:33:41.972   19:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:33:41.972   19:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:33:42.231  [2024-12-13 19:18:13.825930] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:33:42.231   19:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:33:42.490  Malloc0
00:33:42.490   19:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r
00:33:42.749   19:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:33:43.008   19:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:33:43.267  [2024-12-13 19:18:14.846002] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:33:43.267   19:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420
00:33:43.525  [2024-12-13 19:18:15.137921] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 ***
00:33:43.525   19:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid=8f716199-a3ae-4f70-9a3a-0556e5b7497a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G
00:33:43.525   19:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid=8f716199-a3ae-4f70-9a3a-0556e5b7497a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G
00:33:43.783   19:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME
00:33:43.783   19:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0
00:33:43.783   19:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:33:43.783   19:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:33:43.783   19:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2
00:33:45.688   19:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:33:45.688    19:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:33:45.688    19:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:33:45.688   19:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:33:45.688   19:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:33:45.688   19:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0
00:33:45.688    19:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME
00:33:45.688    19:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s
00:33:45.688    19:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/*
00:33:45.688    19:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]]
00:33:45.688    19:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]]
00:33:45.688    19:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0
00:33:45.688    19:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0
00:33:45.688   19:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0
00:33:45.688   19:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*)
00:33:45.688   19:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}")
00:33:45.688   19:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 ))
00:33:45.688   19:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1
00:33:45.688   19:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1
00:33:45.688   19:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized
00:33:45.688   19:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized
00:33:45.688   19:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20
00:33:45.688   19:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state
00:33:45.688   19:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]]
00:33:45.688   19:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]]
00:33:45.688   19:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized
00:33:45.688   19:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized
00:33:45.688   19:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20
00:33:45.688   19:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state
00:33:45.688   19:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]]
00:33:45.688   19:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]]
00:33:45.688   19:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa
00:33:45.688   19:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=126161
00:33:45.688   19:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1
00:33:45.688   19:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v
00:33:45.688  [global]
00:33:45.688  thread=1
00:33:45.688  invalidate=1
00:33:45.688  rw=randrw
00:33:45.688  time_based=1
00:33:45.688  runtime=6
00:33:45.688  ioengine=libaio
00:33:45.688  direct=1
00:33:45.688  bs=4096
00:33:45.688  iodepth=128
00:33:45.688  norandommap=0
00:33:45.688  numjobs=1
00:33:45.688  
00:33:45.688  verify_dump=1
00:33:45.688  verify_backlog=512
00:33:45.688  verify_state_save=0
00:33:45.688  do_verify=1
00:33:45.688  verify=crc32c-intel
00:33:45.688  [job0]
00:33:45.688  filename=/dev/nvme0n1
00:33:45.688  Could not set queue depth (nvme0n1)
00:33:45.947  job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:33:45.947  fio-3.35
00:33:45.947  Starting 1 thread
00:33:46.883   19:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible
00:33:47.142   19:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized
00:33:47.401   19:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible
00:33:47.401   19:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible
00:33:47.401   19:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20
00:33:47.401   19:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state
00:33:47.401   19:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]]
00:33:47.401   19:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]]
00:33:47.401   19:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized
00:33:47.401   19:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized
00:33:47.401   19:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20
00:33:47.401   19:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state
00:33:47.401   19:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]]
00:33:47.401   19:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]]
00:33:47.401   19:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s
00:33:48.340   19:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 ))
00:33:48.340   19:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]]
00:33:48.340   19:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]]
00:33:48.340   19:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
00:33:48.622   19:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible
00:33:48.888   19:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized
00:33:48.888   19:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized
00:33:48.889   19:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20
00:33:48.889   19:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state
00:33:48.889   19:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]]
00:33:48.889   19:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]]
00:33:48.889   19:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible
00:33:48.889   19:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible
00:33:48.889   19:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20
00:33:48.889   19:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state
00:33:48.889   19:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]]
00:33:48.889   19:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]]
00:33:48.889   19:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s
00:33:50.266   19:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 ))
00:33:50.266   19:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]]
00:33:50.266   19:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]]
00:33:50.266   19:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 126161
00:33:52.170  
00:33:52.170  job0: (groupid=0, jobs=1): err= 0: pid=126182: Fri Dec 13 19:18:23 2024
00:33:52.170    read: IOPS=11.5k, BW=44.8MiB/s (46.9MB/s)(269MiB/6003msec)
00:33:52.170      slat (usec): min=3, max=5341, avg=48.53, stdev=236.44
00:33:52.170      clat (usec): min=1175, max=54053, avg=7478.96, stdev=2629.69
00:33:52.170       lat (usec): min=1208, max=54063, avg=7527.49, stdev=2635.87
00:33:52.170      clat percentiles (usec):
00:33:52.170       |  1.00th=[ 4359],  5.00th=[ 5342], 10.00th=[ 5866], 20.00th=[ 6390],
00:33:52.170       | 30.00th=[ 6652], 40.00th=[ 6915], 50.00th=[ 7177], 60.00th=[ 7504],
00:33:52.170       | 70.00th=[ 7832], 80.00th=[ 8291], 90.00th=[ 9110], 95.00th=[10028],
00:33:52.170       | 99.00th=[12256], 99.50th=[13435], 99.90th=[49546], 99.95th=[50594],
00:33:52.170       | 99.99th=[53216]
00:33:52.170     bw (  KiB/s): min=12936, max=28888, per=52.43%, avg=24027.64, stdev=5039.14, samples=11
00:33:52.170     iops        : min= 3234, max= 7222, avg=6006.91, stdev=1259.79, samples=11
00:33:52.170    write: IOPS=6676, BW=26.1MiB/s (27.3MB/s)(142MiB/5442msec); 0 zone resets
00:33:52.170      slat (usec): min=5, max=6684, avg=60.40, stdev=141.05
00:33:52.170      clat (usec): min=689, max=53206, avg=6788.15, stdev=2112.03
00:33:52.170       lat (usec): min=870, max=53231, avg=6848.56, stdev=2116.18
00:33:52.170      clat percentiles (usec):
00:33:52.171       |  1.00th=[ 3654],  5.00th=[ 4817], 10.00th=[ 5473], 20.00th=[ 5932],
00:33:52.171       | 30.00th=[ 6259], 40.00th=[ 6456], 50.00th=[ 6652], 60.00th=[ 6915],
00:33:52.171       | 70.00th=[ 7177], 80.00th=[ 7504], 90.00th=[ 7963], 95.00th=[ 8455],
00:33:52.171       | 99.00th=[11076], 99.50th=[12780], 99.90th=[48497], 99.95th=[51119],
00:33:52.171       | 99.99th=[52167]
00:33:52.171     bw (  KiB/s): min=13600, max=28672, per=89.99%, avg=24032.00, stdev=4865.52, samples=11
00:33:52.171     iops        : min= 3400, max= 7168, avg=6008.00, stdev=1216.38, samples=11
00:33:52.171    lat (usec)   : 750=0.01%, 1000=0.01%
00:33:52.171    lat (msec)   : 2=0.08%, 4=0.86%, 10=94.93%, 20=3.83%, 50=0.22%
00:33:52.171    lat (msec)   : 100=0.06%
00:33:52.171    cpu          : usr=6.16%, sys=24.56%, ctx=8276, majf=0, minf=78
00:33:52.171    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7%
00:33:52.171       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:33:52.171       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:33:52.171       issued rwts: total=68778,36334,0,0 short=0,0,0,0 dropped=0,0,0,0
00:33:52.171       latency   : target=0, window=0, percentile=100.00%, depth=128
00:33:52.171  
00:33:52.171  Run status group 0 (all jobs):
00:33:52.171     READ: bw=44.8MiB/s (46.9MB/s), 44.8MiB/s-44.8MiB/s (46.9MB/s-46.9MB/s), io=269MiB (282MB), run=6003-6003msec
00:33:52.171    WRITE: bw=26.1MiB/s (27.3MB/s), 26.1MiB/s-26.1MiB/s (27.3MB/s-27.3MB/s), io=142MiB (149MB), run=5442-5442msec
00:33:52.171  
00:33:52.171  Disk stats (read/write):
00:33:52.171    nvme0n1: ios=67730/35701, merge=0/0, ticks=465094/227210, in_queue=692304, util=98.55%
00:33:52.171   19:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized
00:33:52.171   19:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized
00:33:52.739   19:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized
00:33:52.739   19:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized
00:33:52.739   19:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20
00:33:52.739   19:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state
00:33:52.739   19:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]]
00:33:52.739   19:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]]
00:33:52.739   19:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized
00:33:52.739   19:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized
00:33:52.739   19:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20
00:33:52.739   19:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state
00:33:52.739   19:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]]
00:33:52.739   19:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]]
00:33:52.739   19:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s
00:33:53.675   19:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 ))
00:33:53.675   19:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]]
00:33:53.675   19:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]]
00:33:53.675   19:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin
00:33:53.675   19:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=126301
00:33:53.675   19:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v
00:33:53.675   19:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1
00:33:53.675  [global]
00:33:53.675  thread=1
00:33:53.675  invalidate=1
00:33:53.675  rw=randrw
00:33:53.675  time_based=1
00:33:53.675  runtime=6
00:33:53.675  ioengine=libaio
00:33:53.675  direct=1
00:33:53.675  bs=4096
00:33:53.675  iodepth=128
00:33:53.675  norandommap=0
00:33:53.676  numjobs=1
00:33:53.676  
00:33:53.676  verify_dump=1
00:33:53.676  verify_backlog=512
00:33:53.676  verify_state_save=0
00:33:53.676  do_verify=1
00:33:53.676  verify=crc32c-intel
00:33:53.676  [job0]
00:33:53.676  filename=/dev/nvme0n1
00:33:53.676  Could not set queue depth (nvme0n1)
00:33:53.676  job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:33:53.676  fio-3.35
00:33:53.676  Starting 1 thread
00:33:54.610   19:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible
00:33:54.869   19:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized
00:33:55.128   19:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible
00:33:55.128   19:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible
00:33:55.128   19:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20
00:33:55.128   19:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state
00:33:55.128   19:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]]
00:33:55.128   19:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]]
00:33:55.128   19:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized
00:33:55.128   19:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized
00:33:55.128   19:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20
00:33:55.128   19:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state
00:33:55.128   19:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]]
00:33:55.128   19:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]]
00:33:55.128   19:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s
00:33:56.066   19:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 ))
00:33:56.066   19:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]]
00:33:56.066   19:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]]
00:33:56.066   19:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
00:33:56.634   19:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible
00:33:56.891   19:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized
00:33:56.891   19:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized
00:33:56.891   19:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20
00:33:56.891   19:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state
00:33:56.891   19:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]]
00:33:56.891   19:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]]
00:33:56.891   19:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible
00:33:56.891   19:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible
00:33:56.891   19:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20
00:33:56.891   19:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state
00:33:56.891   19:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]]
00:33:56.891   19:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]]
00:33:56.891   19:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s
00:33:57.827   19:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 ))
00:33:57.827   19:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]]
00:33:57.827   19:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]]
00:33:57.827   19:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 126301
00:33:59.732  
00:33:59.732  job0: (groupid=0, jobs=1): err= 0: pid=126328: Fri Dec 13 19:18:31 2024
00:33:59.732    read: IOPS=12.1k, BW=47.4MiB/s (49.7MB/s)(285MiB/6003msec)
00:33:59.732      slat (usec): min=5, max=7205, avg=40.53, stdev=218.58
00:33:59.732      clat (usec): min=639, max=19407, avg=7142.03, stdev=1749.75
00:33:59.732       lat (usec): min=669, max=19416, avg=7182.56, stdev=1764.12
00:33:59.732      clat percentiles (usec):
00:33:59.732       |  1.00th=[ 2835],  5.00th=[ 4047], 10.00th=[ 4883], 20.00th=[ 5932],
00:33:59.732       | 30.00th=[ 6456], 40.00th=[ 6849], 50.00th=[ 7177], 60.00th=[ 7439],
00:33:59.732       | 70.00th=[ 7832], 80.00th=[ 8356], 90.00th=[ 9110], 95.00th=[10028],
00:33:59.732       | 99.00th=[11731], 99.50th=[12518], 99.90th=[15270], 99.95th=[16909],
00:33:59.732       | 99.99th=[19006]
00:33:59.732     bw (  KiB/s): min=11776, max=40352, per=53.21%, avg=25823.27, stdev=7689.48, samples=11
00:33:59.732     iops        : min= 2944, max=10088, avg=6455.73, stdev=1922.39, samples=11
00:33:59.732    write: IOPS=7155, BW=28.0MiB/s (29.3MB/s)(148MiB/5293msec); 0 zone resets
00:33:59.732      slat (usec): min=11, max=2610, avg=52.16, stdev=122.02
00:33:59.732      clat (usec): min=383, max=16816, avg=6381.47, stdev=1563.80
00:33:59.732       lat (usec): min=464, max=16854, avg=6433.64, stdev=1574.87
00:33:59.732      clat percentiles (usec):
00:33:59.732       |  1.00th=[ 2638],  5.00th=[ 3458], 10.00th=[ 4080], 20.00th=[ 5080],
00:33:59.732       | 30.00th=[ 5866], 40.00th=[ 6325], 50.00th=[ 6587], 60.00th=[ 6915],
00:33:59.732       | 70.00th=[ 7177], 80.00th=[ 7504], 90.00th=[ 8029], 95.00th=[ 8455],
00:33:59.732       | 99.00th=[10159], 99.50th=[11469], 99.90th=[14091], 99.95th=[14746],
00:33:59.732       | 99.99th=[16581]
00:33:59.732     bw (  KiB/s): min=12424, max=40848, per=90.14%, avg=25801.27, stdev=7558.68, samples=11
00:33:59.732     iops        : min= 3106, max=10212, avg=6450.27, stdev=1889.68, samples=11
00:33:59.732    lat (usec)   : 500=0.01%, 750=0.01%, 1000=0.01%
00:33:59.732    lat (msec)   : 2=0.17%, 4=6.11%, 10=89.99%, 20=3.72%
00:33:59.732    cpu          : usr=5.83%, sys=24.56%, ctx=8788, majf=0, minf=90
00:33:59.732    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7%
00:33:59.732       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:33:59.732       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:33:59.732       issued rwts: total=72833,37874,0,0 short=0,0,0,0 dropped=0,0,0,0
00:33:59.732       latency   : target=0, window=0, percentile=100.00%, depth=128
00:33:59.733  
00:33:59.733  Run status group 0 (all jobs):
00:33:59.733     READ: bw=47.4MiB/s (49.7MB/s), 47.4MiB/s-47.4MiB/s (49.7MB/s-49.7MB/s), io=285MiB (298MB), run=6003-6003msec
00:33:59.733    WRITE: bw=28.0MiB/s (29.3MB/s), 28.0MiB/s-28.0MiB/s (29.3MB/s-29.3MB/s), io=148MiB (155MB), run=5293-5293msec
00:33:59.733  
00:33:59.733  Disk stats (read/write):
00:33:59.733    nvme0n1: ios=71180/37874, merge=0/0, ticks=476809/230138, in_queue=706947, util=98.66%
00:33:59.992   19:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:33:59.992  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s)
00:33:59.992   19:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:33:59.992   19:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0
00:33:59.992   19:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:33:59.992   19:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:33:59.992   19:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:33:59.992   19:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:33:59.992   19:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0
00:33:59.992   19:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:34:00.251   19:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state
00:34:00.251   19:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state
00:34:00.251   19:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT
00:34:00.251   19:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini
00:34:00.251   19:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup
00:34:00.251   19:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync
00:34:00.251   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:34:00.251   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e
00:34:00.251   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20}
00:34:00.251   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:34:00.251  rmmod nvme_tcp
00:34:00.251  rmmod nvme_fabrics
00:34:00.251  rmmod nvme_keyring
00:34:00.251   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:34:00.251   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e
00:34:00.251   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0
00:34:00.251   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n 126038 ']'
00:34:00.251   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 126038
00:34:00.251   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 126038 ']'
00:34:00.251   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 126038
00:34:00.251    19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname
00:34:00.251   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:34:00.251    19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 126038
00:34:00.510  killing process with pid 126038
00:34:00.511   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:34:00.511   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:34:00.511   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 126038'
00:34:00.511   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 126038
00:34:00.511   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 126038
00:34:00.770   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:34:00.770   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:34:00.770   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:34:00.770   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr
00:34:00.770   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save
00:34:00.770   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore
00:34:00.770   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:34:00.770   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:34:00.770   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:34:00.770   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:34:00.770   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:34:00.770   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:34:00.770   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:34:00.770   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:34:00.770   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:34:00.770   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:34:00.770   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:34:00.770   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:34:00.770   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:34:00.770   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:34:00.770   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:34:01.029   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:34:01.029   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns
00:34:01.029   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:34:01.029   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:34:01.029    19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:34:01.029   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0
00:34:01.029  
00:34:01.029  real	0m20.158s
00:34:01.029  user	1m10.366s
00:34:01.029  sys	0m9.024s
00:34:01.029   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable
00:34:01.029  ************************************
00:34:01.029  END TEST nvmf_target_multipath
00:34:01.029  ************************************
00:34:01.029   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x
00:34:01.029   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode
00:34:01.029   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:34:01.029   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:34:01.029   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:34:01.029  ************************************
00:34:01.029  START TEST nvmf_zcopy
00:34:01.029  ************************************
00:34:01.029   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode
00:34:01.029  * Looking for test storage...
00:34:01.029  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:34:01.029    19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:34:01.029     19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:34:01.029     19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version
00:34:01.289    19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:34:01.289    19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:34:01.289    19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l
00:34:01.289    19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l
00:34:01.289    19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-:
00:34:01.289    19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1
00:34:01.289    19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-:
00:34:01.289    19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2
00:34:01.289    19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<'
00:34:01.289    19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2
00:34:01.289    19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1
00:34:01.289    19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:34:01.289    19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in
00:34:01.289    19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1
00:34:01.289    19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 ))
00:34:01.289    19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:34:01.289     19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1
00:34:01.289     19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1
00:34:01.289     19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:34:01.289     19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1
00:34:01.289    19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1
00:34:01.289     19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2
00:34:01.289     19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2
00:34:01.289     19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:34:01.289     19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2
00:34:01.289    19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2
00:34:01.289    19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:34:01.289    19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:34:01.289    19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0
00:34:01.289    19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:34:01.289    19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:34:01.289  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:01.289  		--rc genhtml_branch_coverage=1
00:34:01.289  		--rc genhtml_function_coverage=1
00:34:01.289  		--rc genhtml_legend=1
00:34:01.289  		--rc geninfo_all_blocks=1
00:34:01.289  		--rc geninfo_unexecuted_blocks=1
00:34:01.289  		
00:34:01.289  		'
00:34:01.289    19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:34:01.289  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:01.289  		--rc genhtml_branch_coverage=1
00:34:01.289  		--rc genhtml_function_coverage=1
00:34:01.290  		--rc genhtml_legend=1
00:34:01.290  		--rc geninfo_all_blocks=1
00:34:01.290  		--rc geninfo_unexecuted_blocks=1
00:34:01.290  		
00:34:01.290  		'
00:34:01.290    19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:34:01.290  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:01.290  		--rc genhtml_branch_coverage=1
00:34:01.290  		--rc genhtml_function_coverage=1
00:34:01.290  		--rc genhtml_legend=1
00:34:01.290  		--rc geninfo_all_blocks=1
00:34:01.290  		--rc geninfo_unexecuted_blocks=1
00:34:01.290  		
00:34:01.290  		'
00:34:01.290    19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:34:01.290  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:01.290  		--rc genhtml_branch_coverage=1
00:34:01.290  		--rc genhtml_function_coverage=1
00:34:01.290  		--rc genhtml_legend=1
00:34:01.290  		--rc geninfo_all_blocks=1
00:34:01.290  		--rc geninfo_unexecuted_blocks=1
00:34:01.290  		
00:34:01.290  		'
00:34:01.290   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:34:01.290     19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s
00:34:01.290    19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:34:01.290    19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:34:01.290    19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:34:01.290    19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:34:01.290    19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:34:01.290    19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:34:01.290    19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:34:01.290    19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:34:01.290    19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:34:01.290     19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:34:01.290    19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:34:01.290    19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:34:01.290    19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:34:01.290    19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:34:01.290    19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:34:01.290    19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:34:01.290    19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:34:01.290     19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob
00:34:01.290     19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:34:01.290     19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:34:01.290     19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:34:01.290      19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:01.290      19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:01.290      19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:01.290      19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH
00:34:01.290      19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:01.290    19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0
00:34:01.290    19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:34:01.290    19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:34:01.290    19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:34:01.290    19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:34:01.290    19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:34:01.290    19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:34:01.290    19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:34:01.290    19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:34:01.290    19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:34:01.290    19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0
00:34:01.290   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit
00:34:01.290   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:34:01.290   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:34:01.290   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs
00:34:01.290   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no
00:34:01.290   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns
00:34:01.290   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:34:01.290   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:34:01.290    19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:34:01.290   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:34:01.290   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:34:01.290   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:34:01.290   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:34:01.290   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:34:01.290   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init
00:34:01.290   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:34:01.290   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:34:01.290   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:34:01.290   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:34:01.290   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:34:01.290   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:34:01.290   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:34:01.290   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:34:01.290   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:34:01.290   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:34:01.290   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:34:01.290   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:34:01.290   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:34:01.290   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:34:01.290   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:34:01.290   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:34:01.290   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:34:01.290  Cannot find device "nvmf_init_br"
00:34:01.290   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@162 -- # true
00:34:01.290   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:34:01.290  Cannot find device "nvmf_init_br2"
00:34:01.290   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@163 -- # true
00:34:01.290   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:34:01.290  Cannot find device "nvmf_tgt_br"
00:34:01.290   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@164 -- # true
00:34:01.290   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:34:01.290  Cannot find device "nvmf_tgt_br2"
00:34:01.291   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@165 -- # true
00:34:01.291   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:34:01.291  Cannot find device "nvmf_init_br"
00:34:01.291   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@166 -- # true
00:34:01.291   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:34:01.291  Cannot find device "nvmf_init_br2"
00:34:01.291   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@167 -- # true
00:34:01.291   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:34:01.291  Cannot find device "nvmf_tgt_br"
00:34:01.291   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@168 -- # true
00:34:01.291   19:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:34:01.291  Cannot find device "nvmf_tgt_br2"
00:34:01.291   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@169 -- # true
00:34:01.291   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:34:01.291  Cannot find device "nvmf_br"
00:34:01.291   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@170 -- # true
00:34:01.291   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:34:01.291  Cannot find device "nvmf_init_if"
00:34:01.291   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@171 -- # true
00:34:01.291   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:34:01.291  Cannot find device "nvmf_init_if2"
00:34:01.291   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@172 -- # true
00:34:01.291   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:34:01.291  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:34:01.291   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@173 -- # true
00:34:01.291   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:34:01.291  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:34:01.291   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@174 -- # true
00:34:01.291   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:34:01.291   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:34:01.291   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:34:01.291   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:34:01.291   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:34:01.291   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:34:01.550   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:34:01.550   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:34:01.550   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:34:01.550   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:34:01.550   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:34:01.550   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:34:01.550   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:34:01.550   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:34:01.550   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:34:01.550   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:34:01.550   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:34:01.550   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:34:01.550   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:34:01.550   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:34:01.550   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:34:01.550   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:34:01.550   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:34:01.550   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:34:01.550   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:34:01.550   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:34:01.550   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:34:01.550   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:34:01.550   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:34:01.550   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:34:01.550   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:34:01.550   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:34:01.550   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:34:01.550  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:34:01.550  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms
00:34:01.550  
00:34:01.550  --- 10.0.0.3 ping statistics ---
00:34:01.550  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:01.550  rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms
00:34:01.550   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:34:01.550  PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:34:01.550  64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms
00:34:01.550  
00:34:01.550  --- 10.0.0.4 ping statistics ---
00:34:01.550  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:01.550  rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms
00:34:01.550   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:34:01.550  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:34:01.550  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms
00:34:01.550  
00:34:01.550  --- 10.0.0.1 ping statistics ---
00:34:01.550  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:01.550  rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms
00:34:01.550   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:34:01.550  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:34:01.550  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms
00:34:01.550  
00:34:01.550  --- 10.0.0.2 ping statistics ---
00:34:01.550  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:01.550  rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms
00:34:01.550   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:34:01.550   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0
00:34:01.550   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:34:01.550   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:34:01.550   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:34:01.551   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:34:01.551   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:34:01.551   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:34:01.551   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:34:01.551   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2
00:34:01.551   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:34:01.551   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable
00:34:01.551   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:34:01.551   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=126651
00:34:01.551   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2
00:34:01.551   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 126651
00:34:01.551   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 126651 ']'
00:34:01.551   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:34:01.551   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100
00:34:01.551   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:34:01.551  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:34:01.551   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable
00:34:01.551   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:34:01.810  [2024-12-13 19:18:33.430196] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:34:01.810  [2024-12-13 19:18:33.431490] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:34:01.810  [2024-12-13 19:18:33.431554] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:34:01.810  [2024-12-13 19:18:33.584754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:01.810  [2024-12-13 19:18:33.621796] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:34:01.810  [2024-12-13 19:18:33.621856] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:34:01.810  [2024-12-13 19:18:33.621871] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:34:01.810  [2024-12-13 19:18:33.621882] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:34:01.810  [2024-12-13 19:18:33.621891] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:34:01.810  [2024-12-13 19:18:33.622331] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:34:02.076  [2024-12-13 19:18:33.719282] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:34:02.076  [2024-12-13 19:18:33.719638] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:34:02.076   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:34:02.076   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0
00:34:02.076   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:34:02.076   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable
00:34:02.076   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:34:02.076   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:34:02.076   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']'
00:34:02.076   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
00:34:02.076   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:02.076   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:34:02.076  [2024-12-13 19:18:33.803195] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:34:02.076   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:02.076   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:34:02.076   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:02.076   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:34:02.076   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:02.076   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:34:02.076   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:02.076   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:34:02.076  [2024-12-13 19:18:33.823350] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:34:02.076   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:02.076   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
00:34:02.076   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:02.076   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:34:02.076   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:02.076   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0
00:34:02.076   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:02.076   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:34:02.076  malloc0
00:34:02.076   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:02.076   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:34:02.076   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:02.076   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:34:02.076   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:02.076   19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
00:34:02.076    19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json
00:34:02.076    19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:34:02.076    19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:34:02.076    19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:34:02.076    19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:34:02.076  {
00:34:02.076    "params": {
00:34:02.076      "name": "Nvme$subsystem",
00:34:02.076      "trtype": "$TEST_TRANSPORT",
00:34:02.076      "traddr": "$NVMF_FIRST_TARGET_IP",
00:34:02.076      "adrfam": "ipv4",
00:34:02.076      "trsvcid": "$NVMF_PORT",
00:34:02.076      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:34:02.076      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:34:02.076      "hdgst": ${hdgst:-false},
00:34:02.076      "ddgst": ${ddgst:-false}
00:34:02.076    },
00:34:02.076    "method": "bdev_nvme_attach_controller"
00:34:02.076  }
00:34:02.076  EOF
00:34:02.076  )")
00:34:02.076     19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat
00:34:02.076    19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq .
00:34:02.076     19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
00:34:02.076     19:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:34:02.076    "params": {
00:34:02.076      "name": "Nvme1",
00:34:02.076      "trtype": "tcp",
00:34:02.076      "traddr": "10.0.0.3",
00:34:02.076      "adrfam": "ipv4",
00:34:02.076      "trsvcid": "4420",
00:34:02.076      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:34:02.076      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:34:02.076      "hdgst": false,
00:34:02.076      "ddgst": false
00:34:02.076    },
00:34:02.076    "method": "bdev_nvme_attach_controller"
00:34:02.076  }'
00:34:02.380  [2024-12-13 19:18:33.928707] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:34:02.380  [2024-12-13 19:18:33.928798] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126689 ]
00:34:02.380  [2024-12-13 19:18:34.084805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:02.380  [2024-12-13 19:18:34.126020] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:34:02.639  Running I/O for 10 seconds...
00:34:04.511       6811.00 IOPS,    53.21 MiB/s
[2024-12-13T19:18:37.713Z]      6914.00 IOPS,    54.02 MiB/s
[2024-12-13T19:18:38.649Z]      6943.67 IOPS,    54.25 MiB/s
[2024-12-13T19:18:39.585Z]      6958.00 IOPS,    54.36 MiB/s
[2024-12-13T19:18:40.521Z]      6985.20 IOPS,    54.57 MiB/s
[2024-12-13T19:18:41.458Z]      6981.50 IOPS,    54.54 MiB/s
[2024-12-13T19:18:42.409Z]      6988.86 IOPS,    54.60 MiB/s
[2024-12-13T19:18:43.476Z]      6995.25 IOPS,    54.65 MiB/s
[2024-12-13T19:18:44.412Z]      7010.22 IOPS,    54.77 MiB/s
[2024-12-13T19:18:44.412Z]      7015.80 IOPS,    54.81 MiB/s
00:34:12.588                                                                                                  Latency(us)
00:34:12.588  
[2024-12-13T19:18:44.412Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:34:12.588  Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:34:12.588  	 Verification LBA range: start 0x0 length 0x1000
00:34:12.588  	 Nvme1n1             :      10.01    7020.08      54.84       0.00     0.00   18176.86    1072.41   32887.16
00:34:12.588  
[2024-12-13T19:18:44.412Z]  ===================================================================================================================
00:34:12.588  
[2024-12-13T19:18:44.412Z]  Total                       :               7020.08      54.84       0.00     0.00   18176.86    1072.41   32887.16
00:34:12.847   19:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=126796
00:34:12.847   19:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:34:12.847   19:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:34:12.847   19:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:34:12.847    19:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:34:12.847    19:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:34:12.847    19:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:34:12.847    19:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:34:12.847    19:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:34:12.847  {
00:34:12.847    "params": {
00:34:12.847      "name": "Nvme$subsystem",
00:34:12.847      "trtype": "$TEST_TRANSPORT",
00:34:12.847      "traddr": "$NVMF_FIRST_TARGET_IP",
00:34:12.847      "adrfam": "ipv4",
00:34:12.847      "trsvcid": "$NVMF_PORT",
00:34:12.847      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:34:12.847      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:34:12.847      "hdgst": ${hdgst:-false},
00:34:12.847      "ddgst": ${ddgst:-false}
00:34:12.847    },
00:34:12.847    "method": "bdev_nvme_attach_controller"
00:34:12.847  }
00:34:12.847  EOF
00:34:12.847  )")
00:34:12.847  [2024-12-13 19:18:44.574918] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:12.847  [2024-12-13 19:18:44.574973] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:12.847     19:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat
00:34:12.847  2024/12/13 19:18:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:12.847    19:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq .
00:34:12.847     19:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
00:34:12.847     19:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:34:12.847    "params": {
00:34:12.847      "name": "Nvme1",
00:34:12.847      "trtype": "tcp",
00:34:12.847      "traddr": "10.0.0.3",
00:34:12.847      "adrfam": "ipv4",
00:34:12.847      "trsvcid": "4420",
00:34:12.847      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:34:12.847      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:34:12.847      "hdgst": false,
00:34:12.847      "ddgst": false
00:34:12.847    },
00:34:12.847    "method": "bdev_nvme_attach_controller"
00:34:12.847  }'
00:34:12.847  [2024-12-13 19:18:44.586900] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:12.847  [2024-12-13 19:18:44.586939] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:12.847  2024/12/13 19:18:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:12.847  [2024-12-13 19:18:44.598865] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:12.847  [2024-12-13 19:18:44.598905] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:12.847  2024/12/13 19:18:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:12.847  [2024-12-13 19:18:44.610868] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:12.847  [2024-12-13 19:18:44.610906] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:12.847  2024/12/13 19:18:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:12.847  [2024-12-13 19:18:44.622850] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:12.847  [2024-12-13 19:18:44.622888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:12.847  2024/12/13 19:18:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:12.847  [2024-12-13 19:18:44.634838] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:12.847  [2024-12-13 19:18:44.634876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:12.847  [2024-12-13 19:18:44.638332] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:34:12.847  [2024-12-13 19:18:44.638428] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126796 ]
00:34:12.847  2024/12/13 19:18:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:12.847  [2024-12-13 19:18:44.646865] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:12.847  [2024-12-13 19:18:44.646900] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:12.847  2024/12/13 19:18:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:12.847  [2024-12-13 19:18:44.658845] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:12.847  [2024-12-13 19:18:44.658881] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:12.847  2024/12/13 19:18:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:13.107  [2024-12-13 19:18:44.670832] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:13.107  [2024-12-13 19:18:44.670853] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:13.107  2024/12/13 19:18:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:13.107  [2024-12-13 19:18:44.682845] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:13.107  [2024-12-13 19:18:44.682866] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:13.107  2024/12/13 19:18:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:13.108  [2024-12-13 19:18:44.690855] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:13.108  [2024-12-13 19:18:44.690892] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:13.108  2024/12/13 19:18:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:13.108  [2024-12-13 19:18:44.702864] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:13.108  [2024-12-13 19:18:44.702904] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:13.108  2024/12/13 19:18:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:13.108  [2024-12-13 19:18:44.714845] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:13.108  [2024-12-13 19:18:44.714883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:13.108  2024/12/13 19:18:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:13.108  [2024-12-13 19:18:44.722887] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:13.108  [2024-12-13 19:18:44.722924] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:13.108  2024/12/13 19:18:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:13.108  [2024-12-13 19:18:44.734862] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:13.108  [2024-12-13 19:18:44.734882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:13.108  2024/12/13 19:18:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:13.108  [2024-12-13 19:18:44.746893] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:13.108  [2024-12-13 19:18:44.746931] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:13.108  2024/12/13 19:18:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:13.108  [2024-12-13 19:18:44.758865] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:13.108  [2024-12-13 19:18:44.758905] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:13.108  2024/12/13 19:18:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:13.108  [2024-12-13 19:18:44.770859] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:13.108  [2024-12-13 19:18:44.770896] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:13.108  2024/12/13 19:18:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:13.108  [2024-12-13 19:18:44.782871] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:13.108  [2024-12-13 19:18:44.782911] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:13.108  [2024-12-13 19:18:44.783396] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:13.108  2024/12/13 19:18:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:13.108  [2024-12-13 19:18:44.794849] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:13.108  [2024-12-13 19:18:44.794888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:13.108  2024/12/13 19:18:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:13.108  [2024-12-13 19:18:44.806841] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:13.108  [2024-12-13 19:18:44.806881] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:13.108  2024/12/13 19:18:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:13.108  [2024-12-13 19:18:44.816848] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:34:13.108  [2024-12-13 19:18:44.818832] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:13.108  [2024-12-13 19:18:44.818869] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:13.108  2024/12/13 19:18:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:13.108  [2024-12-13 19:18:44.830854] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:13.108  [2024-12-13 19:18:44.830894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:13.108  2024/12/13 19:18:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:13.108  [2024-12-13 19:18:44.842831] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:13.108  [2024-12-13 19:18:44.842868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:13.108  2024/12/13 19:18:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:13.108  [2024-12-13 19:18:44.854835] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:13.108  [2024-12-13 19:18:44.854872] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:13.108  2024/12/13 19:18:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:13.108  [2024-12-13 19:18:44.866835] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:13.108  [2024-12-13 19:18:44.866873] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:13.108  2024/12/13 19:18:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:13.108  [2024-12-13 19:18:44.878836] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:13.108  [2024-12-13 19:18:44.878873] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:13.108  2024/12/13 19:18:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:13.108  [2024-12-13 19:18:44.890846] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:13.108  [2024-12-13 19:18:44.890883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:13.108  2024/12/13 19:18:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:13.108  [2024-12-13 19:18:44.902841] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:13.108  [2024-12-13 19:18:44.902881] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:13.108  2024/12/13 19:18:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:13.108  [2024-12-13 19:18:44.914846] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:13.108  [2024-12-13 19:18:44.914884] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:13.108  2024/12/13 19:18:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:13.108  [2024-12-13 19:18:44.926850] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:13.108  [2024-12-13 19:18:44.926887] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:13.371  2024/12/13 19:18:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:13.371  [2024-12-13 19:18:44.938833] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:13.371  [2024-12-13 19:18:44.938872] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:13.371  2024/12/13 19:18:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:13.371  [2024-12-13 19:18:44.950831] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:13.371  [2024-12-13 19:18:44.950868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:13.371  2024/12/13 19:18:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:13.371  [2024-12-13 19:18:44.962863] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:13.371  [2024-12-13 19:18:44.962907] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:13.371  2024/12/13 19:18:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:13.371  [2024-12-13 19:18:44.974861] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:13.371  [2024-12-13 19:18:44.974903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:13.371  2024/12/13 19:18:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:13.371  [2024-12-13 19:18:44.982837] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:13.371  [2024-12-13 19:18:44.982882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:13.371  2024/12/13 19:18:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:13.371  [2024-12-13 19:18:44.994864] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:13.371  [2024-12-13 19:18:44.994908] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:13.371  2024/12/13 19:18:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:13.371  [2024-12-13 19:18:45.002868] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:13.371  [2024-12-13 19:18:45.002910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:13.371  2024/12/13 19:18:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:13.371  [2024-12-13 19:18:45.014865] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:13.371  [2024-12-13 19:18:45.014911] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:13.371  2024/12/13 19:18:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:13.371  [2024-12-13 19:18:45.023035] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:13.371  [2024-12-13 19:18:45.023083] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:13.371  Running I/O for 5 seconds...
00:34:13.371  2024/12/13 19:18:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:13.371  [2024-12-13 19:18:45.040258] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:13.371  [2024-12-13 19:18:45.040304] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:13.371  2024/12/13 19:18:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:13.371  [2024-12-13 19:18:45.057737] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:13.371  [2024-12-13 19:18:45.057776] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:13.371  2024/12/13 19:18:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:13.371  [2024-12-13 19:18:45.071722] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:13.371  [2024-12-13 19:18:45.071752] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:13.371  2024/12/13 19:18:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:13.371  [2024-12-13 19:18:45.089915] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:13.371  [2024-12-13 19:18:45.089958] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:13.371  2024/12/13 19:18:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:13.372  [2024-12-13 19:18:45.101213] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:13.372  [2024-12-13 19:18:45.101267] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:13.372  2024/12/13 19:18:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:13.372  [2024-12-13 19:18:45.117657] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:13.372  [2024-12-13 19:18:45.117686] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:13.372  2024/12/13 19:18:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:13.372  [2024-12-13 19:18:45.133038] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:13.372  [2024-12-13 19:18:45.133067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:13.372  2024/12/13 19:18:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:13.372  [2024-12-13 19:18:45.148994] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:13.372  [2024-12-13 19:18:45.149040] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:13.372  2024/12/13 19:18:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:13.372  [2024-12-13 19:18:45.163785] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:13.372  [2024-12-13 19:18:45.163814] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:13.372  2024/12/13 19:18:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:13.372  [2024-12-13 19:18:45.182175] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:13.372  [2024-12-13 19:18:45.182205] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:13.372  2024/12/13 19:18:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:13.632  [2024-12-13 19:18:45.194369] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:13.632  [2024-12-13 19:18:45.194413] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:13.632  2024/12/13 19:18:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:13.632  [2024-12-13 19:18:45.208007] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:13.632  [2024-12-13 19:18:45.208037] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:13.632  2024/12/13 19:18:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:13.632  [2024-12-13 19:18:45.225422] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:13.632  [2024-12-13 19:18:45.225452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:13.632  2024/12/13 19:18:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:13.632  [2024-12-13 19:18:45.238415] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:13.632  [2024-12-13 19:18:45.238459] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:13.632  2024/12/13 19:18:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:13.632  [2024-12-13 19:18:45.251388] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:13.632  [2024-12-13 19:18:45.251417] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:13.632  2024/12/13 19:18:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:13.632  [2024-12-13 19:18:45.269996] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:13.632  [2024-12-13 19:18:45.270023] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:13.632  2024/12/13 19:18:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:13.632  [2024-12-13 19:18:45.284264] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:13.632  [2024-12-13 19:18:45.284309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:13.632  2024/12/13 19:18:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:13.632  [2024-12-13 19:18:45.292485] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:13.632  [2024-12-13 19:18:45.292530] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:13.632  2024/12/13 19:18:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:13.632  [2024-12-13 19:18:45.306298] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:13.632  [2024-12-13 19:18:45.306327] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:13.632  2024/12/13 19:18:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:13.632  [2024-12-13 19:18:45.319529] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:13.632  [2024-12-13 19:18:45.319558] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:13.632  2024/12/13 19:18:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:13.632  [2024-12-13 19:18:45.328468] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:13.632  [2024-12-13 19:18:45.328513] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:13.632  2024/12/13 19:18:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:13.632  [2024-12-13 19:18:45.342647] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:13.632  [2024-12-13 19:18:45.342675] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:13.632  2024/12/13 19:18:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:13.632  [2024-12-13 19:18:45.355467] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:13.632  [2024-12-13 19:18:45.355497] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:13.632  2024/12/13 19:18:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:13.632  [2024-12-13 19:18:45.373656] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:13.632  [2024-12-13 19:18:45.373687] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:13.632  2024/12/13 19:18:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:13.632  [2024-12-13 19:18:45.386249] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:13.632  [2024-12-13 19:18:45.386305] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:13.632  2024/12/13 19:18:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:13.632  [2024-12-13 19:18:45.400077] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:13.632  [2024-12-13 19:18:45.400107] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:13.632  2024/12/13 19:18:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:13.632  [2024-12-13 19:18:45.417991] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:13.632  [2024-12-13 19:18:45.418036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:13.632  2024/12/13 19:18:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:13.632  [2024-12-13 19:18:45.432972] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:13.632  [2024-12-13 19:18:45.433018] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:13.632  2024/12/13 19:18:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:13.632  [2024-12-13 19:18:45.450317] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:13.632  [2024-12-13 19:18:45.450370] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:13.632  2024/12/13 19:18:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:13.891  [2024-12-13 19:18:45.460399] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:13.891  [2024-12-13 19:18:45.460442] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:13.891  2024/12/13 19:18:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:13.891  [2024-12-13 19:18:45.476720] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:13.891  [2024-12-13 19:18:45.476749] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:13.891  2024/12/13 19:18:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:13.891  [2024-12-13 19:18:45.494297] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:13.891  [2024-12-13 19:18:45.494325] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:13.891  2024/12/13 19:18:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:13.891  [2024-12-13 19:18:45.505577] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:13.891  [2024-12-13 19:18:45.505621] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:13.892  2024/12/13 19:18:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:13.892  [2024-12-13 19:18:45.521178] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:13.892  [2024-12-13 19:18:45.521207] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:13.892  2024/12/13 19:18:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:13.892  [2024-12-13 19:18:45.537817] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:13.892  [2024-12-13 19:18:45.537862] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:13.892  2024/12/13 19:18:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:13.892  [2024-12-13 19:18:45.550778] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:13.892  [2024-12-13 19:18:45.550821] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:13.892  2024/12/13 19:18:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:13.892  [2024-12-13 19:18:45.559546] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:13.892  [2024-12-13 19:18:45.559589] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:13.892  2024/12/13 19:18:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:13.892  [2024-12-13 19:18:45.574318] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:13.892  [2024-12-13 19:18:45.574345] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:13.892  2024/12/13 19:18:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:13.892  [2024-12-13 19:18:45.587474] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:13.892  [2024-12-13 19:18:45.587502] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:13.892  2024/12/13 19:18:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:13.892  [2024-12-13 19:18:45.605999] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:13.892  [2024-12-13 19:18:45.606044] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:13.892  2024/12/13 19:18:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:13.892  [2024-12-13 19:18:45.617801] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:13.892  [2024-12-13 19:18:45.617831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:13.892  2024/12/13 19:18:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:13.892  [2024-12-13 19:18:45.634209] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:13.892  [2024-12-13 19:18:45.634249] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:13.892  2024/12/13 19:18:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:13.892  [2024-12-13 19:18:45.646144] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:13.892  [2024-12-13 19:18:45.646189] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:13.892  2024/12/13 19:18:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:13.892  [2024-12-13 19:18:45.659958] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:13.892  [2024-12-13 19:18:45.659989] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:13.892  2024/12/13 19:18:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:13.892  [2024-12-13 19:18:45.677840] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:13.892  [2024-12-13 19:18:45.677868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:13.892  2024/12/13 19:18:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:13.892  [2024-12-13 19:18:45.691050] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:13.892  [2024-12-13 19:18:45.691095] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:13.892  2024/12/13 19:18:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:13.892  [2024-12-13 19:18:45.709202] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:13.892  [2024-12-13 19:18:45.709243] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:13.892  2024/12/13 19:18:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:14.151  [2024-12-13 19:18:45.723731] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:14.151  [2024-12-13 19:18:45.723775] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:14.151  2024/12/13 19:18:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:14.151  [2024-12-13 19:18:45.741543] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:14.151  [2024-12-13 19:18:45.741587] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:14.151  2024/12/13 19:18:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:14.151  [2024-12-13 19:18:45.756547] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:14.151  [2024-12-13 19:18:45.756592] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:14.151  2024/12/13 19:18:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:14.151  [2024-12-13 19:18:45.772833] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:14.151  [2024-12-13 19:18:45.772863] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:14.151  2024/12/13 19:18:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:14.151  [2024-12-13 19:18:45.789922] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:14.151  [2024-12-13 19:18:45.789967] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:14.151  2024/12/13 19:18:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:14.151  [2024-12-13 19:18:45.805108] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:14.151  [2024-12-13 19:18:45.805135] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:14.151  2024/12/13 19:18:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:14.151  [2024-12-13 19:18:45.821556] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:14.151  [2024-12-13 19:18:45.821603] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:14.151  2024/12/13 19:18:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:14.151  [2024-12-13 19:18:45.836395] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:14.151  [2024-12-13 19:18:45.836440] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:14.151  2024/12/13 19:18:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:14.151  [2024-12-13 19:18:45.854075] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:14.151  [2024-12-13 19:18:45.854120] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:14.151  2024/12/13 19:18:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:14.151  [2024-12-13 19:18:45.867391] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:14.151  [2024-12-13 19:18:45.867436] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:14.151  2024/12/13 19:18:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:14.151  [2024-12-13 19:18:45.875911] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:14.151  [2024-12-13 19:18:45.875941] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:14.151  2024/12/13 19:18:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:14.151  [2024-12-13 19:18:45.890652] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:14.151  [2024-12-13 19:18:45.890681] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:14.151  2024/12/13 19:18:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:14.151  [2024-12-13 19:18:45.903208] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:14.151  [2024-12-13 19:18:45.903250] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:14.151  2024/12/13 19:18:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:14.152  [2024-12-13 19:18:45.922039] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:14.152  [2024-12-13 19:18:45.922069] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:14.152  2024/12/13 19:18:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:14.152  [2024-12-13 19:18:45.934959] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:14.152  [2024-12-13 19:18:45.935002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:14.152  2024/12/13 19:18:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:14.152  [2024-12-13 19:18:45.943563] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:14.152  [2024-12-13 19:18:45.943591] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:14.152  2024/12/13 19:18:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:14.152  [2024-12-13 19:18:45.959324] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:14.152  [2024-12-13 19:18:45.959352] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:14.152  2024/12/13 19:18:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:14.411  [2024-12-13 19:18:45.977257] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:14.411  [2024-12-13 19:18:45.977285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:14.411  2024/12/13 19:18:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:14.411  [2024-12-13 19:18:45.991957] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:14.411  [2024-12-13 19:18:45.991987] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:14.411  2024/12/13 19:18:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:14.411  [2024-12-13 19:18:46.010230] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:14.411  [2024-12-13 19:18:46.010258] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:14.411  2024/12/13 19:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:14.411  [2024-12-13 19:18:46.022546] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:14.411  [2024-12-13 19:18:46.022590] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:14.411  2024/12/13 19:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:14.411      13143.00 IOPS,   102.68 MiB/s
00:34:14.411  [2024-12-13 19:18:46.031617] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:14.411  [2024-12-13 19:18:46.031647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:14.411  2024/12/13 19:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:14.411  [2024-12-13 19:18:46.046301] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:14.411  [2024-12-13 19:18:46.046330] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:14.411  2024/12/13 19:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:14.411  [2024-12-13 19:18:46.059132] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:14.411  [2024-12-13 19:18:46.059177] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:14.411  2024/12/13 19:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:14.411  [2024-12-13 19:18:46.068817] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:14.411  [2024-12-13 19:18:46.068863] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:14.411  2024/12/13 19:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:14.411  [2024-12-13 19:18:46.083674] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:14.411  [2024-12-13 19:18:46.083719] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:14.411  2024/12/13 19:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:14.411  [2024-12-13 19:18:46.103170] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:14.411  [2024-12-13 19:18:46.103217] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:14.412  2024/12/13 19:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:14.412  [2024-12-13 19:18:46.122290] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:14.412  [2024-12-13 19:18:46.122335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:14.412  2024/12/13 19:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:14.412  [2024-12-13 19:18:46.132456] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:14.412  [2024-12-13 19:18:46.132513] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:14.412  2024/12/13 19:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:14.412  [2024-12-13 19:18:46.146612] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:14.412  [2024-12-13 19:18:46.146657] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:14.412  2024/12/13 19:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:14.412  [2024-12-13 19:18:46.155647] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:14.412  [2024-12-13 19:18:46.155691] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:14.412  2024/12/13 19:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:14.412  [2024-12-13 19:18:46.171725] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:14.412  [2024-12-13 19:18:46.171771] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:14.412  2024/12/13 19:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:14.412  [2024-12-13 19:18:46.190192] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:14.412  [2024-12-13 19:18:46.190249] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:14.412  2024/12/13 19:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:14.412  [2024-12-13 19:18:46.202095] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:14.412  [2024-12-13 19:18:46.202142] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:14.412  2024/12/13 19:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:14.412  [2024-12-13 19:18:46.216170] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:14.412  [2024-12-13 19:18:46.216216] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:14.412  2024/12/13 19:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:14.671  [2024-12-13 19:18:46.234711] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:14.671  [2024-12-13 19:18:46.234757] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:14.671  2024/12/13 19:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:14.671  [2024-12-13 19:18:46.244740] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:14.671  [2024-12-13 19:18:46.244786] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:14.671  2024/12/13 19:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:14.671  [2024-12-13 19:18:46.260475] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:14.671  [2024-12-13 19:18:46.260521] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:14.671  2024/12/13 19:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:14.671  [2024-12-13 19:18:46.278054] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:14.671  [2024-12-13 19:18:46.278083] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:14.671  2024/12/13 19:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:14.671  [2024-12-13 19:18:46.291835] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:14.671  [2024-12-13 19:18:46.291881] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:14.671  2024/12/13 19:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:14.671  [2024-12-13 19:18:46.310080] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:14.671  [2024-12-13 19:18:46.310126] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:14.671  2024/12/13 19:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:14.671  [2024-12-13 19:18:46.323114] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:14.671  [2024-12-13 19:18:46.323143] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:14.671  2024/12/13 19:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:14.671  [2024-12-13 19:18:46.341946] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:14.671  [2024-12-13 19:18:46.341993] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:14.671  2024/12/13 19:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:14.671  [2024-12-13 19:18:46.355885] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:14.671  [2024-12-13 19:18:46.355932] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:14.671  2024/12/13 19:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:14.671  [2024-12-13 19:18:46.374869] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:14.671  [2024-12-13 19:18:46.374914] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:14.671  2024/12/13 19:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:14.671  [2024-12-13 19:18:46.384910] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:14.671  [2024-12-13 19:18:46.384953] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:14.671  2024/12/13 19:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:14.671  [2024-12-13 19:18:46.398869] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:14.671  [2024-12-13 19:18:46.398897] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:14.671  2024/12/13 19:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:14.671  [2024-12-13 19:18:46.408252] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:14.671  [2024-12-13 19:18:46.408296] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:14.671  2024/12/13 19:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:14.671  [2024-12-13 19:18:46.423081] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:14.671  [2024-12-13 19:18:46.423110] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:14.671  2024/12/13 19:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:14.671  [2024-12-13 19:18:46.431790] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:14.671  [2024-12-13 19:18:46.431834] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:14.671  2024/12/13 19:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:14.671  [2024-12-13 19:18:46.446697] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:14.671  [2024-12-13 19:18:46.446728] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:14.671  2024/12/13 19:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:14.671  [2024-12-13 19:18:46.455784] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:14.671  [2024-12-13 19:18:46.455828] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:14.672  2024/12/13 19:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:14.672  [2024-12-13 19:18:46.466236] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:14.672  [2024-12-13 19:18:46.466279] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:14.672  2024/12/13 19:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:14.672  [2024-12-13 19:18:46.477999] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:14.672  [2024-12-13 19:18:46.478058] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:14.672  2024/12/13 19:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:14.672  [2024-12-13 19:18:46.492298] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:14.672  [2024-12-13 19:18:46.492342] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:14.931  2024/12/13 19:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:14.931  [2024-12-13 19:18:46.509965] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:14.931  [2024-12-13 19:18:46.509995] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:14.931  2024/12/13 19:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:14.931  [2024-12-13 19:18:46.523706] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:14.931  [2024-12-13 19:18:46.523751] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:14.931  2024/12/13 19:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:14.931  [2024-12-13 19:18:46.542086] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:14.931  [2024-12-13 19:18:46.542115] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:14.931  2024/12/13 19:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:14.931  [2024-12-13 19:18:46.554165] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:14.931  [2024-12-13 19:18:46.554195] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:14.931  2024/12/13 19:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:14.931  [2024-12-13 19:18:46.567877] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:14.931  [2024-12-13 19:18:46.567906] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:14.931  2024/12/13 19:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:14.931  [2024-12-13 19:18:46.586277] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:14.931  [2024-12-13 19:18:46.586320] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:14.931  2024/12/13 19:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:14.931  [2024-12-13 19:18:46.597362] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:14.931  [2024-12-13 19:18:46.597404] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:14.931  2024/12/13 19:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:14.931  [2024-12-13 19:18:46.612833] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:14.931  [2024-12-13 19:18:46.612877] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:14.931  2024/12/13 19:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:14.931  [2024-12-13 19:18:46.629631] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:14.931  [2024-12-13 19:18:46.629660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:14.931  2024/12/13 19:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:14.932  [2024-12-13 19:18:46.644368] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:14.932  [2024-12-13 19:18:46.644398] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:14.932  2024/12/13 19:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:14.932  [2024-12-13 19:18:46.662068] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:14.932  [2024-12-13 19:18:46.662097] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:14.932  2024/12/13 19:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:14.932  [2024-12-13 19:18:46.675136] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:14.932  [2024-12-13 19:18:46.675181] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:14.932  2024/12/13 19:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:14.932  [2024-12-13 19:18:46.693703] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:14.932  [2024-12-13 19:18:46.693754] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:14.932  2024/12/13 19:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:14.932  [2024-12-13 19:18:46.707520] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:14.932  [2024-12-13 19:18:46.707565] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:14.932  2024/12/13 19:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:14.932  [2024-12-13 19:18:46.726152] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:14.932  [2024-12-13 19:18:46.726182] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:14.932  2024/12/13 19:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:14.932  [2024-12-13 19:18:46.739075] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:14.932  [2024-12-13 19:18:46.739104] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:14.932  2024/12/13 19:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:15.191  [2024-12-13 19:18:46.758672] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:15.191  [2024-12-13 19:18:46.758703] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:15.191  2024/12/13 19:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:15.191  [2024-12-13 19:18:46.768750] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:15.191  [2024-12-13 19:18:46.768795] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:15.191  2024/12/13 19:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:15.191  [2024-12-13 19:18:46.785174] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:15.191  [2024-12-13 19:18:46.785202] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:15.191  2024/12/13 19:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:15.191  [2024-12-13 19:18:46.799870] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:15.191  [2024-12-13 19:18:46.799914] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:15.191  2024/12/13 19:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:15.191  [2024-12-13 19:18:46.818437] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:15.191  [2024-12-13 19:18:46.818466] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:15.191  2024/12/13 19:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:15.191  [2024-12-13 19:18:46.829144] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:15.191  [2024-12-13 19:18:46.829187] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:15.192  2024/12/13 19:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:15.192  [2024-12-13 19:18:46.844784] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:15.192  [2024-12-13 19:18:46.844813] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:15.192  2024/12/13 19:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:15.192  [2024-12-13 19:18:46.861858] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:15.192  [2024-12-13 19:18:46.861888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:15.192  2024/12/13 19:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:15.192  [2024-12-13 19:18:46.877241] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:15.192  [2024-12-13 19:18:46.877284] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:15.192  2024/12/13 19:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:15.192  [2024-12-13 19:18:46.893524] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:15.192  [2024-12-13 19:18:46.893553] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:15.192  2024/12/13 19:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:15.192  [2024-12-13 19:18:46.908103] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:15.192  [2024-12-13 19:18:46.908148] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:15.192  2024/12/13 19:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:15.192  [2024-12-13 19:18:46.916638] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:15.192  [2024-12-13 19:18:46.916668] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:15.192  2024/12/13 19:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:15.192  [2024-12-13 19:18:46.930546] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:15.192  [2024-12-13 19:18:46.930574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:15.192  2024/12/13 19:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:15.192  [2024-12-13 19:18:46.942455] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:15.192  [2024-12-13 19:18:46.942485] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:15.192  2024/12/13 19:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:15.192  [2024-12-13 19:18:46.956210] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:15.192  [2024-12-13 19:18:46.956254] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:15.192  2024/12/13 19:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:15.192  [2024-12-13 19:18:46.974457] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:15.192  [2024-12-13 19:18:46.974488] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:15.192  2024/12/13 19:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:15.192  [2024-12-13 19:18:46.986013] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:15.192  [2024-12-13 19:18:46.986058] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:15.192  2024/12/13 19:18:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:15.192  [2024-12-13 19:18:46.999840] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:15.192  [2024-12-13 19:18:46.999871] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:15.192  2024/12/13 19:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:15.452  [2024-12-13 19:18:47.018690] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:15.452  [2024-12-13 19:18:47.018720] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:15.452  2024/12/13 19:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:15.452      13079.50 IOPS,   102.18 MiB/s
00:34:15.452  [2024-12-13 19:18:47.030474] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:15.452  [2024-12-13 19:18:47.030518] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:15.452  2024/12/13 19:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:15.452  [2024-12-13 19:18:47.043137] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:15.452  [2024-12-13 19:18:47.043166] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:15.452  2024/12/13 19:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:15.452  [2024-12-13 19:18:47.061419] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:15.452  [2024-12-13 19:18:47.061447] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:15.452  2024/12/13 19:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:15.452  [2024-12-13 19:18:47.075923] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:15.452  [2024-12-13 19:18:47.075969] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:15.452  2024/12/13 19:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:15.452  [2024-12-13 19:18:47.093953] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:15.452  [2024-12-13 19:18:47.093984] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:15.452  2024/12/13 19:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:15.452  [2024-12-13 19:18:47.106747] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:15.452  [2024-12-13 19:18:47.106797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:15.452  2024/12/13 19:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:15.452  [2024-12-13 19:18:47.115511] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:15.452  [2024-12-13 19:18:47.115555] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:15.452  2024/12/13 19:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:15.452  [2024-12-13 19:18:47.132120] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:15.452  [2024-12-13 19:18:47.132150] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:15.452  2024/12/13 19:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:15.452  [2024-12-13 19:18:47.140655] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:15.452  [2024-12-13 19:18:47.140699] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:15.452  2024/12/13 19:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:15.452  [2024-12-13 19:18:47.154233] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:15.452  [2024-12-13 19:18:47.154260] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:15.452  2024/12/13 19:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:15.452  [2024-12-13 19:18:47.167597] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:15.452  [2024-12-13 19:18:47.167626] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:15.452  2024/12/13 19:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:15.452  [2024-12-13 19:18:47.186090] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:15.452  [2024-12-13 19:18:47.186120] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:15.452  2024/12/13 19:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:15.452  [2024-12-13 19:18:47.198347] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:15.452  [2024-12-13 19:18:47.198392] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:15.452  2024/12/13 19:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:15.452  [2024-12-13 19:18:47.210457] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:15.452  [2024-12-13 19:18:47.210502] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:15.452  2024/12/13 19:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:15.452  [2024-12-13 19:18:47.224251] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:15.452  [2024-12-13 19:18:47.224295] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:15.452  2024/12/13 19:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:15.452  [2024-12-13 19:18:47.242329] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:15.452  [2024-12-13 19:18:47.242358] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:15.453  2024/12/13 19:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:15.453  [2024-12-13 19:18:47.253306] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:15.453  [2024-12-13 19:18:47.253349] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:15.453  2024/12/13 19:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:15.453  [2024-12-13 19:18:47.269330] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:15.453  [2024-12-13 19:18:47.269358] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:15.453  2024/12/13 19:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:15.711  [2024-12-13 19:18:47.283859] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:15.711  [2024-12-13 19:18:47.283889] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:15.712  2024/12/13 19:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:15.712  [2024-12-13 19:18:47.301971] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:15.712  [2024-12-13 19:18:47.302015] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:15.712  2024/12/13 19:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:15.712  [2024-12-13 19:18:47.315607] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:15.712  [2024-12-13 19:18:47.315653] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:15.712  2024/12/13 19:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:15.712  [2024-12-13 19:18:47.324547] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:15.712  [2024-12-13 19:18:47.324590] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:15.712  2024/12/13 19:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:15.712  [2024-12-13 19:18:47.338048] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:15.712  [2024-12-13 19:18:47.338077] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:15.712  2024/12/13 19:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:15.712  [2024-12-13 19:18:47.353664] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:15.712  [2024-12-13 19:18:47.353693] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:15.712  2024/12/13 19:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:15.712  [2024-12-13 19:18:47.368551] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:15.712  [2024-12-13 19:18:47.368596] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:15.712  2024/12/13 19:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:15.712  [2024-12-13 19:18:47.386619] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:15.712  [2024-12-13 19:18:47.386665] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:15.712  2024/12/13 19:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:15.712  [2024-12-13 19:18:47.397327] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:15.712  [2024-12-13 19:18:47.397372] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:15.712  2024/12/13 19:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:15.712  [2024-12-13 19:18:47.410142] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:15.712  [2024-12-13 19:18:47.410204] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:15.712  2024/12/13 19:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:15.712  [2024-12-13 19:18:47.420037] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:15.712  [2024-12-13 19:18:47.420067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:15.712  2024/12/13 19:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:15.712  [2024-12-13 19:18:47.435248] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:15.712  [2024-12-13 19:18:47.435293] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:15.712  2024/12/13 19:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:15.712  [2024-12-13 19:18:47.454900] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:15.712  [2024-12-13 19:18:47.454944] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:15.712  2024/12/13 19:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:15.712  [2024-12-13 19:18:47.464894] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:15.712  [2024-12-13 19:18:47.464938] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:15.712  2024/12/13 19:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:15.712  [2024-12-13 19:18:47.479158] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:15.712  [2024-12-13 19:18:47.479203] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:15.712  2024/12/13 19:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:15.712  [2024-12-13 19:18:47.498204] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:15.712  [2024-12-13 19:18:47.498262] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:15.712  2024/12/13 19:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:15.712  [2024-12-13 19:18:47.512010] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:15.712  [2024-12-13 19:18:47.512041] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:15.712  2024/12/13 19:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:15.712  [2024-12-13 19:18:47.529605] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:15.712  [2024-12-13 19:18:47.529651] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:15.712  2024/12/13 19:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:15.971  [2024-12-13 19:18:47.544057] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:15.971  [2024-12-13 19:18:47.544103] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:15.972  2024/12/13 19:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:15.972  [2024-12-13 19:18:47.562078] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:15.972  [2024-12-13 19:18:47.562140] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:15.972  2024/12/13 19:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:15.972  [2024-12-13 19:18:47.572977] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:15.972  [2024-12-13 19:18:47.573022] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:15.972  2024/12/13 19:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:15.972  [2024-12-13 19:18:47.586742] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:15.972  [2024-12-13 19:18:47.586791] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:15.972  2024/12/13 19:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:15.972  [2024-12-13 19:18:47.596413] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:15.972  [2024-12-13 19:18:47.596443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:15.972  2024/12/13 19:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:15.972  [2024-12-13 19:18:47.611367] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:15.972  [2024-12-13 19:18:47.611413] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:15.972  2024/12/13 19:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:15.972  [2024-12-13 19:18:47.629957] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:15.972  [2024-12-13 19:18:47.629991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:15.972  2024/12/13 19:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:15.972  [2024-12-13 19:18:47.643390] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:15.972  [2024-12-13 19:18:47.643437] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:15.972  2024/12/13 19:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:15.972  [2024-12-13 19:18:47.662843] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:15.972  [2024-12-13 19:18:47.662888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:15.972  2024/12/13 19:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:15.972  [2024-12-13 19:18:47.672895] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:15.972  [2024-12-13 19:18:47.672940] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:15.972  2024/12/13 19:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:15.972  [2024-12-13 19:18:47.690589] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:15.972  [2024-12-13 19:18:47.690632] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:15.972  2024/12/13 19:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:15.972  [2024-12-13 19:18:47.701967] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:15.972  [2024-12-13 19:18:47.701997] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:15.972  2024/12/13 19:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:15.972  [2024-12-13 19:18:47.717763] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:15.972  [2024-12-13 19:18:47.717812] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:15.972  2024/12/13 19:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:15.972  [2024-12-13 19:18:47.732002] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:15.972  [2024-12-13 19:18:47.732047] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:15.972  2024/12/13 19:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:15.972  [2024-12-13 19:18:47.750024] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:15.972  [2024-12-13 19:18:47.750053] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:15.972  2024/12/13 19:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:15.972  [2024-12-13 19:18:47.764374] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:15.972  [2024-12-13 19:18:47.764417] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:15.972  2024/12/13 19:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:15.972  [2024-12-13 19:18:47.772785] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:15.972  [2024-12-13 19:18:47.772829] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:15.972  2024/12/13 19:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:15.972  [2024-12-13 19:18:47.787719] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:15.972  [2024-12-13 19:18:47.787765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:15.972  2024/12/13 19:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:16.231  [2024-12-13 19:18:47.804939] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:16.231  [2024-12-13 19:18:47.804965] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:16.231  2024/12/13 19:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:16.231  [2024-12-13 19:18:47.822287] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:16.231  [2024-12-13 19:18:47.822331] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:16.231  2024/12/13 19:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:16.231  [2024-12-13 19:18:47.841560] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:16.231  [2024-12-13 19:18:47.841606] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:16.231  2024/12/13 19:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:16.231  [2024-12-13 19:18:47.855983] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:16.232  [2024-12-13 19:18:47.856012] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:16.232  2024/12/13 19:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:16.232  [2024-12-13 19:18:47.873822] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:16.232  [2024-12-13 19:18:47.873856] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:16.232  2024/12/13 19:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:16.232  [2024-12-13 19:18:47.885299] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:16.232  [2024-12-13 19:18:47.885332] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:16.232  2024/12/13 19:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:16.232  [2024-12-13 19:18:47.901895] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:16.232  [2024-12-13 19:18:47.901930] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:16.232  2024/12/13 19:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:16.232  [2024-12-13 19:18:47.916078] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:16.232  [2024-12-13 19:18:47.916247] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:16.232  2024/12/13 19:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:16.232  [2024-12-13 19:18:47.934343] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:16.232  [2024-12-13 19:18:47.934493] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:16.232  2024/12/13 19:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:16.232  [2024-12-13 19:18:47.947363] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:16.232  [2024-12-13 19:18:47.947545] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:16.232  2024/12/13 19:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:16.232  [2024-12-13 19:18:47.965482] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:16.232  [2024-12-13 19:18:47.965631] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:16.232  2024/12/13 19:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:16.232  [2024-12-13 19:18:47.977358] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:16.232  [2024-12-13 19:18:47.977537] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:16.232  2024/12/13 19:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:16.232  [2024-12-13 19:18:47.993368] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:16.232  [2024-12-13 19:18:47.993400] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:16.232  2024/12/13 19:18:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:16.232  [2024-12-13 19:18:48.009316] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:16.232  [2024-12-13 19:18:48.009349] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:16.232  2024/12/13 19:18:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:16.232  [2024-12-13 19:18:48.025880] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:16.232  [2024-12-13 19:18:48.025913] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:16.232      13049.00 IOPS,   101.95 MiB/s
[2024-12-13T19:18:48.056Z] 2024/12/13 19:18:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:16.232  [2024-12-13 19:18:48.038899] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:16.232  [2024-12-13 19:18:48.039084] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:16.232  2024/12/13 19:18:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:16.232  [2024-12-13 19:18:48.048179] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:16.232  [2024-12-13 19:18:48.048378] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:16.232  2024/12/13 19:18:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:16.491  [2024-12-13 19:18:48.063407] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:16.491  [2024-12-13 19:18:48.063558] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:16.491  2024/12/13 19:18:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:16.492  [2024-12-13 19:18:48.081853] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:16.492  [2024-12-13 19:18:48.082038] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:16.492  2024/12/13 19:18:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:16.492  [2024-12-13 19:18:48.095175] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:16.492  [2024-12-13 19:18:48.095374] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:16.492  2024/12/13 19:18:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:16.492  [2024-12-13 19:18:48.113398] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:16.492  [2024-12-13 19:18:48.113548] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:16.492  2024/12/13 19:18:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:16.492  [2024-12-13 19:18:48.128892] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:16.492  [2024-12-13 19:18:48.129073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:16.492  2024/12/13 19:18:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:16.492  [2024-12-13 19:18:48.146365] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:16.492  [2024-12-13 19:18:48.146399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:16.492  2024/12/13 19:18:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:16.492  [2024-12-13 19:18:48.164167] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:16.492  [2024-12-13 19:18:48.164327] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:16.492  2024/12/13 19:18:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:16.492  [2024-12-13 19:18:48.182153] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:16.492  [2024-12-13 19:18:48.182187] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:16.492  2024/12/13 19:18:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:16.492  [2024-12-13 19:18:48.193396] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:16.492  [2024-12-13 19:18:48.193547] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:16.492  2024/12/13 19:18:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:16.492  [2024-12-13 19:18:48.209036] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:16.492  [2024-12-13 19:18:48.209185] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:16.492  2024/12/13 19:18:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:16.492  [2024-12-13 19:18:48.225281] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:16.492  [2024-12-13 19:18:48.225432] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:16.492  2024/12/13 19:18:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:16.492  [2024-12-13 19:18:48.241537] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:16.492  [2024-12-13 19:18:48.241718] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:16.492  2024/12/13 19:18:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:16.492  [2024-12-13 19:18:48.255981] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:16.492  [2024-12-13 19:18:48.256163] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:16.492  2024/12/13 19:18:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:16.492  [2024-12-13 19:18:48.264855] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:16.492  [2024-12-13 19:18:48.265002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:16.492  2024/12/13 19:18:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:16.492  [2024-12-13 19:18:48.278479] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:16.492  [2024-12-13 19:18:48.278628] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:16.492  2024/12/13 19:18:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:16.492  [2024-12-13 19:18:48.287818] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:16.492  [2024-12-13 19:18:48.287963] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:16.492  2024/12/13 19:18:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:16.492  [2024-12-13 19:18:48.303528] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:16.492  [2024-12-13 19:18:48.303562] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:16.492  2024/12/13 19:18:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:16.751  [2024-12-13 19:18:48.321992] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:16.751  [2024-12-13 19:18:48.322036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:16.751  2024/12/13 19:18:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:16.751  [2024-12-13 19:18:48.331662] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:16.751  [2024-12-13 19:18:48.331813] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:16.751  2024/12/13 19:18:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:16.751  [2024-12-13 19:18:48.347314] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:16.751  [2024-12-13 19:18:48.347465] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:16.752  2024/12/13 19:18:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:16.752  [2024-12-13 19:18:48.365863] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:16.752  [2024-12-13 19:18:48.366013] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:16.752  2024/12/13 19:18:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:16.752  [2024-12-13 19:18:48.379171] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:16.752  [2024-12-13 19:18:48.379340] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:16.752  2024/12/13 19:18:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:16.752  [2024-12-13 19:18:48.397573] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:16.752  [2024-12-13 19:18:48.397724] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:16.752  2024/12/13 19:18:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:16.752  [2024-12-13 19:18:48.409391] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:16.752  [2024-12-13 19:18:48.409572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:16.752  2024/12/13 19:18:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:16.752  [2024-12-13 19:18:48.425779] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:16.752  [2024-12-13 19:18:48.425962] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:16.752  2024/12/13 19:18:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:16.752  [2024-12-13 19:18:48.441883] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:16.752  [2024-12-13 19:18:48.442068] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:16.752  2024/12/13 19:18:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:16.752  [2024-12-13 19:18:48.456111] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:16.752  [2024-12-13 19:18:48.456280] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:16.752  2024/12/13 19:18:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:16.752  [2024-12-13 19:18:48.464740] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:16.752  [2024-12-13 19:18:48.464775] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:16.752  2024/12/13 19:18:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:16.752  [2024-12-13 19:18:48.478290] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:16.752  [2024-12-13 19:18:48.478323] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:16.752  2024/12/13 19:18:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:16.752  [2024-12-13 19:18:48.491648] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:16.752  [2024-12-13 19:18:48.491682] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:16.752  2024/12/13 19:18:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:16.752  [2024-12-13 19:18:48.510097] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:16.752  [2024-12-13 19:18:48.510256] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:16.752  2024/12/13 19:18:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:16.752  [2024-12-13 19:18:48.521454] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:16.752  [2024-12-13 19:18:48.521627] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:16.752  2024/12/13 19:18:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:16.752  [2024-12-13 19:18:48.537204] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:16.752  [2024-12-13 19:18:48.537365] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:16.752  2024/12/13 19:18:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:16.752  [2024-12-13 19:18:48.554458] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:16.752  [2024-12-13 19:18:48.554606] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:16.752  2024/12/13 19:18:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:16.752  [2024-12-13 19:18:48.565447] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:16.752  [2024-12-13 19:18:48.565610] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:16.752  2024/12/13 19:18:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:17.011  [2024-12-13 19:18:48.581477] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:17.012  [2024-12-13 19:18:48.581654] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:17.012  2024/12/13 19:18:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:17.012  [2024-12-13 19:18:48.596077] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:17.012  [2024-12-13 19:18:48.596264] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:17.012  2024/12/13 19:18:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:17.012  [2024-12-13 19:18:48.605330] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:17.012  [2024-12-13 19:18:48.605361] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:17.012  2024/12/13 19:18:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:17.012  [2024-12-13 19:18:48.619593] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:17.012  [2024-12-13 19:18:48.619629] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:17.012  2024/12/13 19:18:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:17.012  [2024-12-13 19:18:48.638298] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:17.012  [2024-12-13 19:18:48.638331] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:17.012  2024/12/13 19:18:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:17.012  [2024-12-13 19:18:48.651088] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:17.012  [2024-12-13 19:18:48.651281] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:17.012  2024/12/13 19:18:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:17.012  [2024-12-13 19:18:48.669573] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:17.012  [2024-12-13 19:18:48.669723] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:17.012  2024/12/13 19:18:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:17.012  [2024-12-13 19:18:48.683351] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:17.012  [2024-12-13 19:18:48.683500] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:17.012  2024/12/13 19:18:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:17.012  [2024-12-13 19:18:48.701031] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:17.012  [2024-12-13 19:18:48.701180] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:17.012  2024/12/13 19:18:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:17.012  [2024-12-13 19:18:48.716580] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:17.012  [2024-12-13 19:18:48.716732] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:17.012  2024/12/13 19:18:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:17.012  [2024-12-13 19:18:48.733711] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:17.012  [2024-12-13 19:18:48.733910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:17.012  2024/12/13 19:18:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:17.012  [2024-12-13 19:18:48.744213] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:17.012  [2024-12-13 19:18:48.744291] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:17.012  2024/12/13 19:18:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:17.012  [2024-12-13 19:18:48.760221] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:17.012  [2024-12-13 19:18:48.760319] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:17.012  2024/12/13 19:18:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:17.012  [2024-12-13 19:18:48.776067] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:17.012  [2024-12-13 19:18:48.776102] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:17.012  2024/12/13 19:18:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:17.012  [2024-12-13 19:18:48.794866] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:17.012  [2024-12-13 19:18:48.794901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:17.012  2024/12/13 19:18:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:17.012  [2024-12-13 19:18:48.804801] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:17.012  [2024-12-13 19:18:48.804985] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:17.012  2024/12/13 19:18:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:17.012  [2024-12-13 19:18:48.817526] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:17.012  [2024-12-13 19:18:48.817739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:17.012  2024/12/13 19:18:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:17.012  [2024-12-13 19:18:48.828171] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:17.012  [2024-12-13 19:18:48.828379] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:17.012  2024/12/13 19:18:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:17.271  [2024-12-13 19:18:48.843131] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:17.271  [2024-12-13 19:18:48.843350] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:17.272  2024/12/13 19:18:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:17.272  [2024-12-13 19:18:48.854175] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:17.272  [2024-12-13 19:18:48.854417] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:17.272  2024/12/13 19:18:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:17.272  [2024-12-13 19:18:48.866048] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:17.272  [2024-12-13 19:18:48.866325] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:17.272  2024/12/13 19:18:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:17.272  [2024-12-13 19:18:48.877014] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:17.272  [2024-12-13 19:18:48.877050] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:17.272  2024/12/13 19:18:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:17.272  [2024-12-13 19:18:48.891401] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:17.272  [2024-12-13 19:18:48.891435] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:17.272  2024/12/13 19:18:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:17.272  [2024-12-13 19:18:48.910925] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:17.272  [2024-12-13 19:18:48.910957] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:17.272  2024/12/13 19:18:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:17.272  [2024-12-13 19:18:48.921131] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:17.272  [2024-12-13 19:18:48.921331] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:17.272  2024/12/13 19:18:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:17.272  [2024-12-13 19:18:48.935865] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:17.272  [2024-12-13 19:18:48.936048] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:17.272  2024/12/13 19:18:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:17.272  [2024-12-13 19:18:48.954011] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:17.272  [2024-12-13 19:18:48.954214] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:17.272  2024/12/13 19:18:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:17.272  [2024-12-13 19:18:48.967718] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:17.272  [2024-12-13 19:18:48.967905] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:17.272  2024/12/13 19:18:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:17.272  [2024-12-13 19:18:48.986969] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:17.272  [2024-12-13 19:18:48.987155] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:17.272  2024/12/13 19:18:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:17.272  [2024-12-13 19:18:48.997359] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:17.272  [2024-12-13 19:18:48.997546] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:17.272  2024/12/13 19:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:17.272  [2024-12-13 19:18:49.010008] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:17.272  [2024-12-13 19:18:49.010209] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:17.272  2024/12/13 19:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:17.272  [2024-12-13 19:18:49.023780] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:17.272  [2024-12-13 19:18:49.023814] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:17.272      12991.75 IOPS,   101.50 MiB/s
00:34:17.272  2024/12/13 19:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:17.272  [2024-12-13 19:18:49.042241] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:17.272  [2024-12-13 19:18:49.042276] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:17.272  2024/12/13 19:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:17.272  [2024-12-13 19:18:49.053616] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:17.272  [2024-12-13 19:18:49.053650] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:17.272  2024/12/13 19:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:17.272  [2024-12-13 19:18:49.068613] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:17.272  [2024-12-13 19:18:49.068766] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:17.272  2024/12/13 19:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:17.272  [2024-12-13 19:18:49.085351] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:17.272  [2024-12-13 19:18:49.085501] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:17.272  2024/12/13 19:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:17.532  [2024-12-13 19:18:49.101278] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:17.532  [2024-12-13 19:18:49.101428] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:17.532  2024/12/13 19:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:17.532  [2024-12-13 19:18:49.117342] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:17.532  [2024-12-13 19:18:49.117491] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:17.532  2024/12/13 19:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:17.532  [2024-12-13 19:18:49.133601] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:17.532  [2024-12-13 19:18:49.133756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:17.532  2024/12/13 19:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:17.532  [2024-12-13 19:18:49.145106] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:17.532  [2024-12-13 19:18:49.145317] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:17.532  2024/12/13 19:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:17.532  [2024-12-13 19:18:49.159599] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:17.532  [2024-12-13 19:18:49.159752] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:17.532  2024/12/13 19:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:17.532  [2024-12-13 19:18:49.177696] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:17.532  [2024-12-13 19:18:49.177730] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:17.532  2024/12/13 19:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:17.532  [2024-12-13 19:18:49.189259] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:17.532  [2024-12-13 19:18:49.189291] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:17.532  2024/12/13 19:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:17.532  [2024-12-13 19:18:49.204978] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:17.532  [2024-12-13 19:18:49.205012] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:17.532  2024/12/13 19:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:17.532  [2024-12-13 19:18:49.222070] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:17.532  [2024-12-13 19:18:49.222103] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:17.532  2024/12/13 19:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:17.532  [2024-12-13 19:18:49.236151] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:17.532  [2024-12-13 19:18:49.236342] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:17.532  2024/12/13 19:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:17.532  [2024-12-13 19:18:49.253903] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:17.532  [2024-12-13 19:18:49.254101] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:17.532  2024/12/13 19:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:17.532  [2024-12-13 19:18:49.267138] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:17.532  [2024-12-13 19:18:49.267334] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:17.532  2024/12/13 19:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:17.532  [2024-12-13 19:18:49.286022] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:17.532  [2024-12-13 19:18:49.286196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:17.532  2024/12/13 19:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:17.532  [2024-12-13 19:18:49.297994] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:17.532  [2024-12-13 19:18:49.298190] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:17.532  2024/12/13 19:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:17.532  [2024-12-13 19:18:49.311514] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:17.532  [2024-12-13 19:18:49.311665] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:17.532  2024/12/13 19:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:17.532  [2024-12-13 19:18:49.330278] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:17.532  [2024-12-13 19:18:49.330428] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:17.532  2024/12/13 19:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:17.532  [2024-12-13 19:18:49.343346] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:17.532  [2024-12-13 19:18:49.343381] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:17.532  2024/12/13 19:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:17.791  [2024-12-13 19:18:49.361604] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:17.791  [2024-12-13 19:18:49.361640] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:17.792  2024/12/13 19:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:17.792  [2024-12-13 19:18:49.376749] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:17.792  [2024-12-13 19:18:49.376784] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:17.792  2024/12/13 19:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:17.792  [2024-12-13 19:18:49.393713] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:17.792  [2024-12-13 19:18:49.393906] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:17.792  2024/12/13 19:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:17.792  [2024-12-13 19:18:49.409407] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:17.792  [2024-12-13 19:18:49.409556] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:17.792  2024/12/13 19:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:17.792  [2024-12-13 19:18:49.423606] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:17.792  [2024-12-13 19:18:49.423755] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:17.792  2024/12/13 19:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:17.792  [2024-12-13 19:18:49.441633] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:17.792  [2024-12-13 19:18:49.441801] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:17.792  2024/12/13 19:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:17.792  [2024-12-13 19:18:49.455952] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:17.792  [2024-12-13 19:18:49.456133] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:17.792  2024/12/13 19:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:17.792  [2024-12-13 19:18:49.473517] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:17.792  [2024-12-13 19:18:49.473667] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:17.792  2024/12/13 19:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:17.792  [2024-12-13 19:18:49.488027] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:17.792  [2024-12-13 19:18:49.488178] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:17.792  2024/12/13 19:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:17.792  [2024-12-13 19:18:49.506537] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:17.792  [2024-12-13 19:18:49.506716] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:17.792  2024/12/13 19:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:17.792  [2024-12-13 19:18:49.519358] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:17.792  [2024-12-13 19:18:49.519537] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:17.792  2024/12/13 19:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:17.792  [2024-12-13 19:18:49.528306] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:17.792  [2024-12-13 19:18:49.528339] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:17.792  2024/12/13 19:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:17.792  [2024-12-13 19:18:49.543306] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:17.792  [2024-12-13 19:18:49.543339] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:17.792  2024/12/13 19:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:17.792  [2024-12-13 19:18:49.551720] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:17.792  [2024-12-13 19:18:49.551869] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:17.792  2024/12/13 19:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:17.792  [2024-12-13 19:18:49.566613] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:17.792  [2024-12-13 19:18:49.566769] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:17.792  2024/12/13 19:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:17.792  [2024-12-13 19:18:49.574959] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:17.792  [2024-12-13 19:18:49.575109] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:17.792  2024/12/13 19:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:17.792  [2024-12-13 19:18:49.585856] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:17.792  [2024-12-13 19:18:49.586050] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:17.792  2024/12/13 19:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:17.792  [2024-12-13 19:18:49.598976] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:17.792  [2024-12-13 19:18:49.599160] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:17.792  2024/12/13 19:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:17.792  [2024-12-13 19:18:49.608147] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:17.792  [2024-12-13 19:18:49.608340] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:17.792  2024/12/13 19:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:18.051  [2024-12-13 19:18:49.623314] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:18.051  [2024-12-13 19:18:49.623462] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:18.051  2024/12/13 19:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:18.051  [2024-12-13 19:18:49.641696] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:18.051  [2024-12-13 19:18:49.641732] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:18.051  2024/12/13 19:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:18.051  [2024-12-13 19:18:49.656007] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:18.051  [2024-12-13 19:18:49.656041] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:18.051  2024/12/13 19:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:18.051  [2024-12-13 19:18:49.674535] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:18.051  [2024-12-13 19:18:49.674570] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:18.052  2024/12/13 19:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:18.052  [2024-12-13 19:18:49.686033] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:18.052  [2024-12-13 19:18:49.686185] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:18.052  2024/12/13 19:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:18.052  [2024-12-13 19:18:49.700145] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:18.052  [2024-12-13 19:18:49.700304] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:18.052  2024/12/13 19:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:18.052  [2024-12-13 19:18:49.717462] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:18.052  [2024-12-13 19:18:49.717642] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:18.052  2024/12/13 19:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:18.052  [2024-12-13 19:18:49.733667] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:18.052  [2024-12-13 19:18:49.733822] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:18.052  2024/12/13 19:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:18.052  [2024-12-13 19:18:49.748093] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:18.052  [2024-12-13 19:18:49.748356] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:18.052  2024/12/13 19:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:18.052  [2024-12-13 19:18:49.766542] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:18.052  [2024-12-13 19:18:49.766704] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:18.052  2024/12/13 19:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:18.052  [2024-12-13 19:18:49.775924] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:18.052  [2024-12-13 19:18:49.776073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:18.052  2024/12/13 19:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:18.052  [2024-12-13 19:18:49.793576] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:18.052  [2024-12-13 19:18:49.793723] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:18.052  2024/12/13 19:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:18.052  [2024-12-13 19:18:49.808278] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:18.052  [2024-12-13 19:18:49.808310] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:18.052  2024/12/13 19:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:18.052  [2024-12-13 19:18:49.825391] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:18.052  [2024-12-13 19:18:49.825425] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:18.052  2024/12/13 19:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:18.052  [2024-12-13 19:18:49.842744] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:18.052  [2024-12-13 19:18:49.842915] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:18.052  2024/12/13 19:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:18.052  [2024-12-13 19:18:49.852764] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:18.052  [2024-12-13 19:18:49.852947] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:18.052  2024/12/13 19:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:18.052  [2024-12-13 19:18:49.869083] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:18.052  [2024-12-13 19:18:49.869291] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:18.052  2024/12/13 19:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:18.311  [2024-12-13 19:18:49.884223] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:18.312  [2024-12-13 19:18:49.884461] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:18.312  2024/12/13 19:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:18.312  [2024-12-13 19:18:49.901735] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:18.312  [2024-12-13 19:18:49.901951] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:18.312  2024/12/13 19:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:18.312  [2024-12-13 19:18:49.914024] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:18.312  [2024-12-13 19:18:49.914253] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:18.312  2024/12/13 19:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:18.312  [2024-12-13 19:18:49.927158] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:18.312  [2024-12-13 19:18:49.927193] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:18.312  2024/12/13 19:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:18.312  [2024-12-13 19:18:49.945255] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:18.312  [2024-12-13 19:18:49.945406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:18.312  2024/12/13 19:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:18.312  [2024-12-13 19:18:49.961550] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:18.312  [2024-12-13 19:18:49.961699] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:18.312  2024/12/13 19:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:18.312  [2024-12-13 19:18:49.975648] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:18.312  [2024-12-13 19:18:49.975800] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:18.312  2024/12/13 19:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:18.312  [2024-12-13 19:18:49.994201] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:18.312  [2024-12-13 19:18:49.994364] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:18.312  2024/12/13 19:18:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:18.312  [2024-12-13 19:18:50.015112] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:18.312  [2024-12-13 19:18:50.015149] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:18.312  2024/12/13 19:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:18.312  [2024-12-13 19:18:50.026580] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:18.312      13017.20 IOPS,   101.70 MiB/s
00:34:18.312  [2024-12-13 19:18:50.026782] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:18.312  2024/12/13 19:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:18.312  [2024-12-13 19:18:50.035666] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:18.312  [2024-12-13 19:18:50.035701] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:18.312                                                                                                  Latency(us)
00:34:18.312   Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:34:18.312  Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:34:18.312  	 Nvme1n1             :       5.01   13014.63     101.68       0.00     0.00    9823.38    2353.34   17277.67
00:34:18.312   ===================================================================================================================
00:34:18.312   Total                       :              13014.63     101.68       0.00     0.00    9823.38    2353.34   17277.67
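The Code=-32602 (Invalid parameters) failures throughout this run come from repeated nvmf_subsystem_add_ns calls that request NSID 1 while that NSID is still attached to nqn.2016-06.io.spdk:cnode1, as the paired "Requested NSID 1 already in use" messages show. A minimal sketch of the request body implied by the logged params follows; the jsonrpc/id envelope fields are assumed standard JSON-RPC 2.0 framing and are not taken from this log.

    # illustrative reconstruction from the logged params, not a request captured verbatim in this log
    {
      "jsonrpc": "2.0",
      "id": 1,
      "method": "nvmf_subsystem_add_ns",
      "params": {
        "nqn": "nqn.2016-06.io.spdk:cnode1",
        "namespace": { "bdev_name": "malloc0", "nsid": 1 }
      }
    }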
00:34:18.312  2024/12/13 19:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:18.312  [2024-12-13 19:18:50.042854] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:18.312  [2024-12-13 19:18:50.042886] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:18.312  2024/12/13 19:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:18.312  [2024-12-13 19:18:50.050897] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:18.312  [2024-12-13 19:18:50.051079] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:18.312  2024/12/13 19:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:18.312  [2024-12-13 19:18:50.058851] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:18.312  [2024-12-13 19:18:50.059024] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:18.312  2024/12/13 19:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:18.312  [2024-12-13 19:18:50.070884] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:18.312  [2024-12-13 19:18:50.071040] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:18.312  2024/12/13 19:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:18.312  [2024-12-13 19:18:50.082935] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:18.312  [2024-12-13 19:18:50.083108] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:18.312  2024/12/13 19:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:18.312  [2024-12-13 19:18:50.094891] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:18.312  [2024-12-13 19:18:50.095063] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:18.312  2024/12/13 19:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:18.312  [2024-12-13 19:18:50.106877] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:18.312  [2024-12-13 19:18:50.106906] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:18.312  2024/12/13 19:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:18.312  [2024-12-13 19:18:50.118856] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:18.312  [2024-12-13 19:18:50.118883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:18.312  2024/12/13 19:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:18.312  [2024-12-13 19:18:50.130845] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:18.312  [2024-12-13 19:18:50.130871] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:18.572  2024/12/13 19:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:18.572  [2024-12-13 19:18:50.138905] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:18.572  [2024-12-13 19:18:50.138946] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:18.572  2024/12/13 19:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:18.572  [2024-12-13 19:18:50.146831] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:18.572  [2024-12-13 19:18:50.146857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:18.572  2024/12/13 19:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:18.572  [2024-12-13 19:18:50.154845] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:18.572  [2024-12-13 19:18:50.154872] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:18.572  2024/12/13 19:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:18.572  [2024-12-13 19:18:50.166831] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:18.572  [2024-12-13 19:18:50.166874] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:18.572  2024/12/13 19:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:18.572  [2024-12-13 19:18:50.178852] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:18.572  [2024-12-13 19:18:50.178898] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:18.572  2024/12/13 19:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:18.572  [2024-12-13 19:18:50.186835] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:18.572  [2024-12-13 19:18:50.186878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:18.572  2024/12/13 19:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:18.572  [2024-12-13 19:18:50.194848] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:18.572  [2024-12-13 19:18:50.194874] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:18.572  2024/12/13 19:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:18.572  [2024-12-13 19:18:50.202846] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:18.572  [2024-12-13 19:18:50.202872] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:18.572  2024/12/13 19:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:18.572  [2024-12-13 19:18:50.210849] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:18.572  [2024-12-13 19:18:50.210876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:18.572  2024/12/13 19:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:18.572  [2024-12-13 19:18:50.222847] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:18.572  [2024-12-13 19:18:50.222892] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:18.573  2024/12/13 19:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:18.573  [2024-12-13 19:18:50.230828] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:18.573  [2024-12-13 19:18:50.230853] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:18.573  2024/12/13 19:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:18.573  [2024-12-13 19:18:50.238833] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:18.573  [2024-12-13 19:18:50.238857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:18.573  2024/12/13 19:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:18.573  [2024-12-13 19:18:50.246846] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:18.573  [2024-12-13 19:18:50.246871] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:18.573  2024/12/13 19:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:18.573  [2024-12-13 19:18:50.254856] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:18.573  [2024-12-13 19:18:50.254885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:18.573  2024/12/13 19:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:18.573  [2024-12-13 19:18:50.266866] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:18.573  [2024-12-13 19:18:50.266909] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:18.573  2024/12/13 19:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:18.573  [2024-12-13 19:18:50.278845] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:18.573  [2024-12-13 19:18:50.278887] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:18.573  2024/12/13 19:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:18.573  [2024-12-13 19:18:50.286838] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:18.573  [2024-12-13 19:18:50.286865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:18.573  2024/12/13 19:18:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
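The repeated "Requested NSID 1 already in use" / "Invalid parameters" failures above come from a background job (pid 126796) that keeps re-issuing the add-namespace RPC while zero-copy I/O is in flight, so each attempt collides with the namespace that already exists. A minimal sketch of that kind of loop, using the rpc_cmd helper seen elsewhere in this log, is below; this is a hypothetical reconstruction for clarity, not the actual contents of target/zcopy.sh:

    # Hypothetical background loop: keep trying to add NSID 1 while it is already present.
    # Each call fails with Code=-32602 (Invalid parameters), exactly as logged above.
    while true; do
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done &
    loop_pid=$!   # later terminated by zcopy.sh (the 'kill: (126796)' line that follows)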
00:34:18.573  /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (126796) - No such process
00:34:18.573   19:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 126796
00:34:18.573   19:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:34:18.573   19:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:18.573   19:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:34:18.573   19:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:18.573   19:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:34:18.573   19:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:18.573   19:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:34:18.573  delay0
00:34:18.573   19:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:18.573   19:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:34:18.573   19:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:18.573   19:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:34:18.573   19:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:18.573   19:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'
00:34:18.832  [2024-12-13 19:18:50.475684] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:34:26.948  Initializing NVMe Controllers
00:34:26.948  Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:34:26.948  Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:34:26.948  Initialization complete. Launching workers.
00:34:26.948  NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 214, failed: 32684
00:34:26.948  CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 32739, failed to submit 159
00:34:26.948  	 success 32684, unsuccessful 55, failed 0
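Steps @52 through @56 above swap the malloc-backed namespace for a delay bdev and then drive it with the abort example, so that enough commands stay outstanding to be aborted. A condensed sketch of that same sequence, with the flags and values copied from the log (paths abbreviated; the authoritative version is target/zcopy.sh):

    # Replace the plain malloc namespace with a delay bdev layered on malloc0.
    rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    # -r/-t/-w/-n set average and tail read/write latencies (values as logged).
    rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    # Run the abort example for 5 s at queue depth 64 against the TCP listener.
    build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'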
00:34:26.948   19:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:34:26.948   19:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:34:26.948   19:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
00:34:26.948   19:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:34:26.948   19:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:34:26.948   19:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:34:26.948   19:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:34:26.948   19:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:34:26.948  rmmod nvme_tcp
00:34:26.948  rmmod nvme_fabrics
00:34:26.948  rmmod nvme_keyring
00:34:26.948   19:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:34:26.948   19:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:34:26.948   19:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:34:26.948   19:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 126651 ']'
00:34:26.948   19:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 126651
00:34:26.948   19:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 126651 ']'
00:34:26.948   19:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 126651
00:34:26.948    19:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname
00:34:26.948   19:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:34:26.948    19:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 126651
00:34:26.948   19:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:34:26.948   19:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:34:26.948  killing process with pid 126651
00:34:26.948   19:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 126651'
00:34:26.948   19:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 126651
00:34:26.948   19:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 126651
00:34:26.948   19:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:34:26.948   19:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:34:26.948   19:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:34:26.948   19:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
00:34:26.948   19:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:34:26.948   19:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save
00:34:26.948   19:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore
00:34:26.948   19:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:34:26.948   19:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:34:26.948   19:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:34:26.948   19:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:34:26.948   19:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:34:26.948   19:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:34:26.948   19:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:34:26.948   19:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:34:26.948   19:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:34:26.948   19:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:34:26.948   19:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:34:26.948   19:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:34:26.948   19:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:34:26.948   19:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:34:26.948   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:34:26.948   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns
00:34:26.948   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:34:26.948   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:34:26.948    19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:34:26.948   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0
00:34:26.948  
00:34:26.948  real	0m25.362s
00:34:26.948  user	0m37.099s
00:34:26.948  sys	0m9.734s
00:34:26.948   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:34:26.948  ************************************
00:34:26.948  END TEST nvmf_zcopy
00:34:26.948  ************************************
00:34:26.948   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:34:26.948   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
00:34:26.948   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:34:26.948   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:34:26.948   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:34:26.948  ************************************
00:34:26.948  START TEST nvmf_nmic
00:34:26.948  ************************************
00:34:26.948   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
00:34:26.948  * Looking for test storage...
00:34:26.948  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:34:26.948    19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:34:26.948     19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:34:26.948     19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version
00:34:26.948    19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:34:26.948    19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:34:26.948    19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l
00:34:26.948    19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l
00:34:26.948    19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-:
00:34:26.948    19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1
00:34:26.948    19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-:
00:34:26.948    19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2
00:34:26.948    19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<'
00:34:26.948    19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2
00:34:26.948    19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1
00:34:26.948    19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:34:26.948    19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in
00:34:26.948    19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1
00:34:26.948    19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 ))
00:34:26.948    19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:34:26.948     19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1
00:34:26.949     19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1
00:34:26.949     19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:34:26.949     19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1
00:34:26.949    19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1
00:34:26.949     19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2
00:34:26.949     19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2
00:34:26.949     19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:34:26.949     19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2
00:34:26.949    19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2
00:34:26.949    19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:34:26.949    19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:34:26.949    19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0
00:34:26.949    19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:34:26.949    19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:34:26.949  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:26.949  		--rc genhtml_branch_coverage=1
00:34:26.949  		--rc genhtml_function_coverage=1
00:34:26.949  		--rc genhtml_legend=1
00:34:26.949  		--rc geninfo_all_blocks=1
00:34:26.949  		--rc geninfo_unexecuted_blocks=1
00:34:26.949  		
00:34:26.949  		'
00:34:26.949    19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:34:26.949  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:26.949  		--rc genhtml_branch_coverage=1
00:34:26.949  		--rc genhtml_function_coverage=1
00:34:26.949  		--rc genhtml_legend=1
00:34:26.949  		--rc geninfo_all_blocks=1
00:34:26.949  		--rc geninfo_unexecuted_blocks=1
00:34:26.949  		
00:34:26.949  		'
00:34:26.949    19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:34:26.949  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:26.949  		--rc genhtml_branch_coverage=1
00:34:26.949  		--rc genhtml_function_coverage=1
00:34:26.949  		--rc genhtml_legend=1
00:34:26.949  		--rc geninfo_all_blocks=1
00:34:26.949  		--rc geninfo_unexecuted_blocks=1
00:34:26.949  		
00:34:26.949  		'
00:34:26.949    19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:34:26.949  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:26.949  		--rc genhtml_branch_coverage=1
00:34:26.949  		--rc genhtml_function_coverage=1
00:34:26.949  		--rc genhtml_legend=1
00:34:26.949  		--rc geninfo_all_blocks=1
00:34:26.949  		--rc geninfo_unexecuted_blocks=1
00:34:26.949  		
00:34:26.949  		'
00:34:26.949   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:34:26.949     19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s
00:34:26.949    19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:34:26.949    19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:34:26.949    19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:34:26.949    19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:34:26.949    19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:34:26.949    19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:34:26.949    19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:34:26.949    19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:34:26.949    19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:34:26.949     19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:34:26.949    19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:34:26.949    19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:34:26.949    19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:34:26.949    19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:34:26.949    19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:34:26.949    19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:34:26.949    19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:34:26.949     19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob
00:34:26.949     19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:34:26.949     19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:34:26.949     19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:34:26.949      19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:26.949      19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:26.949      19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:26.949      19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH
00:34:26.949      19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:26.949    19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0
00:34:26.949    19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:34:26.949    19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:34:26.949    19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:34:26.949    19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:34:26.949    19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:34:26.949    19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:34:26.949    19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:34:26.949    19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:34:26.949    19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:34:26.949    19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0
00:34:26.949   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64
00:34:26.949   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:34:26.949   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit
00:34:26.949   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:34:26.949   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:34:26.949   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs
00:34:26.949   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no
00:34:26.949   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns
00:34:26.949   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:34:26.949   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:34:26.949    19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:34:26.949   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:34:26.949   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:34:26.949   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:34:26.949   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:34:26.949   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:34:26.949   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init
00:34:26.949   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:34:26.949   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:34:26.949   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:34:26.949   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:34:26.950  Cannot find device "nvmf_init_br"
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@162 -- # true
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:34:26.950  Cannot find device "nvmf_init_br2"
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@163 -- # true
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:34:26.950  Cannot find device "nvmf_tgt_br"
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@164 -- # true
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:34:26.950  Cannot find device "nvmf_tgt_br2"
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@165 -- # true
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:34:26.950  Cannot find device "nvmf_init_br"
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@166 -- # true
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:34:26.950  Cannot find device "nvmf_init_br2"
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@167 -- # true
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:34:26.950  Cannot find device "nvmf_tgt_br"
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@168 -- # true
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:34:26.950  Cannot find device "nvmf_tgt_br2"
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@169 -- # true
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:34:26.950  Cannot find device "nvmf_br"
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@170 -- # true
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:34:26.950  Cannot find device "nvmf_init_if"
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@171 -- # true
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:34:26.950  Cannot find device "nvmf_init_if2"
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@172 -- # true
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:34:26.950  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@173 -- # true
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:34:26.950  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@174 -- # true
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:34:26.950  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:34:26.950  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms
00:34:26.950  
00:34:26.950  --- 10.0.0.3 ping statistics ---
00:34:26.950  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:26.950  rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:34:26.950  PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:34:26.950  64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.062 ms
00:34:26.950  
00:34:26.950  --- 10.0.0.4 ping statistics ---
00:34:26.950  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:26.950  rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:34:26.950  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:34:26.950  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms
00:34:26.950  
00:34:26.950  --- 10.0.0.1 ping statistics ---
00:34:26.950  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:26.950  rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:34:26.950  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:34:26.950  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.104 ms
00:34:26.950  
00:34:26.950  --- 10.0.0.2 ping statistics ---
00:34:26.950  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:26.950  rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@461 -- # return 0
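The nvmf_veth_init sequence above builds a small virtual topology: veth pairs for the initiator side, veth pairs for the target side moved into the nvmf_tgt_ns_spdk namespace, everything attached to the nvmf_br bridge, iptables ACCEPT rules for port 4420, and ping checks in both directions. A stripped-down sketch of the same idea, reduced to one initiator and one target interface and using the names and addresses from the log (assumes root, iproute2 and iptables; the full logic lives in nvmf/common.sh):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3    # initiator -> target, as verified above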
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:34:26.950   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:34:26.951   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF
00:34:26.951   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:34:26.951   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable
00:34:26.951   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:34:26.951   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=127172
00:34:26.951   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 127172
00:34:26.951   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 127172 ']'
00:34:26.951   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:34:26.951   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100
00:34:26.951   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:34:26.951  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:34:26.951   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable
00:34:26.951   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:34:26.951   19:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF
00:34:27.210  [2024-12-13 19:18:58.818649] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:34:27.210  [2024-12-13 19:18:58.819693] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:34:27.210  [2024-12-13 19:18:58.819761] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:34:27.210  [2024-12-13 19:18:58.958715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:34:27.210  [2024-12-13 19:18:59.003331] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:34:27.210  [2024-12-13 19:18:59.003404] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:34:27.210  [2024-12-13 19:18:59.003416] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:34:27.210  [2024-12-13 19:18:59.003423] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:34:27.210  [2024-12-13 19:18:59.003430] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:34:27.210  [2024-12-13 19:18:59.004785] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:34:27.210  [2024-12-13 19:18:59.004941] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:34:27.210  [2024-12-13 19:18:59.005079] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:34:27.210  [2024-12-13 19:18:59.005080] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:34:27.469  [2024-12-13 19:18:59.130765] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:34:27.469  [2024-12-13 19:18:59.131068] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:34:27.469  [2024-12-13 19:18:59.131827] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:34:27.469  [2024-12-13 19:18:59.131874] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
00:34:27.469  [2024-12-13 19:18:59.132079] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode.
00:34:27.469   19:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:34:27.469   19:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0
00:34:27.469   19:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:34:27.469   19:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable
00:34:27.469   19:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:34:27.469   19:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:34:27.469   19:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:34:27.469   19:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:27.469   19:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:34:27.469  [2024-12-13 19:18:59.214423] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:34:27.469   19:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:27.469   19:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:34:27.469   19:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:27.469   19:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:34:27.469  Malloc0
00:34:27.469   19:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:27.469   19:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:34:27.469   19:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:27.469   19:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:34:27.469   19:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:27.469   19:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:34:27.469   19:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:27.469   19:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:34:27.469   19:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:27.469   19:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:34:27.469   19:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:27.469   19:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:34:27.469  [2024-12-13 19:18:59.290723] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:34:27.728   19:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:27.728   19:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems'
00:34:27.728  test case1: single bdev can't be used in multiple subsystems
00:34:27.728   19:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
00:34:27.728   19:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:27.728   19:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:34:27.728   19:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:27.728   19:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420
00:34:27.728   19:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:27.728   19:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:34:27.728   19:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:27.728   19:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0
00:34:27.728   19:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0
00:34:27.728   19:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:27.728   19:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:34:27.728  [2024-12-13 19:18:59.314432] bdev.c:8538:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target
00:34:27.728  [2024-12-13 19:18:59.314480] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1
00:34:27.728  [2024-12-13 19:18:59.314491] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:27.728  2024/12/13 19:18:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:27.728  request:
00:34:27.728  {
00:34:27.728  "method": "nvmf_subsystem_add_ns",
00:34:27.728  "params": {
00:34:27.728  "nqn": "nqn.2016-06.io.spdk:cnode2",
00:34:27.728  "namespace": {
00:34:27.728  "bdev_name": "Malloc0",
00:34:27.728  "no_auto_visible": false,
00:34:27.728  "hide_metadata": false
00:34:27.728  }
00:34:27.728  }
00:34:27.728  }
00:34:27.728  Got JSON-RPC error response
00:34:27.728  GoRPCClient: error on JSON-RPC call
00:34:27.728   19:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:34:27.728   19:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1
00:34:27.728   19:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']'
00:34:27.728   Adding namespace failed - expected result.
00:34:27.728   19:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.'
00:34:27.728  test case2: host connect to nvmf target in multiple paths
00:34:27.728   19:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths'
00:34:27.728   19:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
00:34:27.728   19:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:27.728   19:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:34:27.728  [2024-12-13 19:18:59.326520] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 ***
00:34:27.728   19:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:27.728   19:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid=8f716199-a3ae-4f70-9a3a-0556e5b7497a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
00:34:27.728   19:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid=8f716199-a3ae-4f70-9a3a-0556e5b7497a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421
00:34:27.728   19:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME
00:34:27.728   19:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0
00:34:27.728   19:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:34:27.728   19:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:34:27.728   19:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2
00:34:30.258   19:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:34:30.258    19:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:34:30.258    19:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:34:30.258   19:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:34:30.258   19:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:34:30.258   19:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0
00:34:30.258   19:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:34:30.258  [global]
00:34:30.258  thread=1
00:34:30.258  invalidate=1
00:34:30.258  rw=write
00:34:30.258  time_based=1
00:34:30.258  runtime=1
00:34:30.258  ioengine=libaio
00:34:30.258  direct=1
00:34:30.258  bs=4096
00:34:30.258  iodepth=1
00:34:30.258  norandommap=0
00:34:30.258  numjobs=1
00:34:30.258  
00:34:30.258  verify_dump=1
00:34:30.258  verify_backlog=512
00:34:30.258  verify_state_save=0
00:34:30.258  do_verify=1
00:34:30.258  verify=crc32c-intel
00:34:30.258  [job0]
00:34:30.258  filename=/dev/nvme0n1
00:34:30.258  Could not set queue depth (nvme0n1)
00:34:30.258  job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:34:30.258  fio-3.35
00:34:30.258  Starting 1 thread
00:34:31.258  
00:34:31.258  job0: (groupid=0, jobs=1): err= 0: pid=127263: Fri Dec 13 19:19:02 2024
00:34:31.258    read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec)
00:34:31.258      slat (nsec): min=11857, max=76739, avg=14485.03, stdev=4205.36
00:34:31.258      clat (usec): min=158, max=619, avg=192.38, stdev=25.32
00:34:31.258       lat (usec): min=171, max=638, avg=206.87, stdev=25.99
00:34:31.258      clat percentiles (usec):
00:34:31.258       |  1.00th=[  167],  5.00th=[  172], 10.00th=[  174], 20.00th=[  178],
00:34:31.258       | 30.00th=[  182], 40.00th=[  184], 50.00th=[  188], 60.00th=[  192],
00:34:31.258       | 70.00th=[  198], 80.00th=[  204], 90.00th=[  215], 95.00th=[  225],
00:34:31.258       | 99.00th=[  247], 99.50th=[  306], 99.90th=[  519], 99.95th=[  611],
00:34:31.258       | 99.99th=[  619]
00:34:31.258    write: IOPS=3042, BW=11.9MiB/s (12.5MB/s)(11.9MiB/1001msec); 0 zone resets
00:34:31.258      slat (nsec): min=16272, max=98415, avg=21432.35, stdev=6867.70
00:34:31.258      clat (usec): min=104, max=713, avg=130.30, stdev=22.94
00:34:31.258       lat (usec): min=122, max=737, avg=151.73, stdev=24.77
00:34:31.258      clat percentiles (usec):
00:34:31.258       |  1.00th=[  110],  5.00th=[  114], 10.00th=[  116], 20.00th=[  119],
00:34:31.258       | 30.00th=[  121], 40.00th=[  123], 50.00th=[  126], 60.00th=[  129],
00:34:31.258       | 70.00th=[  133], 80.00th=[  141], 90.00th=[  151], 95.00th=[  159],
00:34:31.258       | 99.00th=[  182], 99.50th=[  235], 99.90th=[  388], 99.95th=[  424],
00:34:31.258       | 99.99th=[  717]
00:34:31.258     bw (  KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1
00:34:31.258     iops        : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1
00:34:31.258    lat (usec)   : 250=99.34%, 500=0.59%, 750=0.07%
00:34:31.258    cpu          : usr=2.30%, sys=7.10%, ctx=5606, majf=0, minf=5
00:34:31.258    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:34:31.258       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:31.258       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:31.258       issued rwts: total=2560,3046,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:31.258       latency   : target=0, window=0, percentile=100.00%, depth=1
00:34:31.258  
00:34:31.258  Run status group 0 (all jobs):
00:34:31.258     READ: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec
00:34:31.258    WRITE: bw=11.9MiB/s (12.5MB/s), 11.9MiB/s-11.9MiB/s (12.5MB/s-12.5MB/s), io=11.9MiB (12.5MB), run=1001-1001msec
00:34:31.258  
00:34:31.258  Disk stats (read/write):
00:34:31.258    nvme0n1: ios=2468/2560, merge=0/0, ticks=498/355, in_queue=853, util=91.28%
00:34:31.258   19:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:34:31.258  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s)
00:34:31.258   19:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:34:31.258   19:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0
00:34:31.258   19:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:34:31.258   19:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:34:31.258   19:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:34:31.258   19:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:34:31.258   19:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0
00:34:31.258   19:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT
00:34:31.258   19:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini
00:34:31.258   19:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup
00:34:31.258   19:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync
00:34:31.258   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:34:31.258   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e
00:34:31.258   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20}
00:34:31.258   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:34:31.258  rmmod nvme_tcp
00:34:31.258  rmmod nvme_fabrics
00:34:31.258  rmmod nvme_keyring
00:34:31.258   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:34:31.258   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e
00:34:31.258   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0
00:34:31.517   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 127172 ']'
00:34:31.517   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 127172
00:34:31.517   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 127172 ']'
00:34:31.517   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 127172
00:34:31.517    19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname
00:34:31.517   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:34:31.517    19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 127172
00:34:31.517   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:34:31.517   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:34:31.517  killing process with pid 127172
00:34:31.517   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 127172'
00:34:31.517   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 127172
00:34:31.517   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 127172
00:34:31.776   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:34:31.776   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:34:31.776   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:34:31.776   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr
00:34:31.776   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save
00:34:31.776   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:34:31.776   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore
00:34:31.776   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:34:31.776   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:34:31.776   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:34:31.776   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:34:31.776   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:34:31.776   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:34:31.776   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:34:31.776   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:34:31.776   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:34:31.776   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:34:31.776   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:34:31.776   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:34:31.776   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:34:31.776   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:34:32.035   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:34:32.035   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns
00:34:32.035   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:34:32.035   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:34:32.035    19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:34:32.035   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@300 -- # return 0
00:34:32.035  
00:34:32.035  real	0m5.555s
00:34:32.035  user	0m15.336s
00:34:32.035  sys	0m1.803s
00:34:32.035   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable
00:34:32.035   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:34:32.035  ************************************
00:34:32.035  END TEST nvmf_nmic
00:34:32.035  ************************************
00:34:32.035   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode
00:34:32.035   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:34:32.035   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:34:32.035   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:34:32.035  ************************************
00:34:32.035  START TEST nvmf_fio_target
00:34:32.035  ************************************
00:34:32.035   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode
00:34:32.035  * Looking for test storage...
00:34:32.035  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:34:32.035    19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:34:32.035     19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version
00:34:32.035     19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:34:32.295    19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:34:32.295    19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:34:32.295    19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l
00:34:32.295    19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l
00:34:32.295    19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-:
00:34:32.295    19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1
00:34:32.295    19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-:
00:34:32.295    19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2
00:34:32.295    19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<'
00:34:32.295    19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2
00:34:32.295    19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1
00:34:32.295    19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:34:32.295    19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in
00:34:32.295    19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1
00:34:32.295    19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 ))
00:34:32.295    19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:34:32.295     19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1
00:34:32.295     19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1
00:34:32.295     19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:34:32.295     19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1
00:34:32.295    19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1
00:34:32.295     19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2
00:34:32.295     19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2
00:34:32.295     19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:34:32.295     19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2
00:34:32.295    19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2
00:34:32.295    19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:34:32.295    19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:34:32.295    19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0
00:34:32.295    19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:34:32.295    19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:34:32.295  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:32.295  		--rc genhtml_branch_coverage=1
00:34:32.295  		--rc genhtml_function_coverage=1
00:34:32.295  		--rc genhtml_legend=1
00:34:32.295  		--rc geninfo_all_blocks=1
00:34:32.295  		--rc geninfo_unexecuted_blocks=1
00:34:32.295  		
00:34:32.295  		'
00:34:32.295    19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:34:32.295  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:32.295  		--rc genhtml_branch_coverage=1
00:34:32.295  		--rc genhtml_function_coverage=1
00:34:32.295  		--rc genhtml_legend=1
00:34:32.295  		--rc geninfo_all_blocks=1
00:34:32.295  		--rc geninfo_unexecuted_blocks=1
00:34:32.295  		
00:34:32.295  		'
00:34:32.295    19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:34:32.295  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:32.295  		--rc genhtml_branch_coverage=1
00:34:32.295  		--rc genhtml_function_coverage=1
00:34:32.295  		--rc genhtml_legend=1
00:34:32.295  		--rc geninfo_all_blocks=1
00:34:32.295  		--rc geninfo_unexecuted_blocks=1
00:34:32.295  		
00:34:32.295  		'
00:34:32.295    19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:34:32.295  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:32.295  		--rc genhtml_branch_coverage=1
00:34:32.295  		--rc genhtml_function_coverage=1
00:34:32.295  		--rc genhtml_legend=1
00:34:32.295  		--rc geninfo_all_blocks=1
00:34:32.295  		--rc geninfo_unexecuted_blocks=1
00:34:32.295  		
00:34:32.295  		'
00:34:32.295   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:34:32.295     19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s
00:34:32.295    19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:34:32.295    19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:34:32.295    19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:34:32.295    19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:34:32.295    19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:34:32.295    19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:34:32.295    19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:34:32.295    19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:34:32.295    19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:34:32.295     19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:34:32.295    19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:34:32.295    19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:34:32.295    19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:34:32.295    19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:34:32.295    19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:34:32.295    19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:34:32.295    19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:34:32.295     19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob
00:34:32.295     19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:34:32.295     19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:34:32.295     19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:34:32.295      19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:32.296      19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:32.296      19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:32.296      19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH
00:34:32.296      19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:32.296    19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0
00:34:32.296    19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:34:32.296    19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:34:32.296    19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:34:32.296    19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:34:32.296    19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:34:32.296    19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:34:32.296    19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:34:32.296    19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:34:32.296    19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:34:32.296    19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0
00:34:32.296   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64
00:34:32.296   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:34:32.296   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:34:32.296   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit
00:34:32.296   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:34:32.296   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:34:32.296   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs
00:34:32.296   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no
00:34:32.296   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns
00:34:32.296   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:34:32.296   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:34:32.296    19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:34:32.296   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:34:32.296   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:34:32.296   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:34:32.296   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:34:32.296   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:34:32.296   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init
00:34:32.296   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:34:32.296   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:34:32.296   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:34:32.296   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:34:32.296   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:34:32.296   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:34:32.296   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:34:32.296   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:34:32.296   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:34:32.296   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:34:32.296   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:34:32.296   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:34:32.296   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:34:32.296   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:34:32.296   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:34:32.296   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:34:32.296   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:34:32.296  Cannot find device "nvmf_init_br"
00:34:32.296   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@162 -- # true
00:34:32.296   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:34:32.296  Cannot find device "nvmf_init_br2"
00:34:32.296   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@163 -- # true
00:34:32.296   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:34:32.296  Cannot find device "nvmf_tgt_br"
00:34:32.296   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@164 -- # true
00:34:32.296   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:34:32.296  Cannot find device "nvmf_tgt_br2"
00:34:32.296   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@165 -- # true
00:34:32.296   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:34:32.296  Cannot find device "nvmf_init_br"
00:34:32.296   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@166 -- # true
00:34:32.296   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:34:32.296  Cannot find device "nvmf_init_br2"
00:34:32.296   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@167 -- # true
00:34:32.296   19:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:34:32.296  Cannot find device "nvmf_tgt_br"
00:34:32.296   19:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@168 -- # true
00:34:32.296   19:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:34:32.296  Cannot find device "nvmf_tgt_br2"
00:34:32.296   19:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@169 -- # true
00:34:32.296   19:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:34:32.296  Cannot find device "nvmf_br"
00:34:32.296   19:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@170 -- # true
00:34:32.296   19:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:34:32.296  Cannot find device "nvmf_init_if"
00:34:32.296   19:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@171 -- # true
00:34:32.296   19:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:34:32.296  Cannot find device "nvmf_init_if2"
00:34:32.296   19:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@172 -- # true
00:34:32.296   19:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:34:32.296  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:34:32.296   19:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@173 -- # true
00:34:32.296   19:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:34:32.296  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:34:32.296   19:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@174 -- # true
00:34:32.296   19:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:34:32.296   19:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:34:32.296   19:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:34:32.297   19:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:34:32.297   19:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:34:32.556   19:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:34:32.556   19:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:34:32.556   19:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:34:32.556   19:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:34:32.556   19:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:34:32.556   19:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:34:32.556   19:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:34:32.556   19:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:34:32.556   19:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:34:32.556   19:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:34:32.556   19:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:34:32.556   19:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:34:32.556   19:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:34:32.556   19:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:34:32.556   19:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:34:32.556   19:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:34:32.556   19:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:34:32.556   19:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:34:32.556   19:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:34:32.556   19:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:34:32.556   19:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:34:32.556   19:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:34:32.556   19:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:34:32.556   19:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:34:32.556   19:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:34:32.556   19:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:34:32.556   19:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:34:32.556   19:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:34:32.556  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:34:32.556  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms
00:34:32.556  
00:34:32.556  --- 10.0.0.3 ping statistics ---
00:34:32.556  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:32.556  rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms
00:34:32.556   19:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:34:32.556  PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:34:32.556  64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms
00:34:32.556  
00:34:32.556  --- 10.0.0.4 ping statistics ---
00:34:32.556  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:32.556  rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms
00:34:32.556   19:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:34:32.556  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:34:32.556  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms
00:34:32.556  
00:34:32.556  --- 10.0.0.1 ping statistics ---
00:34:32.556  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:32.556  rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms
00:34:32.556   19:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:34:32.556  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:34:32.556  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms
00:34:32.556  
00:34:32.556  --- 10.0.0.2 ping statistics ---
00:34:32.556  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:32.556  rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms
00:34:32.556   19:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:34:32.556   19:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0
00:34:32.556   19:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:34:32.556   19:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:34:32.556   19:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:34:32.556   19:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:34:32.556   19:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:34:32.556   19:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:34:32.556   19:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:34:32.556   19:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF
00:34:32.556   19:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:34:32.556   19:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable
00:34:32.556   19:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:34:32.556   19:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=127502
00:34:32.556   19:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF
00:34:32.556   19:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 127502
00:34:32.556   19:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 127502 ']'
00:34:32.556   19:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:34:32.556   19:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:34:32.556   19:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:34:32.556  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:34:32.556   19:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:34:32.556   19:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:34:32.815  [2024-12-13 19:19:04.443005] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:34:32.816  [2024-12-13 19:19:04.444381] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:34:32.816  [2024-12-13 19:19:04.444449] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:34:32.816  [2024-12-13 19:19:04.591078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:34:32.816  [2024-12-13 19:19:04.632622] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:34:32.816  [2024-12-13 19:19:04.632691] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:34:32.816  [2024-12-13 19:19:04.632702] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:34:32.816  [2024-12-13 19:19:04.632709] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:34:32.816  [2024-12-13 19:19:04.632715] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:34:32.816  [2024-12-13 19:19:04.634167] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:34:32.816  [2024-12-13 19:19:04.634327] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:34:32.816  [2024-12-13 19:19:04.634404] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:34:32.816  [2024-12-13 19:19:04.634404] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:34:33.075  [2024-12-13 19:19:04.760090] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:34:33.075  [2024-12-13 19:19:04.760427] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:34:33.075  [2024-12-13 19:19:04.761177] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:34:33.075  [2024-12-13 19:19:04.761207] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
00:34:33.075  [2024-12-13 19:19:04.761428] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode.
00:34:33.642   19:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:34:33.642   19:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0
00:34:33.642   19:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:34:33.642   19:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable
00:34:33.642   19:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:34:33.642   19:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:34:33.642   19:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:34:33.900  [2024-12-13 19:19:05.703563] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:34:34.158    19:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:34:34.417   19:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 '
00:34:34.417    19:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:34:34.675   19:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1
00:34:34.675    19:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:34:34.934   19:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 '
00:34:34.934    19:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:34:35.193   19:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3
00:34:35.193   19:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
00:34:35.450    19:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:34:35.708   19:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 '
00:34:35.708    19:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:34:35.967   19:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 '
00:34:35.967    19:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:34:36.225   19:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6
00:34:36.225   19:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
00:34:36.483   19:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:34:36.741   19:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs
00:34:36.741   19:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:34:36.999   19:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs
00:34:36.999   19:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:34:37.257   19:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:34:37.516  [2024-12-13 19:19:09.147608] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:34:37.516   19:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
00:34:37.774   19:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
00:34:38.033   19:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid=8f716199-a3ae-4f70-9a3a-0556e5b7497a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
00:34:38.033   19:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4
00:34:38.033   19:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0
00:34:38.033   19:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:34:38.033   19:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]]
00:34:38.033   19:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4
00:34:38.033   19:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2
00:34:39.934   19:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:34:39.934    19:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:34:39.934    19:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:34:39.934   19:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4
00:34:39.934   19:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:34:39.934   19:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0
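[Editor's note] For reference, the target-setup sequence traced above (target/fio.sh lines 19-48) can be collected into one plain sketch. Every command below is lifted verbatim from the trace; the repo path, the 10.0.0.3:4420 listen address, and the host NQN/UUID are specific to this run, and the "64 MiB / 512-byte block" reading of the bdev_malloc_create arguments is an assumption, not something the log states.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  $rpc nvmf_create_transport -t tcp -o -u 8192        # TCP transport, options as traced
  $rpc bdev_malloc_create 64 512                      # Malloc0 (assumed 64 MiB, 512 B blocks)
  $rpc bdev_malloc_create 64 512                      # Malloc1
  $rpc bdev_malloc_create 64 512                      # Malloc2, raid0 member
  $rpc bdev_malloc_create 64 512                      # Malloc3, raid0 member
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
  $rpc bdev_malloc_create 64 512                      # Malloc4, concat0 member
  $rpc bdev_malloc_create 64 512                      # Malloc5, concat0 member
  $rpc bdev_malloc_create 64 512                      # Malloc6, concat0 member
  $rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a \
      --hostid=8f716199-a3ae-4f70-9a3a-0556e5b7497a   # exposes the four namespaces on the host

The four namespaces added here (Malloc0, Malloc1, raid0, concat0) are what the fio runs below address as /dev/nvme0n1 through /dev/nvme0n4.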
00:34:39.934   19:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:34:39.934  [global]
00:34:39.934  thread=1
00:34:39.934  invalidate=1
00:34:39.934  rw=write
00:34:39.934  time_based=1
00:34:39.934  runtime=1
00:34:39.934  ioengine=libaio
00:34:39.934  direct=1
00:34:39.934  bs=4096
00:34:39.934  iodepth=1
00:34:39.934  norandommap=0
00:34:39.934  numjobs=1
00:34:39.934  
00:34:39.934  verify_dump=1
00:34:39.934  verify_backlog=512
00:34:39.934  verify_state_save=0
00:34:39.934  do_verify=1
00:34:39.934  verify=crc32c-intel
00:34:39.934  [job0]
00:34:39.934  filename=/dev/nvme0n1
00:34:39.934  [job1]
00:34:39.934  filename=/dev/nvme0n2
00:34:39.934  [job2]
00:34:39.934  filename=/dev/nvme0n3
00:34:39.934  [job3]
00:34:39.934  filename=/dev/nvme0n4
00:34:40.193  Could not set queue depth (nvme0n1)
00:34:40.193  Could not set queue depth (nvme0n2)
00:34:40.193  Could not set queue depth (nvme0n3)
00:34:40.193  Could not set queue depth (nvme0n4)
00:34:40.193  job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:34:40.193  job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:34:40.193  job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:34:40.193  job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:34:40.193  fio-3.35
00:34:40.193  Starting 4 threads
00:34:41.569  
00:34:41.569  job0: (groupid=0, jobs=1): err= 0: pid=127783: Fri Dec 13 19:19:13 2024
00:34:41.569    read: IOPS=1167, BW=4671KiB/s (4783kB/s)(4676KiB/1001msec)
00:34:41.569      slat (nsec): min=9510, max=59469, avg=15433.26, stdev=5175.56
00:34:41.569      clat (usec): min=183, max=2096, avg=409.63, stdev=98.53
00:34:41.569       lat (usec): min=194, max=2114, avg=425.06, stdev=98.99
00:34:41.569      clat percentiles (usec):
00:34:41.569       |  1.00th=[  212],  5.00th=[  265], 10.00th=[  334], 20.00th=[  359],
00:34:41.569       | 30.00th=[  375], 40.00th=[  388], 50.00th=[  404], 60.00th=[  416],
00:34:41.569       | 70.00th=[  433], 80.00th=[  453], 90.00th=[  498], 95.00th=[  545],
00:34:41.569       | 99.00th=[  635], 99.50th=[  676], 99.90th=[ 1532], 99.95th=[ 2089],
00:34:41.569       | 99.99th=[ 2089]
00:34:41.569    write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets
00:34:41.569      slat (nsec): min=11406, max=78548, avg=24957.38, stdev=7952.54
00:34:41.569      clat (usec): min=154, max=3452, avg=299.67, stdev=96.28
00:34:41.569       lat (usec): min=192, max=3518, avg=324.63, stdev=97.71
00:34:41.569      clat percentiles (usec):
00:34:41.569       |  1.00th=[  204],  5.00th=[  235], 10.00th=[  245], 20.00th=[  258],
00:34:41.569       | 30.00th=[  269], 40.00th=[  277], 50.00th=[  289], 60.00th=[  297],
00:34:41.569       | 70.00th=[  314], 80.00th=[  338], 90.00th=[  371], 95.00th=[  396],
00:34:41.569       | 99.00th=[  445], 99.50th=[  490], 99.90th=[  725], 99.95th=[ 3458],
00:34:41.569       | 99.99th=[ 3458]
00:34:41.569     bw (  KiB/s): min= 6608, max= 6608, per=23.07%, avg=6608.00, stdev= 0.00, samples=1
00:34:41.569     iops        : min= 1652, max= 1652, avg=1652.00, stdev= 0.00, samples=1
00:34:41.569    lat (usec)   : 250=9.91%, 500=85.55%, 750=4.36%, 1000=0.07%
00:34:41.569    lat (msec)   : 2=0.04%, 4=0.07%
00:34:41.569    cpu          : usr=1.50%, sys=4.10%, ctx=2706, majf=0, minf=13
00:34:41.569    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:34:41.569       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:41.569       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:41.569       issued rwts: total=1169,1536,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:41.569       latency   : target=0, window=0, percentile=100.00%, depth=1
00:34:41.569  job1: (groupid=0, jobs=1): err= 0: pid=127784: Fri Dec 13 19:19:13 2024
00:34:41.569    read: IOPS=1562, BW=6250KiB/s (6400kB/s)(6256KiB/1001msec)
00:34:41.569      slat (nsec): min=14922, max=64671, avg=21289.75, stdev=5284.19
00:34:41.569      clat (usec): min=178, max=695, avg=279.74, stdev=35.24
00:34:41.569       lat (usec): min=198, max=713, avg=301.03, stdev=35.55
00:34:41.569      clat percentiles (usec):
00:34:41.569       |  1.00th=[  212],  5.00th=[  227], 10.00th=[  239], 20.00th=[  251],
00:34:41.569       | 30.00th=[  262], 40.00th=[  269], 50.00th=[  277], 60.00th=[  285],
00:34:41.569       | 70.00th=[  293], 80.00th=[  310], 90.00th=[  326], 95.00th=[  338],
00:34:41.569       | 99.00th=[  375], 99.50th=[  383], 99.90th=[  461], 99.95th=[  693],
00:34:41.569       | 99.99th=[  693]
00:34:41.569    write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets
00:34:41.569      slat (usec): min=19, max=100, avg=31.23, stdev= 7.78
00:34:41.569      clat (usec): min=124, max=395, avg=223.50, stdev=33.44
00:34:41.569       lat (usec): min=149, max=445, avg=254.73, stdev=34.91
00:34:41.569      clat percentiles (usec):
00:34:41.569       |  1.00th=[  151],  5.00th=[  169], 10.00th=[  182], 20.00th=[  196],
00:34:41.569       | 30.00th=[  206], 40.00th=[  215], 50.00th=[  223], 60.00th=[  231],
00:34:41.569       | 70.00th=[  241], 80.00th=[  251], 90.00th=[  265], 95.00th=[  277],
00:34:41.569       | 99.00th=[  306], 99.50th=[  330], 99.90th=[  371], 99.95th=[  375],
00:34:41.569       | 99.99th=[  396]
00:34:41.569     bw (  KiB/s): min= 8192, max= 8192, per=28.60%, avg=8192.00, stdev= 0.00, samples=1
00:34:41.569     iops        : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1
00:34:41.569    lat (usec)   : 250=53.10%, 500=46.87%, 750=0.03%
00:34:41.569    cpu          : usr=1.80%, sys=7.10%, ctx=3612, majf=0, minf=5
00:34:41.569    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:34:41.569       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:41.569       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:41.569       issued rwts: total=1564,2048,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:41.569       latency   : target=0, window=0, percentile=100.00%, depth=1
00:34:41.569  job2: (groupid=0, jobs=1): err= 0: pid=127785: Fri Dec 13 19:19:13 2024
00:34:41.569    read: IOPS=1613, BW=6454KiB/s (6608kB/s)(6460KiB/1001msec)
00:34:41.569      slat (nsec): min=17251, max=67574, avg=21292.89, stdev=4900.68
00:34:41.569      clat (usec): min=163, max=940, avg=274.50, stdev=37.29
00:34:41.569       lat (usec): min=186, max=963, avg=295.80, stdev=37.74
00:34:41.569      clat percentiles (usec):
00:34:41.569       |  1.00th=[  200],  5.00th=[  223], 10.00th=[  235], 20.00th=[  247],
00:34:41.569       | 30.00th=[  255], 40.00th=[  265], 50.00th=[  273], 60.00th=[  281],
00:34:41.569       | 70.00th=[  289], 80.00th=[  302], 90.00th=[  318], 95.00th=[  334],
00:34:41.569       | 99.00th=[  359], 99.50th=[  371], 99.90th=[  388], 99.95th=[  938],
00:34:41.569       | 99.99th=[  938]
00:34:41.569    write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets
00:34:41.569      slat (usec): min=23, max=114, avg=31.69, stdev= 7.95
00:34:41.569      clat (usec): min=126, max=757, avg=219.55, stdev=36.57
00:34:41.569       lat (usec): min=153, max=782, avg=251.24, stdev=38.20
00:34:41.569      clat percentiles (usec):
00:34:41.569       |  1.00th=[  139],  5.00th=[  159], 10.00th=[  174], 20.00th=[  192],
00:34:41.569       | 30.00th=[  204], 40.00th=[  212], 50.00th=[  221], 60.00th=[  229],
00:34:41.569       | 70.00th=[  237], 80.00th=[  247], 90.00th=[  262], 95.00th=[  273],
00:34:41.569       | 99.00th=[  302], 99.50th=[  314], 99.90th=[  375], 99.95th=[  375],
00:34:41.569       | 99.99th=[  758]
00:34:41.569     bw (  KiB/s): min= 8192, max= 8192, per=28.60%, avg=8192.00, stdev= 0.00, samples=1
00:34:41.569     iops        : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1
00:34:41.569    lat (usec)   : 250=56.40%, 500=43.54%, 1000=0.05%
00:34:41.569    cpu          : usr=1.80%, sys=7.30%, ctx=3665, majf=0, minf=9
00:34:41.569    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:34:41.569       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:41.569       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:41.569       issued rwts: total=1615,2048,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:41.569       latency   : target=0, window=0, percentile=100.00%, depth=1
00:34:41.569  job3: (groupid=0, jobs=1): err= 0: pid=127787: Fri Dec 13 19:19:13 2024
00:34:41.570    read: IOPS=1176, BW=4707KiB/s (4820kB/s)(4712KiB/1001msec)
00:34:41.570      slat (nsec): min=9605, max=71327, avg=15516.79, stdev=5281.75
00:34:41.570      clat (usec): min=198, max=2146, avg=409.48, stdev=98.77
00:34:41.570       lat (usec): min=209, max=2161, avg=424.99, stdev=99.61
00:34:41.570      clat percentiles (usec):
00:34:41.570       |  1.00th=[  219],  5.00th=[  273], 10.00th=[  330], 20.00th=[  359],
00:34:41.570       | 30.00th=[  375], 40.00th=[  388], 50.00th=[  400], 60.00th=[  416],
00:34:41.570       | 70.00th=[  433], 80.00th=[  457], 90.00th=[  502], 95.00th=[  545],
00:34:41.570       | 99.00th=[  627], 99.50th=[  668], 99.90th=[ 1500], 99.95th=[ 2147],
00:34:41.570       | 99.99th=[ 2147]
00:34:41.570    write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets
00:34:41.570      slat (nsec): min=11674, max=76418, avg=24356.19, stdev=8588.38
00:34:41.570      clat (usec): min=169, max=744, avg=297.86, stdev=51.00
00:34:41.570       lat (usec): min=193, max=774, avg=322.22, stdev=52.21
00:34:41.570      clat percentiles (usec):
00:34:41.570       |  1.00th=[  210],  5.00th=[  239], 10.00th=[  247], 20.00th=[  260],
00:34:41.570       | 30.00th=[  269], 40.00th=[  277], 50.00th=[  289], 60.00th=[  302],
00:34:41.570       | 70.00th=[  314], 80.00th=[  330], 90.00th=[  371], 95.00th=[  392],
00:34:41.570       | 99.00th=[  449], 99.50th=[  490], 99.90th=[  570], 99.95th=[  742],
00:34:41.570       | 99.99th=[  742]
00:34:41.570     bw (  KiB/s): min= 6696, max= 6696, per=23.38%, avg=6696.00, stdev= 0.00, samples=1
00:34:41.570     iops        : min= 1674, max= 1674, avg=1674.00, stdev= 0.00, samples=1
00:34:41.570    lat (usec)   : 250=8.88%, 500=86.37%, 750=4.57%, 1000=0.11%
00:34:41.570    lat (msec)   : 2=0.04%, 4=0.04%
00:34:41.570    cpu          : usr=1.30%, sys=4.20%, ctx=2716, majf=0, minf=9
00:34:41.570    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:34:41.570       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:41.570       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:41.570       issued rwts: total=1178,1536,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:41.570       latency   : target=0, window=0, percentile=100.00%, depth=1
00:34:41.570  
00:34:41.570  Run status group 0 (all jobs):
00:34:41.570     READ: bw=21.6MiB/s (22.6MB/s), 4671KiB/s-6454KiB/s (4783kB/s-6608kB/s), io=21.6MiB (22.6MB), run=1001-1001msec
00:34:41.570    WRITE: bw=28.0MiB/s (29.3MB/s), 6138KiB/s-8184KiB/s (6285kB/s-8380kB/s), io=28.0MiB (29.4MB), run=1001-1001msec
00:34:41.570  
00:34:41.570  Disk stats (read/write):
00:34:41.570    nvme0n1: ios=1074/1308, merge=0/0, ticks=483/398, in_queue=881, util=89.18%
00:34:41.570    nvme0n2: ios=1585/1547, merge=0/0, ticks=472/363, in_queue=835, util=89.17%
00:34:41.570    nvme0n3: ios=1536/1602, merge=0/0, ticks=435/375, in_queue=810, util=89.18%
00:34:41.570    nvme0n4: ios=1024/1320, merge=0/0, ticks=413/387, in_queue=800, util=89.73%
00:34:41.570   19:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v
00:34:41.570  [global]
00:34:41.570  thread=1
00:34:41.570  invalidate=1
00:34:41.570  rw=randwrite
00:34:41.570  time_based=1
00:34:41.570  runtime=1
00:34:41.570  ioengine=libaio
00:34:41.570  direct=1
00:34:41.570  bs=4096
00:34:41.570  iodepth=1
00:34:41.570  norandommap=0
00:34:41.570  numjobs=1
00:34:41.570  
00:34:41.570  verify_dump=1
00:34:41.570  verify_backlog=512
00:34:41.570  verify_state_save=0
00:34:41.570  do_verify=1
00:34:41.570  verify=crc32c-intel
00:34:41.570  [job0]
00:34:41.570  filename=/dev/nvme0n1
00:34:41.570  [job1]
00:34:41.570  filename=/dev/nvme0n2
00:34:41.570  [job2]
00:34:41.570  filename=/dev/nvme0n3
00:34:41.570  [job3]
00:34:41.570  filename=/dev/nvme0n4
00:34:41.570  Could not set queue depth (nvme0n1)
00:34:41.570  Could not set queue depth (nvme0n2)
00:34:41.570  Could not set queue depth (nvme0n3)
00:34:41.570  Could not set queue depth (nvme0n4)
00:34:41.570  job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:34:41.570  job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:34:41.570  job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:34:41.570  job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:34:41.570  fio-3.35
00:34:41.570  Starting 4 threads
00:34:42.955  
00:34:42.955  job0: (groupid=0, jobs=1): err= 0: pid=127839: Fri Dec 13 19:19:14 2024
00:34:42.955    read: IOPS=1654, BW=6617KiB/s (6776kB/s)(6624KiB/1001msec)
00:34:42.955      slat (nsec): min=13121, max=58553, avg=17048.95, stdev=4814.72
00:34:42.955      clat (usec): min=168, max=432, avg=276.34, stdev=31.84
00:34:42.955       lat (usec): min=184, max=449, avg=293.39, stdev=31.89
00:34:42.955      clat percentiles (usec):
00:34:42.955       |  1.00th=[  206],  5.00th=[  227], 10.00th=[  239], 20.00th=[  251],
00:34:42.955       | 30.00th=[  260], 40.00th=[  269], 50.00th=[  273], 60.00th=[  281],
00:34:42.955       | 70.00th=[  293], 80.00th=[  302], 90.00th=[  318], 95.00th=[  330],
00:34:42.955       | 99.00th=[  355], 99.50th=[  363], 99.90th=[  408], 99.95th=[  433],
00:34:42.955       | 99.99th=[  433]
00:34:42.955    write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets
00:34:42.955      slat (nsec): min=18270, max=97090, avg=25163.92, stdev=6447.84
00:34:42.955      clat (usec): min=116, max=1948, avg=223.07, stdev=49.06
00:34:42.955       lat (usec): min=135, max=1970, avg=248.24, stdev=49.39
00:34:42.955      clat percentiles (usec):
00:34:42.955       |  1.00th=[  159],  5.00th=[  184], 10.00th=[  194], 20.00th=[  204],
00:34:42.955       | 30.00th=[  210], 40.00th=[  215], 50.00th=[  221], 60.00th=[  227],
00:34:42.955       | 70.00th=[  233], 80.00th=[  241], 90.00th=[  253], 95.00th=[  265],
00:34:42.955       | 99.00th=[  297], 99.50th=[  334], 99.90th=[  586], 99.95th=[  791],
00:34:42.955       | 99.99th=[ 1942]
00:34:42.955     bw (  KiB/s): min= 8192, max= 8192, per=25.03%, avg=8192.00, stdev= 0.00, samples=1
00:34:42.955     iops        : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1
00:34:42.955    lat (usec)   : 250=57.34%, 500=42.55%, 750=0.05%, 1000=0.03%
00:34:42.955    lat (msec)   : 2=0.03%
00:34:42.955    cpu          : usr=1.60%, sys=5.50%, ctx=3704, majf=0, minf=7
00:34:42.955    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:34:42.955       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:42.955       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:42.955       issued rwts: total=1656,2048,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:42.955       latency   : target=0, window=0, percentile=100.00%, depth=1
00:34:42.955  job1: (groupid=0, jobs=1): err= 0: pid=127840: Fri Dec 13 19:19:14 2024
00:34:42.955    read: IOPS=1635, BW=6541KiB/s (6698kB/s)(6548KiB/1001msec)
00:34:42.955      slat (nsec): min=12950, max=65281, avg=18409.70, stdev=4967.05
00:34:42.955      clat (usec): min=162, max=442, avg=276.80, stdev=36.66
00:34:42.955       lat (usec): min=179, max=459, avg=295.21, stdev=36.74
00:34:42.955      clat percentiles (usec):
00:34:42.955       |  1.00th=[  192],  5.00th=[  221], 10.00th=[  233], 20.00th=[  247],
00:34:42.955       | 30.00th=[  258], 40.00th=[  269], 50.00th=[  273], 60.00th=[  285],
00:34:42.955       | 70.00th=[  293], 80.00th=[  310], 90.00th=[  322], 95.00th=[  338],
00:34:42.955       | 99.00th=[  367], 99.50th=[  375], 99.90th=[  441], 99.95th=[  441],
00:34:42.955       | 99.99th=[  441]
00:34:42.955    write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets
00:34:42.955      slat (usec): min=18, max=159, avg=27.26, stdev= 7.80
00:34:42.955      clat (usec): min=97, max=608, avg=221.94, stdev=30.39
00:34:42.955       lat (usec): min=150, max=630, avg=249.20, stdev=30.72
00:34:42.955      clat percentiles (usec):
00:34:42.955       |  1.00th=[  161],  5.00th=[  182], 10.00th=[  190], 20.00th=[  200],
00:34:42.955       | 30.00th=[  206], 40.00th=[  212], 50.00th=[  219], 60.00th=[  225],
00:34:42.955       | 70.00th=[  233], 80.00th=[  243], 90.00th=[  260], 95.00th=[  269],
00:34:42.955       | 99.00th=[  306], 99.50th=[  355], 99.90th=[  388], 99.95th=[  437],
00:34:42.955       | 99.99th=[  611]
00:34:42.955     bw (  KiB/s): min= 8192, max= 8192, per=25.03%, avg=8192.00, stdev= 0.00, samples=1
00:34:42.955     iops        : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1
00:34:42.955    lat (usec)   : 100=0.03%, 250=57.37%, 500=42.58%, 750=0.03%
00:34:42.955    cpu          : usr=1.40%, sys=6.30%, ctx=3686, majf=0, minf=19
00:34:42.955    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:34:42.955       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:42.955       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:42.955       issued rwts: total=1637,2048,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:42.955       latency   : target=0, window=0, percentile=100.00%, depth=1
00:34:42.955  job2: (groupid=0, jobs=1): err= 0: pid=127841: Fri Dec 13 19:19:14 2024
00:34:42.955    read: IOPS=1648, BW=6593KiB/s (6752kB/s)(6600KiB/1001msec)
00:34:42.955      slat (nsec): min=13204, max=67000, avg=18581.54, stdev=4990.53
00:34:42.955      clat (usec): min=169, max=445, avg=275.49, stdev=29.96
00:34:42.956       lat (usec): min=187, max=462, avg=294.07, stdev=30.19
00:34:42.956      clat percentiles (usec):
00:34:42.956       |  1.00th=[  215],  5.00th=[  233], 10.00th=[  241], 20.00th=[  251],
00:34:42.956       | 30.00th=[  260], 40.00th=[  265], 50.00th=[  273], 60.00th=[  281],
00:34:42.956       | 70.00th=[  289], 80.00th=[  302], 90.00th=[  318], 95.00th=[  330],
00:34:42.956       | 99.00th=[  355], 99.50th=[  363], 99.90th=[  396], 99.95th=[  445],
00:34:42.956       | 99.99th=[  445]
00:34:42.956    write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets
00:34:42.956      slat (nsec): min=18848, max=96757, avg=27217.13, stdev=7110.84
00:34:42.956      clat (usec): min=114, max=657, avg=220.85, stdev=29.36
00:34:42.956       lat (usec): min=134, max=693, avg=248.07, stdev=30.29
00:34:42.956      clat percentiles (usec):
00:34:42.956       |  1.00th=[  163],  5.00th=[  186], 10.00th=[  194], 20.00th=[  202],
00:34:42.956       | 30.00th=[  208], 40.00th=[  212], 50.00th=[  219], 60.00th=[  223],
00:34:42.956       | 70.00th=[  231], 80.00th=[  239], 90.00th=[  253], 95.00th=[  265],
00:34:42.956       | 99.00th=[  297], 99.50th=[  343], 99.90th=[  474], 99.95th=[  537],
00:34:42.956       | 99.99th=[  660]
00:34:42.956     bw (  KiB/s): min= 8192, max= 8192, per=25.03%, avg=8192.00, stdev= 0.00, samples=1
00:34:42.957     iops        : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1
00:34:42.957    lat (usec)   : 250=57.65%, 500=42.29%, 750=0.05%
00:34:42.957    cpu          : usr=1.40%, sys=6.40%, ctx=3700, majf=0, minf=10
00:34:42.957    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:34:42.957       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:42.957       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:42.958       issued rwts: total=1650,2048,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:42.958       latency   : target=0, window=0, percentile=100.00%, depth=1
00:34:42.958  job3: (groupid=0, jobs=1): err= 0: pid=127842: Fri Dec 13 19:19:14 2024
00:34:42.958    read: IOPS=1589, BW=6358KiB/s (6510kB/s)(6364KiB/1001msec)
00:34:42.958      slat (nsec): min=12538, max=66964, avg=17024.72, stdev=4923.08
00:34:42.958      clat (usec): min=183, max=515, avg=281.66, stdev=37.00
00:34:42.958       lat (usec): min=199, max=531, avg=298.69, stdev=37.45
00:34:42.958      clat percentiles (usec):
00:34:42.958       |  1.00th=[  202],  5.00th=[  225], 10.00th=[  239], 20.00th=[  251],
00:34:42.958       | 30.00th=[  262], 40.00th=[  273], 50.00th=[  281], 60.00th=[  289],
00:34:42.958       | 70.00th=[  297], 80.00th=[  310], 90.00th=[  326], 95.00th=[  343],
00:34:42.958       | 99.00th=[  371], 99.50th=[  429], 99.90th=[  506], 99.95th=[  515],
00:34:42.958       | 99.99th=[  515]
00:34:42.958    write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets
00:34:42.958      slat (nsec): min=17510, max=88873, avg=26968.99, stdev=7641.51
00:34:42.958      clat (usec): min=124, max=1759, avg=226.19, stdev=58.88
00:34:42.958       lat (usec): min=145, max=1781, avg=253.16, stdev=59.31
00:34:42.958      clat percentiles (usec):
00:34:42.958       |  1.00th=[  159],  5.00th=[  180], 10.00th=[  190], 20.00th=[  200],
00:34:42.958       | 30.00th=[  208], 40.00th=[  212], 50.00th=[  221], 60.00th=[  227],
00:34:42.958       | 70.00th=[  237], 80.00th=[  245], 90.00th=[  262], 95.00th=[  277],
00:34:42.958       | 99.00th=[  379], 99.50th=[  445], 99.90th=[ 1057], 99.95th=[ 1336],
00:34:42.958       | 99.99th=[ 1762]
00:34:42.958     bw (  KiB/s): min= 8192, max= 8192, per=25.03%, avg=8192.00, stdev= 0.00, samples=1
00:34:42.958     iops        : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1
00:34:42.958    lat (usec)   : 250=55.02%, 500=44.74%, 750=0.16%
00:34:42.958    lat (msec)   : 2=0.08%
00:34:42.958    cpu          : usr=1.80%, sys=5.60%, ctx=3639, majf=0, minf=11
00:34:42.958    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:34:42.958       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:42.958       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:42.958       issued rwts: total=1591,2048,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:42.958       latency   : target=0, window=0, percentile=100.00%, depth=1
00:34:42.958  
00:34:42.958  Run status group 0 (all jobs):
00:34:42.958     READ: bw=25.5MiB/s (26.7MB/s), 6358KiB/s-6617KiB/s (6510kB/s-6776kB/s), io=25.5MiB (26.8MB), run=1001-1001msec
00:34:42.958    WRITE: bw=32.0MiB/s (33.5MB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=32.0MiB (33.6MB), run=1001-1001msec
00:34:42.958  
00:34:42.958  Disk stats (read/write):
00:34:42.958    nvme0n1: ios=1586/1660, merge=0/0, ticks=461/388, in_queue=849, util=88.88%
00:34:42.958    nvme0n2: ios=1585/1632, merge=0/0, ticks=452/383, in_queue=835, util=88.98%
00:34:42.958    nvme0n3: ios=1536/1643, merge=0/0, ticks=434/380, in_queue=814, util=89.19%
00:34:42.958    nvme0n4: ios=1536/1593, merge=0/0, ticks=447/379, in_queue=826, util=89.74%
00:34:42.958   19:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v
00:34:42.958  [global]
00:34:42.958  thread=1
00:34:42.958  invalidate=1
00:34:42.958  rw=write
00:34:42.958  time_based=1
00:34:42.958  runtime=1
00:34:42.958  ioengine=libaio
00:34:42.958  direct=1
00:34:42.958  bs=4096
00:34:42.958  iodepth=128
00:34:42.958  norandommap=0
00:34:42.958  numjobs=1
00:34:42.958  
00:34:42.958  verify_dump=1
00:34:42.958  verify_backlog=512
00:34:42.958  verify_state_save=0
00:34:42.958  do_verify=1
00:34:42.958  verify=crc32c-intel
00:34:42.958  [job0]
00:34:42.958  filename=/dev/nvme0n1
00:34:42.959  [job1]
00:34:42.959  filename=/dev/nvme0n2
00:34:42.959  [job2]
00:34:42.959  filename=/dev/nvme0n3
00:34:42.959  [job3]
00:34:42.959  filename=/dev/nvme0n4
00:34:42.959  Could not set queue depth (nvme0n1)
00:34:42.959  Could not set queue depth (nvme0n2)
00:34:42.959  Could not set queue depth (nvme0n3)
00:34:42.959  Could not set queue depth (nvme0n4)
00:34:42.959  job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:34:42.959  job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:34:42.959  job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:34:42.959  job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:34:42.959  fio-3.35
00:34:42.959  Starting 4 threads
00:34:44.337  
00:34:44.337  job0: (groupid=0, jobs=1): err= 0: pid=127903: Fri Dec 13 19:19:15 2024
00:34:44.337    read: IOPS=4640, BW=18.1MiB/s (19.0MB/s)(18.2MiB/1004msec)
00:34:44.337      slat (usec): min=6, max=5916, avg=100.21, stdev=535.69
00:34:44.337      clat (usec): min=2194, max=20103, avg=12945.09, stdev=1913.46
00:34:44.337       lat (usec): min=5890, max=21683, avg=13045.30, stdev=1945.74
00:34:44.337      clat percentiles (usec):
00:34:44.337       |  1.00th=[ 6849],  5.00th=[ 9765], 10.00th=[10683], 20.00th=[11469],
00:34:44.337       | 30.00th=[12125], 40.00th=[12518], 50.00th=[12911], 60.00th=[13566],
00:34:44.337       | 70.00th=[14091], 80.00th=[14484], 90.00th=[15270], 95.00th=[16057],
00:34:44.337       | 99.00th=[17171], 99.50th=[17433], 99.90th=[19268], 99.95th=[19530],
00:34:44.337       | 99.99th=[20055]
00:34:44.337    write: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec); 0 zone resets
00:34:44.337      slat (usec): min=10, max=7601, avg=96.66, stdev=442.70
00:34:44.337      clat (usec): min=6968, max=21278, avg=13020.80, stdev=1602.31
00:34:44.337       lat (usec): min=6993, max=21325, avg=13117.46, stdev=1640.61
00:34:44.337      clat percentiles (usec):
00:34:44.337       |  1.00th=[ 8160],  5.00th=[10552], 10.00th=[11076], 20.00th=[11994],
00:34:44.337       | 30.00th=[12518], 40.00th=[12649], 50.00th=[13042], 60.00th=[13435],
00:34:44.337       | 70.00th=[13698], 80.00th=[13960], 90.00th=[14615], 95.00th=[15926],
00:34:44.337       | 99.00th=[17433], 99.50th=[18220], 99.90th=[19006], 99.95th=[19530],
00:34:44.337       | 99.99th=[21365]
00:34:44.337     bw (  KiB/s): min=19872, max=20439, per=38.46%, avg=20155.50, stdev=400.93, samples=2
00:34:44.337     iops        : min= 4968, max= 5109, avg=5038.50, stdev=99.70, samples=2
00:34:44.337    lat (msec)   : 4=0.01%, 10=4.36%, 20=95.60%, 50=0.03%
00:34:44.337    cpu          : usr=5.08%, sys=13.66%, ctx=561, majf=0, minf=8
00:34:44.337    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4%
00:34:44.337       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:44.337       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:34:44.337       issued rwts: total=4659,5120,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:44.337       latency   : target=0, window=0, percentile=100.00%, depth=128
00:34:44.337  job1: (groupid=0, jobs=1): err= 0: pid=127904: Fri Dec 13 19:19:15 2024
00:34:44.337    read: IOPS=1522, BW=6089KiB/s (6235kB/s)(6144KiB/1009msec)
00:34:44.337      slat (usec): min=4, max=9431, avg=306.12, stdev=1287.57
00:34:44.337      clat (usec): min=25500, max=50309, avg=38045.67, stdev=3881.71
00:34:44.337       lat (usec): min=28595, max=51480, avg=38351.78, stdev=3812.88
00:34:44.337      clat percentiles (usec):
00:34:44.337       |  1.00th=[29492],  5.00th=[32113], 10.00th=[32637], 20.00th=[35390],
00:34:44.337       | 30.00th=[36439], 40.00th=[36963], 50.00th=[38011], 60.00th=[39060],
00:34:44.337       | 70.00th=[40109], 80.00th=[41157], 90.00th=[42206], 95.00th=[44827],
00:34:44.337       | 99.00th=[49021], 99.50th=[50070], 99.90th=[50070], 99.95th=[50070],
00:34:44.337       | 99.99th=[50070]
00:34:44.337    write: IOPS=1757, BW=7029KiB/s (7197kB/s)(7092KiB/1009msec); 0 zone resets
00:34:44.337      slat (usec): min=12, max=9621, avg=290.91, stdev=1265.20
00:34:44.337      clat (usec): min=7408, max=51697, avg=38199.07, stdev=5395.20
00:34:44.337       lat (usec): min=8462, max=52629, avg=38489.98, stdev=5278.16
00:34:44.337      clat percentiles (usec):
00:34:44.337       |  1.00th=[16581],  5.00th=[29492], 10.00th=[33424], 20.00th=[36439],
00:34:44.337       | 30.00th=[38011], 40.00th=[38011], 50.00th=[38536], 60.00th=[39060],
00:34:44.337       | 70.00th=[39584], 80.00th=[41681], 90.00th=[42206], 95.00th=[45876],
00:34:44.337       | 99.00th=[49546], 99.50th=[50594], 99.90th=[51643], 99.95th=[51643],
00:34:44.337       | 99.99th=[51643]
00:34:44.337     bw (  KiB/s): min= 5240, max= 7920, per=12.56%, avg=6580.00, stdev=1895.05, samples=2
00:34:44.337     iops        : min= 1310, max= 1980, avg=1645.00, stdev=473.76, samples=2
00:34:44.337    lat (msec)   : 10=0.42%, 20=0.82%, 50=98.01%, 100=0.76%
00:34:44.337    cpu          : usr=1.79%, sys=6.05%, ctx=418, majf=0, minf=19
00:34:44.337    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1%
00:34:44.337       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:44.337       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:34:44.337       issued rwts: total=1536,1773,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:44.337       latency   : target=0, window=0, percentile=100.00%, depth=128
00:34:44.337  job2: (groupid=0, jobs=1): err= 0: pid=127905: Fri Dec 13 19:19:15 2024
00:34:44.337    read: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec)
00:34:44.337      slat (usec): min=6, max=6912, avg=114.63, stdev=611.65
00:34:44.337      clat (usec): min=8513, max=22945, avg=14698.42, stdev=2055.72
00:34:44.337       lat (usec): min=8535, max=23098, avg=14813.05, stdev=2090.13
00:34:44.337      clat percentiles (usec):
00:34:44.337       |  1.00th=[10159],  5.00th=[11338], 10.00th=[12125], 20.00th=[13042],
00:34:44.337       | 30.00th=[13566], 40.00th=[14091], 50.00th=[14484], 60.00th=[15008],
00:34:44.337       | 70.00th=[15664], 80.00th=[16450], 90.00th=[17433], 95.00th=[18482],
00:34:44.337       | 99.00th=[19792], 99.50th=[20055], 99.90th=[21627], 99.95th=[22414],
00:34:44.337       | 99.99th=[22938]
00:34:44.337    write: IOPS=4579, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec); 0 zone resets
00:34:44.337      slat (usec): min=11, max=6630, avg=108.83, stdev=553.69
00:34:44.337      clat (usec): min=605, max=22241, avg=14484.29, stdev=1934.74
00:34:44.337       lat (usec): min=5942, max=22397, avg=14593.12, stdev=1967.61
00:34:44.337      clat percentiles (usec):
00:34:44.337       |  1.00th=[ 7373],  5.00th=[11863], 10.00th=[12649], 20.00th=[13435],
00:34:44.337       | 30.00th=[13698], 40.00th=[14222], 50.00th=[14484], 60.00th=[15008],
00:34:44.337       | 70.00th=[15270], 80.00th=[15664], 90.00th=[16188], 95.00th=[17695],
00:34:44.337       | 99.00th=[20055], 99.50th=[21103], 99.90th=[22152], 99.95th=[22152],
00:34:44.337       | 99.99th=[22152]
00:34:44.337     bw (  KiB/s): min=17868, max=17888, per=34.12%, avg=17878.00, stdev=14.14, samples=2
00:34:44.337     iops        : min= 4467, max= 4472, avg=4469.50, stdev= 3.54, samples=2
00:34:44.337    lat (usec)   : 750=0.01%
00:34:44.337    lat (msec)   : 10=1.70%, 20=97.26%, 50=1.02%
00:34:44.337    cpu          : usr=3.88%, sys=12.55%, ctx=451, majf=0, minf=13
00:34:44.337    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3%
00:34:44.337       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:44.337       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:34:44.337       issued rwts: total=4096,4602,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:44.337       latency   : target=0, window=0, percentile=100.00%, depth=128
00:34:44.337  job3: (groupid=0, jobs=1): err= 0: pid=127906: Fri Dec 13 19:19:15 2024
00:34:44.337    read: IOPS=1523, BW=6095KiB/s (6242kB/s)(6144KiB/1008msec)
00:34:44.337      slat (usec): min=4, max=10815, avg=313.01, stdev=1302.13
00:34:44.337      clat (usec): min=29485, max=48974, avg=39030.79, stdev=3461.48
00:34:44.337       lat (usec): min=31438, max=48994, avg=39343.80, stdev=3433.24
00:34:44.337      clat percentiles (usec):
00:34:44.337       |  1.00th=[32113],  5.00th=[33817], 10.00th=[34341], 20.00th=[35914],
00:34:44.337       | 30.00th=[36963], 40.00th=[38011], 50.00th=[39060], 60.00th=[40109],
00:34:44.337       | 70.00th=[40633], 80.00th=[41681], 90.00th=[43254], 95.00th=[44303],
00:34:44.337       | 99.00th=[49021], 99.50th=[49021], 99.90th=[49021], 99.95th=[49021],
00:34:44.337       | 99.99th=[49021]
00:34:44.337    write: IOPS=1709, BW=6837KiB/s (7001kB/s)(6892KiB/1008msec); 0 zone resets
00:34:44.337      slat (usec): min=12, max=9470, avg=294.13, stdev=1269.48
00:34:44.337      clat (usec): min=5107, max=47965, avg=38341.01, stdev=4966.60
00:34:44.337       lat (usec): min=9780, max=47991, avg=38635.14, stdev=4826.40
00:34:44.337      clat percentiles (usec):
00:34:44.337       |  1.00th=[12780],  5.00th=[30278], 10.00th=[35914], 20.00th=[37487],
00:34:44.337       | 30.00th=[38011], 40.00th=[38536], 50.00th=[39060], 60.00th=[39060],
00:34:44.337       | 70.00th=[40109], 80.00th=[41681], 90.00th=[42206], 95.00th=[43254],
00:34:44.337       | 99.00th=[45351], 99.50th=[45876], 99.90th=[47973], 99.95th=[47973],
00:34:44.337       | 99.99th=[47973]
00:34:44.337     bw (  KiB/s): min= 4888, max= 7887, per=12.19%, avg=6387.50, stdev=2120.61, samples=2
00:34:44.337     iops        : min= 1222, max= 1971, avg=1596.50, stdev=529.62, samples=2
00:34:44.337    lat (msec)   : 10=0.12%, 20=0.89%, 50=98.99%
00:34:44.337    cpu          : usr=2.28%, sys=5.26%, ctx=425, majf=0, minf=7
00:34:44.337    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1%
00:34:44.337       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:44.337       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:34:44.337       issued rwts: total=1536,1723,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:44.337       latency   : target=0, window=0, percentile=100.00%, depth=128
00:34:44.337  
00:34:44.337  Run status group 0 (all jobs):
00:34:44.337     READ: bw=45.8MiB/s (48.0MB/s), 6089KiB/s-18.1MiB/s (6235kB/s-19.0MB/s), io=46.2MiB (48.4MB), run=1004-1009msec
00:34:44.337    WRITE: bw=51.2MiB/s (53.7MB/s), 6837KiB/s-19.9MiB/s (7001kB/s-20.9MB/s), io=51.6MiB (54.1MB), run=1004-1009msec
00:34:44.337  
00:34:44.337  Disk stats (read/write):
00:34:44.337    nvme0n1: ios=4146/4416, merge=0/0, ticks=25054/25971, in_queue=51025, util=89.78%
00:34:44.337    nvme0n2: ios=1328/1536, merge=0/0, ticks=12083/13726, in_queue=25809, util=89.51%
00:34:44.337    nvme0n3: ios=3605/3944, merge=0/0, ticks=24780/24046, in_queue=48826, util=89.25%
00:34:44.337    nvme0n4: ios=1276/1536, merge=0/0, ticks=11805/13996, in_queue=25801, util=89.41%
00:34:44.337   19:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v
00:34:44.337  [global]
00:34:44.337  thread=1
00:34:44.337  invalidate=1
00:34:44.337  rw=randwrite
00:34:44.337  time_based=1
00:34:44.337  runtime=1
00:34:44.337  ioengine=libaio
00:34:44.337  direct=1
00:34:44.337  bs=4096
00:34:44.337  iodepth=128
00:34:44.337  norandommap=0
00:34:44.337  numjobs=1
00:34:44.337  
00:34:44.337  verify_dump=1
00:34:44.337  verify_backlog=512
00:34:44.337  verify_state_save=0
00:34:44.337  do_verify=1
00:34:44.337  verify=crc32c-intel
00:34:44.337  [job0]
00:34:44.337  filename=/dev/nvme0n1
00:34:44.337  [job1]
00:34:44.337  filename=/dev/nvme0n2
00:34:44.337  [job2]
00:34:44.337  filename=/dev/nvme0n3
00:34:44.337  [job3]
00:34:44.338  filename=/dev/nvme0n4
00:34:44.338  Could not set queue depth (nvme0n1)
00:34:44.338  Could not set queue depth (nvme0n2)
00:34:44.338  Could not set queue depth (nvme0n3)
00:34:44.338  Could not set queue depth (nvme0n4)
00:34:44.338  job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:34:44.338  job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:34:44.338  job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:34:44.338  job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:34:44.338  fio-3.35
00:34:44.338  Starting 4 threads
00:34:45.724  
00:34:45.724  job0: (groupid=0, jobs=1): err= 0: pid=127959: Fri Dec 13 19:19:17 2024
00:34:45.724    read: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec)
00:34:45.724      slat (usec): min=8, max=7676, avg=130.91, stdev=683.54
00:34:45.724      clat (usec): min=10518, max=24246, avg=16786.28, stdev=2275.04
00:34:45.724       lat (usec): min=10538, max=24415, avg=16917.19, stdev=2293.93
00:34:45.724      clat percentiles (usec):
00:34:45.724       |  1.00th=[11600],  5.00th=[13042], 10.00th=[13960], 20.00th=[14615],
00:34:45.724       | 30.00th=[15401], 40.00th=[16319], 50.00th=[16909], 60.00th=[17433],
00:34:45.724       | 70.00th=[17957], 80.00th=[18744], 90.00th=[19792], 95.00th=[20579],
00:34:45.724       | 99.00th=[21890], 99.50th=[22938], 99.90th=[24249], 99.95th=[24249],
00:34:45.724       | 99.99th=[24249]
00:34:45.724    write: IOPS=4077, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1002msec); 0 zone resets
00:34:45.724      slat (usec): min=11, max=12115, avg=122.31, stdev=681.57
00:34:45.724      clat (usec): min=762, max=28310, avg=16262.44, stdev=2420.46
00:34:45.724       lat (usec): min=7094, max=28347, avg=16384.76, stdev=2481.60
00:34:45.724      clat percentiles (usec):
00:34:45.724       |  1.00th=[ 8225],  5.00th=[13435], 10.00th=[14353], 20.00th=[14877],
00:34:45.724       | 30.00th=[15139], 40.00th=[15533], 50.00th=[16057], 60.00th=[16319],
00:34:45.724       | 70.00th=[16909], 80.00th=[17695], 90.00th=[19792], 95.00th=[21365],
00:34:45.724       | 99.00th=[22414], 99.50th=[23987], 99.90th=[24773], 99.95th=[27395],
00:34:45.724       | 99.99th=[28181]
00:34:45.724     bw (  KiB/s): min=15280, max=16416, per=31.35%, avg=15848.00, stdev=803.27, samples=2
00:34:45.724     iops        : min= 3820, max= 4104, avg=3962.00, stdev=200.82, samples=2
00:34:45.724    lat (usec)   : 1000=0.01%
00:34:45.724    lat (msec)   : 10=1.04%, 20=91.15%, 50=7.80%
00:34:45.724    cpu          : usr=3.80%, sys=11.69%, ctx=349, majf=0, minf=9
00:34:45.724    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2%
00:34:45.724       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:45.724       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:34:45.724       issued rwts: total=3584,4086,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:45.724       latency   : target=0, window=0, percentile=100.00%, depth=128
00:34:45.724  job1: (groupid=0, jobs=1): err= 0: pid=127960: Fri Dec 13 19:19:17 2024
00:34:45.724    read: IOPS=4100, BW=16.0MiB/s (16.8MB/s)(16.2MiB/1010msec)
00:34:45.724      slat (usec): min=5, max=13815, avg=114.59, stdev=818.55
00:34:45.724      clat (usec): min=5762, max=35962, avg=15196.59, stdev=4065.98
00:34:45.724       lat (usec): min=5773, max=35985, avg=15311.19, stdev=4107.81
00:34:45.724      clat percentiles (usec):
00:34:45.724       |  1.00th=[ 7570],  5.00th=[10159], 10.00th=[11207], 20.00th=[12256],
00:34:45.724       | 30.00th=[12911], 40.00th=[13829], 50.00th=[14222], 60.00th=[15270],
00:34:45.724       | 70.00th=[16319], 80.00th=[17957], 90.00th=[20579], 95.00th=[21890],
00:34:45.724       | 99.00th=[28181], 99.50th=[35914], 99.90th=[35914], 99.95th=[35914],
00:34:45.724       | 99.99th=[35914]
00:34:45.724    write: IOPS=4562, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1010msec); 0 zone resets
00:34:45.724      slat (usec): min=4, max=12067, avg=107.18, stdev=760.51
00:34:45.724      clat (usec): min=4393, max=28184, avg=14142.27, stdev=2710.24
00:34:45.724       lat (usec): min=4411, max=28194, avg=14249.45, stdev=2794.20
00:34:45.724      clat percentiles (usec):
00:34:45.724       |  1.00th=[ 6325],  5.00th=[ 9241], 10.00th=[10814], 20.00th=[12387],
00:34:45.724       | 30.00th=[13435], 40.00th=[13960], 50.00th=[14615], 60.00th=[15008],
00:34:45.724       | 70.00th=[15139], 80.00th=[15533], 90.00th=[15926], 95.00th=[18482],
00:34:45.724       | 99.00th=[22152], 99.50th=[24249], 99.90th=[26608], 99.95th=[27919],
00:34:45.724       | 99.99th=[28181]
00:34:45.724     bw (  KiB/s): min=17968, max=18248, per=35.82%, avg=18108.00, stdev=197.99, samples=2
00:34:45.724     iops        : min= 4492, max= 4562, avg=4527.00, stdev=49.50, samples=2
00:34:45.724    lat (msec)   : 10=5.89%, 20=86.72%, 50=7.39%
00:34:45.724    cpu          : usr=4.46%, sys=10.80%, ctx=362, majf=0, minf=13
00:34:45.724    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3%
00:34:45.724       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:45.724       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:34:45.724       issued rwts: total=4142,4608,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:45.724       latency   : target=0, window=0, percentile=100.00%, depth=128
00:34:45.724  job2: (groupid=0, jobs=1): err= 0: pid=127961: Fri Dec 13 19:19:17 2024
00:34:45.724    read: IOPS=1329, BW=5316KiB/s (5444kB/s)(5380KiB/1012msec)
00:34:45.724      slat (usec): min=6, max=31587, avg=327.18, stdev=2116.23
00:34:45.724      clat (msec): min=3, max=104, avg=37.06, stdev=24.14
00:34:45.724       lat (msec): min=16, max=104, avg=37.39, stdev=24.29
00:34:45.724      clat percentiles (msec):
00:34:45.724       |  1.00th=[   17],  5.00th=[   24], 10.00th=[   25], 20.00th=[   25],
00:34:45.724       | 30.00th=[   26], 40.00th=[   26], 50.00th=[   27], 60.00th=[   28],
00:34:45.724       | 70.00th=[   28], 80.00th=[   35], 90.00th=[   82], 95.00th=[   99],
00:34:45.724       | 99.00th=[  104], 99.50th=[  104], 99.90th=[  106], 99.95th=[  106],
00:34:45.724       | 99.99th=[  106]
00:34:45.724    write: IOPS=1517, BW=6071KiB/s (6217kB/s)(6144KiB/1012msec); 0 zone resets
00:34:45.724      slat (usec): min=15, max=32758, avg=360.93, stdev=2379.58
00:34:45.724      clat (msec): min=18, max=126, avg=48.07, stdev=29.02
00:34:45.724       lat (msec): min=18, max=126, avg=48.43, stdev=29.15
00:34:45.724      clat percentiles (msec):
00:34:45.724       |  1.00th=[   20],  5.00th=[   24], 10.00th=[   25], 20.00th=[   25],
00:34:45.724       | 30.00th=[   26], 40.00th=[   26], 50.00th=[   30], 60.00th=[   43],
00:34:45.724       | 70.00th=[   61], 80.00th=[   82], 90.00th=[  101], 95.00th=[  103],
00:34:45.724       | 99.00th=[  116], 99.50th=[  126], 99.90th=[  127], 99.95th=[  127],
00:34:45.724       | 99.99th=[  127]
00:34:45.724     bw (  KiB/s): min= 4288, max= 8000, per=12.15%, avg=6144.00, stdev=2624.78, samples=2
00:34:45.724     iops        : min= 1072, max= 2000, avg=1536.00, stdev=656.20, samples=2
00:34:45.724    lat (msec)   : 4=0.03%, 20=2.15%, 50=69.04%, 100=22.94%, 250=5.83%
00:34:45.724    cpu          : usr=1.38%, sys=4.95%, ctx=100, majf=0, minf=16
00:34:45.725    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8%
00:34:45.725       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:45.725       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:34:45.725       issued rwts: total=1345,1536,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:45.725       latency   : target=0, window=0, percentile=100.00%, depth=128
00:34:45.725  job3: (groupid=0, jobs=1): err= 0: pid=127962: Fri Dec 13 19:19:17 2024
00:34:45.725    read: IOPS=2482, BW=9929KiB/s (10.2MB/s)(9.79MiB/1010msec)
00:34:45.725      slat (usec): min=6, max=18754, avg=180.31, stdev=1165.77
00:34:45.725      clat (usec): min=6694, max=69554, avg=22414.27, stdev=8994.92
00:34:45.725       lat (usec): min=9634, max=69573, avg=22594.58, stdev=9091.88
00:34:45.725      clat percentiles (usec):
00:34:45.725       |  1.00th=[12649],  5.00th=[13042], 10.00th=[15533], 20.00th=[16712],
00:34:45.725       | 30.00th=[18220], 40.00th=[19006], 50.00th=[20317], 60.00th=[21365],
00:34:45.725       | 70.00th=[22676], 80.00th=[26084], 90.00th=[29492], 95.00th=[43254],
00:34:45.725       | 99.00th=[62129], 99.50th=[63177], 99.90th=[69731], 99.95th=[69731],
00:34:45.725       | 99.99th=[69731]
00:34:45.725    write: IOPS=2534, BW=9.90MiB/s (10.4MB/s)(10.0MiB/1010msec); 0 zone resets
00:34:45.725      slat (usec): min=6, max=17550, avg=206.39, stdev=1106.42
00:34:45.725      clat (usec): min=5012, max=69513, avg=28023.33, stdev=15864.14
00:34:45.725       lat (usec): min=5038, max=69524, avg=28229.71, stdev=15979.82
00:34:45.725      clat percentiles (usec):
00:34:45.725       |  1.00th=[10552],  5.00th=[11863], 10.00th=[12518], 20.00th=[14877],
00:34:45.725       | 30.00th=[16450], 40.00th=[17695], 50.00th=[18744], 60.00th=[21365],
00:34:45.725       | 70.00th=[45351], 80.00th=[49546], 90.00th=[50594], 95.00th=[52691],
00:34:45.725       | 99.00th=[54789], 99.50th=[54789], 99.90th=[69731], 99.95th=[69731],
00:34:45.725       | 99.99th=[69731]
00:34:45.725     bw (  KiB/s): min= 9346, max=11152, per=20.27%, avg=10249.00, stdev=1277.03, samples=2
00:34:45.725     iops        : min= 2336, max= 2788, avg=2562.00, stdev=319.61, samples=2
00:34:45.725    lat (msec)   : 10=0.55%, 20=50.15%, 50=38.74%, 100=10.56%
00:34:45.725    cpu          : usr=2.18%, sys=7.83%, ctx=229, majf=0, minf=13
00:34:45.725    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8%
00:34:45.725       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:45.725       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:34:45.725       issued rwts: total=2507,2560,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:45.725       latency   : target=0, window=0, percentile=100.00%, depth=128
00:34:45.725  
00:34:45.725  Run status group 0 (all jobs):
00:34:45.725     READ: bw=44.7MiB/s (46.9MB/s), 5316KiB/s-16.0MiB/s (5444kB/s-16.8MB/s), io=45.2MiB (47.4MB), run=1002-1012msec
00:34:45.725    WRITE: bw=49.4MiB/s (51.8MB/s), 6071KiB/s-17.8MiB/s (6217kB/s-18.7MB/s), io=50.0MiB (52.4MB), run=1002-1012msec
00:34:45.725  
00:34:45.725  Disk stats (read/write):
00:34:45.725    nvme0n1: ios=3122/3527, merge=0/0, ticks=24395/25381, in_queue=49776, util=88.38%
00:34:45.725    nvme0n2: ios=3633/3867, merge=0/0, ticks=51884/50758, in_queue=102642, util=89.28%
00:34:45.725    nvme0n3: ios=1126/1536, merge=0/0, ticks=8299/17808, in_queue=26107, util=89.18%
00:34:45.725    nvme0n4: ios=1932/2048, merge=0/0, ticks=42814/61872, in_queue=104686, util=89.72%
00:34:45.725   19:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync
00:34:45.725   19:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=127976
00:34:45.725   19:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10
00:34:45.725   19:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3
00:34:45.725  [global]
00:34:45.725  thread=1
00:34:45.725  invalidate=1
00:34:45.725  rw=read
00:34:45.725  time_based=1
00:34:45.725  runtime=10
00:34:45.725  ioengine=libaio
00:34:45.725  direct=1
00:34:45.725  bs=4096
00:34:45.725  iodepth=1
00:34:45.725  norandommap=1
00:34:45.725  numjobs=1
00:34:45.725  
00:34:45.725  [job0]
00:34:45.725  filename=/dev/nvme0n1
00:34:45.725  [job1]
00:34:45.725  filename=/dev/nvme0n2
00:34:45.725  [job2]
00:34:45.725  filename=/dev/nvme0n3
00:34:45.725  [job3]
00:34:45.725  filename=/dev/nvme0n4
00:34:45.725  Could not set queue depth (nvme0n1)
00:34:45.725  Could not set queue depth (nvme0n2)
00:34:45.725  Could not set queue depth (nvme0n3)
00:34:45.725  Could not set queue depth (nvme0n4)
00:34:45.725  job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:34:45.725  job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:34:45.725  job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:34:45.725  job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:34:45.725  fio-3.35
00:34:45.725  Starting 4 threads
00:34:49.009   19:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0
00:34:49.009  fio: pid=128025, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:34:49.009  fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=31629312, buflen=4096
00:34:49.009   19:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0
00:34:49.009  fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=39972864, buflen=4096
00:34:49.009  fio: pid=128024, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:34:49.009   19:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:34:49.009   19:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0
00:34:49.268  fio: pid=128022, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:34:49.268  fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=37658624, buflen=4096
00:34:49.268   19:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:34:49.268   19:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1
00:34:49.527  fio: pid=128023, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:34:49.527  fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=53329920, buflen=4096
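[Editor's note] The "Operation not supported" errors above are expected: the backing bdevs are deleted while the 10-second read job is still in flight, so I/O to the corresponding /dev/nvme0nX devices starts failing. The teardown traced so far amounts to the following sketch, using the same rpc.py path as in the setup note above:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  $rpc bdev_raid_delete concat0       # /dev/nvme0n4 reads begin to fail
  $rpc bdev_raid_delete raid0         # /dev/nvme0n3 reads begin to fail
  for malloc_bdev in Malloc0 Malloc1; do
      $rpc bdev_malloc_delete "$malloc_bdev"   # /dev/nvme0n1 and /dev/nvme0n2 follow
  done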
00:34:49.527  
00:34:49.527  job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=128022: Fri Dec 13 19:19:21 2024
00:34:49.527    read: IOPS=2741, BW=10.7MiB/s (11.2MB/s)(35.9MiB/3354msec)
00:34:49.527      slat (usec): min=6, max=11763, avg=17.47, stdev=198.40
00:34:49.527      clat (usec): min=144, max=5190, avg=345.90, stdev=117.30
00:34:49.527       lat (usec): min=153, max=12044, avg=363.37, stdev=229.96
00:34:49.527      clat percentiles (usec):
00:34:49.527       |  1.00th=[  165],  5.00th=[  215], 10.00th=[  237], 20.00th=[  289],
00:34:49.527       | 30.00th=[  310], 40.00th=[  326], 50.00th=[  343], 60.00th=[  355],
00:34:49.527       | 70.00th=[  375], 80.00th=[  400], 90.00th=[  441], 95.00th=[  486],
00:34:49.527       | 99.00th=[  578], 99.50th=[  668], 99.90th=[ 1029], 99.95th=[ 2474],
00:34:49.527       | 99.99th=[ 5211]
00:34:49.527     bw (  KiB/s): min=10320, max=11256, per=24.42%, avg=10710.67, stdev=339.01, samples=6
00:34:49.527     iops        : min= 2580, max= 2814, avg=2677.67, stdev=84.75, samples=6
00:34:49.527    lat (usec)   : 250=12.58%, 500=83.69%, 750=3.41%, 1000=0.20%
00:34:49.527    lat (msec)   : 2=0.03%, 4=0.05%, 10=0.02%
00:34:49.527    cpu          : usr=0.66%, sys=3.40%, ctx=9206, majf=0, minf=1
00:34:49.527    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:34:49.527       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:49.527       complete  : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:49.527       issued rwts: total=9195,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:49.527       latency   : target=0, window=0, percentile=100.00%, depth=1
00:34:49.527  job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=128023: Fri Dec 13 19:19:21 2024
00:34:49.527    read: IOPS=3595, BW=14.0MiB/s (14.7MB/s)(50.9MiB/3621msec)
00:34:49.527      slat (usec): min=6, max=11311, avg=20.20, stdev=182.25
00:34:49.527      clat (usec): min=141, max=5733, avg=256.52, stdev=95.22
00:34:49.527       lat (usec): min=151, max=12748, avg=276.72, stdev=210.98
00:34:49.527      clat percentiles (usec):
00:34:49.527       |  1.00th=[  155],  5.00th=[  167], 10.00th=[  192], 20.00th=[  217],
00:34:49.527       | 30.00th=[  231], 40.00th=[  243], 50.00th=[  255], 60.00th=[  265],
00:34:49.527       | 70.00th=[  277], 80.00th=[  293], 90.00th=[  314], 95.00th=[  334],
00:34:49.527       | 99.00th=[  379], 99.50th=[  412], 99.90th=[  947], 99.95th=[ 2507],
00:34:49.527       | 99.99th=[ 3556]
00:34:49.527     bw (  KiB/s): min=13072, max=16078, per=32.73%, avg=14352.86, stdev=892.08, samples=7
00:34:49.527     iops        : min= 3268, max= 4019, avg=3588.14, stdev=222.86, samples=7
00:34:49.527    lat (usec)   : 250=46.20%, 500=53.55%, 750=0.12%, 1000=0.02%
00:34:49.527    lat (msec)   : 2=0.04%, 4=0.05%, 10=0.01%
00:34:49.527    cpu          : usr=0.97%, sys=4.59%, ctx=13036, majf=0, minf=2
00:34:49.527    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:34:49.527       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:49.527       complete  : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:49.527       issued rwts: total=13021,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:49.527       latency   : target=0, window=0, percentile=100.00%, depth=1
00:34:49.527  job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=128024: Fri Dec 13 19:19:21 2024
00:34:49.527    read: IOPS=3118, BW=12.2MiB/s (12.8MB/s)(38.1MiB/3130msec)
00:34:49.527      slat (usec): min=7, max=10663, avg=20.93, stdev=134.91
00:34:49.527      clat (usec): min=3, max=7655, avg=298.21, stdev=103.72
00:34:49.527       lat (usec): min=206, max=10931, avg=319.15, stdev=169.61
00:34:49.527      clat percentiles (usec):
00:34:49.527       |  1.00th=[  239],  5.00th=[  253], 10.00th=[  262], 20.00th=[  269],
00:34:49.527       | 30.00th=[  277], 40.00th=[  285], 50.00th=[  293], 60.00th=[  302],
00:34:49.527       | 70.00th=[  310], 80.00th=[  318], 90.00th=[  334], 95.00th=[  351],
00:34:49.527       | 99.00th=[  392], 99.50th=[  420], 99.90th=[  922], 99.95th=[ 2474],
00:34:49.527       | 99.99th=[ 7635]
00:34:49.527     bw (  KiB/s): min=12056, max=12792, per=28.53%, avg=12509.33, stdev=267.91, samples=6
00:34:49.527     iops        : min= 3014, max= 3198, avg=3127.33, stdev=66.98, samples=6
00:34:49.527    lat (usec)   : 4=0.01%, 250=3.48%, 500=96.21%, 750=0.16%, 1000=0.04%
00:34:49.527    lat (msec)   : 2=0.02%, 4=0.04%, 10=0.02%
00:34:49.527    cpu          : usr=0.86%, sys=4.60%, ctx=9768, majf=0, minf=2
00:34:49.527    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:34:49.527       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:49.527       complete  : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:49.527       issued rwts: total=9760,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:49.527       latency   : target=0, window=0, percentile=100.00%, depth=1
00:34:49.527  job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=128025: Fri Dec 13 19:19:21 2024
00:34:49.527    read: IOPS=2672, BW=10.4MiB/s (10.9MB/s)(30.2MiB/2890msec)
00:34:49.527      slat (nsec): min=10188, max=68113, avg=15673.25, stdev=5674.84
00:34:49.527      clat (usec): min=167, max=7460, avg=356.93, stdev=119.63
00:34:49.527       lat (usec): min=187, max=7481, avg=372.61, stdev=119.86
00:34:49.527      clat percentiles (usec):
00:34:49.527       |  1.00th=[  212],  5.00th=[  245], 10.00th=[  277], 20.00th=[  302],
00:34:49.527       | 30.00th=[  318], 40.00th=[  334], 50.00th=[  347], 60.00th=[  363],
00:34:49.527       | 70.00th=[  383], 80.00th=[  404], 90.00th=[  441], 95.00th=[  482],
00:34:49.527       | 99.00th=[  570], 99.50th=[  635], 99.90th=[ 1012], 99.95th=[ 1336],
00:34:49.527       | 99.99th=[ 7439]
00:34:49.527     bw (  KiB/s): min=10344, max=11256, per=24.43%, avg=10713.60, stdev=374.14, samples=5
00:34:49.527     iops        : min= 2586, max= 2814, avg=2678.40, stdev=93.54, samples=5
00:34:49.527    lat (usec)   : 250=5.57%, 500=90.88%, 750=3.28%, 1000=0.16%
00:34:49.527    lat (msec)   : 2=0.06%, 4=0.03%, 10=0.01%
00:34:49.527    cpu          : usr=0.90%, sys=3.67%, ctx=7726, majf=0, minf=2
00:34:49.527    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:34:49.527       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:49.527       complete  : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:49.527       issued rwts: total=7723,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:49.527       latency   : target=0, window=0, percentile=100.00%, depth=1
00:34:49.527  
00:34:49.527  Run status group 0 (all jobs):
00:34:49.527     READ: bw=42.8MiB/s (44.9MB/s), 10.4MiB/s-14.0MiB/s (10.9MB/s-14.7MB/s), io=155MiB (163MB), run=2890-3621msec
00:34:49.527  
00:34:49.527  Disk stats (read/write):
00:34:49.527    nvme0n1: ios=8380/0, merge=0/0, ticks=2857/0, in_queue=2857, util=95.72%
00:34:49.527    nvme0n2: ios=13012/0, merge=0/0, ticks=3374/0, in_queue=3374, util=95.56%
00:34:49.527    nvme0n3: ios=9730/0, merge=0/0, ticks=2945/0, in_queue=2945, util=96.24%
00:34:49.527    nvme0n4: ios=7671/0, merge=0/0, ticks=2731/0, in_queue=2731, util=96.63%
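The four jobs summarized above all use the shape declared at the top of the fio output: 4096-byte reads through libaio at iodepth=1, one job per namespace, and each job ends with err=95 because the backing bdevs are deleted while I/O is in flight (the hotplug behavior this test expects, per the "fio failed as expected" line further down). As a rough standalone sketch of one such job — not the actual job file used by fio.sh, and assuming the /dev/nvme0n1 device named in the error lines above — the equivalent command-line invocation would be:

    fio --name=job0 --filename=/dev/nvme0n1 --rw=read --bs=4096 \
        --ioengine=libaio --iodepth=1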
00:34:49.527   19:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:34:49.527   19:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2
00:34:50.094   19:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:34:50.094   19:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3
00:34:50.352   19:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:34:50.352   19:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4
00:34:50.611   19:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:34:50.611   19:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5
00:34:50.869   19:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:34:50.869   19:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6
00:34:51.128   19:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0
00:34:51.128   19:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 127976
00:34:51.128   19:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4
00:34:51.128   19:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:34:51.128  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:34:51.128   19:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:34:51.128   19:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0
00:34:51.128   19:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:34:51.128   19:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:34:51.128   19:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:34:51.128   19:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:34:51.128   19:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0
00:34:51.128   19:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']'
00:34:51.128  nvmf hotplug test: fio failed as expected
00:34:51.128   19:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected'
00:34:51.128   19:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:34:51.387   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state
00:34:51.387   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state
00:34:51.387   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state
00:34:51.387   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT
00:34:51.387   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini
00:34:51.387   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup
00:34:51.387   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync
00:34:51.387   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:34:51.387   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e
00:34:51.387   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20}
00:34:51.387   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:34:51.387  rmmod nvme_tcp
00:34:51.387  rmmod nvme_fabrics
00:34:51.387  rmmod nvme_keyring
00:34:51.387   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:34:51.387   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e
00:34:51.387   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0
00:34:51.387   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 127502 ']'
00:34:51.387   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 127502
00:34:51.387   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 127502 ']'
00:34:51.387   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 127502
00:34:51.387    19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname
00:34:51.387   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:34:51.387    19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 127502
00:34:51.387   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:34:51.387   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:34:51.387  killing process with pid 127502
00:34:51.387   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 127502'
00:34:51.388   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 127502
00:34:51.388   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 127502
00:34:51.646   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:34:51.646   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:34:51.646   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:34:51.646   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr
00:34:51.646   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save
00:34:51.646   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:34:51.646   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore
00:34:51.646   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:34:51.646   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:34:51.646   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:34:51.905   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:34:51.905   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:34:51.905   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:34:51.905   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:34:51.905   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:34:51.905   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:34:51.905   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:34:51.905   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:34:51.905   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:34:51.905   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:34:51.905   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:34:51.905   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:34:51.905   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns
00:34:51.905   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:34:51.905   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:34:51.905    19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:34:51.905   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0
00:34:51.905  
00:34:51.905  real	0m19.989s
00:34:51.905  user	0m59.205s
00:34:51.905  sys	0m10.171s
00:34:51.905   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable
00:34:51.905   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:34:51.905  ************************************
00:34:51.905  END TEST nvmf_fio_target
00:34:51.905  ************************************
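Condensed, the teardown traced above (NVMe disconnect, subsystem removal, fio state-file cleanup, module unload, target shutdown) amounts to the following sketch; the pid, paths and NQN are the ones from this run, the veth/bridge teardown done by nvmf_veth_fini is omitted, and the bare kill/wait pair is only a stand-in for the killprocess helper (wait works here because nvmf_tgt was started from the same shell):

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    rm -f ./local-job0-0-verify.state ./local-job1-1-verify.state ./local-job2-2-verify.state
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    kill 127502 && wait 127502    # nvmf_tgt pid for this run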
00:34:52.165   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode
00:34:52.165   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:34:52.165   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:34:52.165   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:34:52.165  ************************************
00:34:52.165  START TEST nvmf_bdevio
00:34:52.165  ************************************
00:34:52.165   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode
00:34:52.165  * Looking for test storage...
00:34:52.165  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:34:52.165    19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:34:52.165     19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:34:52.165     19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version
00:34:52.165    19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:34:52.165    19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:34:52.165    19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l
00:34:52.165    19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l
00:34:52.165    19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-:
00:34:52.165    19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1
00:34:52.165    19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-:
00:34:52.165    19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2
00:34:52.165    19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<'
00:34:52.165    19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2
00:34:52.165    19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1
00:34:52.165    19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:34:52.165    19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in
00:34:52.165    19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1
00:34:52.165    19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 ))
00:34:52.165    19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:34:52.165     19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1
00:34:52.165     19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1
00:34:52.165     19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:34:52.165     19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1
00:34:52.165    19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1
00:34:52.165     19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2
00:34:52.165     19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2
00:34:52.165     19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:34:52.165     19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2
00:34:52.165    19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2
00:34:52.165    19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:34:52.165    19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:34:52.165    19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0
00:34:52.165    19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:34:52.165    19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:34:52.165  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:52.165  		--rc genhtml_branch_coverage=1
00:34:52.165  		--rc genhtml_function_coverage=1
00:34:52.165  		--rc genhtml_legend=1
00:34:52.165  		--rc geninfo_all_blocks=1
00:34:52.165  		--rc geninfo_unexecuted_blocks=1
00:34:52.165  		
00:34:52.165  		'
00:34:52.165    19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:34:52.165  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:52.165  		--rc genhtml_branch_coverage=1
00:34:52.165  		--rc genhtml_function_coverage=1
00:34:52.165  		--rc genhtml_legend=1
00:34:52.165  		--rc geninfo_all_blocks=1
00:34:52.165  		--rc geninfo_unexecuted_blocks=1
00:34:52.165  		
00:34:52.165  		'
00:34:52.165    19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:34:52.165  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:52.165  		--rc genhtml_branch_coverage=1
00:34:52.165  		--rc genhtml_function_coverage=1
00:34:52.165  		--rc genhtml_legend=1
00:34:52.165  		--rc geninfo_all_blocks=1
00:34:52.165  		--rc geninfo_unexecuted_blocks=1
00:34:52.165  		
00:34:52.165  		'
00:34:52.165    19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:34:52.165  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:52.165  		--rc genhtml_branch_coverage=1
00:34:52.165  		--rc genhtml_function_coverage=1
00:34:52.165  		--rc genhtml_legend=1
00:34:52.165  		--rc geninfo_all_blocks=1
00:34:52.166  		--rc geninfo_unexecuted_blocks=1
00:34:52.166  		
00:34:52.166  		'
00:34:52.166   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:34:52.166     19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s
00:34:52.166    19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:34:52.166    19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:34:52.166    19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:34:52.166    19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:34:52.166    19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:34:52.166    19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:34:52.166    19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:34:52.166    19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:34:52.166    19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:34:52.166     19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:34:52.166    19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:34:52.166    19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:34:52.166    19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:34:52.166    19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:34:52.166    19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:34:52.166    19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:34:52.166    19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:34:52.166     19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob
00:34:52.166     19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:34:52.166     19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:34:52.166     19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:34:52.166      19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:52.166      19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:52.166      19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:52.166      19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH
00:34:52.166      19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:52.166    19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0
00:34:52.166    19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:34:52.166    19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:34:52.166    19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:34:52.166    19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:34:52.166    19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:34:52.166    19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:34:52.166    19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:34:52.166    19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:34:52.166    19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:34:52.166    19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0
00:34:52.166   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64
00:34:52.166   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:34:52.166   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit
00:34:52.166   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:34:52.166   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:34:52.166   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs
00:34:52.166   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no
00:34:52.166   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns
00:34:52.166   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:34:52.166   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:34:52.166    19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:34:52.166   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:34:52.166   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:34:52.166   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:34:52.166   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:34:52.166   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:34:52.166   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init
00:34:52.166   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:34:52.166   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:34:52.166   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:34:52.166   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:34:52.166   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:34:52.166   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:34:52.166   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:34:52.166   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:34:52.166   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:34:52.166   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:34:52.166   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:34:52.166   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:34:52.166   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:34:52.166   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:34:52.166   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:34:52.166   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:34:52.166   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:34:52.166  Cannot find device "nvmf_init_br"
00:34:52.166   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@162 -- # true
00:34:52.166   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:34:52.426  Cannot find device "nvmf_init_br2"
00:34:52.426   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@163 -- # true
00:34:52.426   19:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:34:52.426  Cannot find device "nvmf_tgt_br"
00:34:52.426   19:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@164 -- # true
00:34:52.426   19:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:34:52.426  Cannot find device "nvmf_tgt_br2"
00:34:52.426   19:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@165 -- # true
00:34:52.426   19:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:34:52.426  Cannot find device "nvmf_init_br"
00:34:52.426   19:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@166 -- # true
00:34:52.426   19:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:34:52.426  Cannot find device "nvmf_init_br2"
00:34:52.426   19:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@167 -- # true
00:34:52.426   19:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:34:52.426  Cannot find device "nvmf_tgt_br"
00:34:52.426   19:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@168 -- # true
00:34:52.426   19:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:34:52.426  Cannot find device "nvmf_tgt_br2"
00:34:52.426   19:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@169 -- # true
00:34:52.426   19:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:34:52.426  Cannot find device "nvmf_br"
00:34:52.426   19:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@170 -- # true
00:34:52.426   19:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:34:52.426  Cannot find device "nvmf_init_if"
00:34:52.426   19:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@171 -- # true
00:34:52.426   19:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:34:52.426  Cannot find device "nvmf_init_if2"
00:34:52.426   19:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@172 -- # true
00:34:52.426   19:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:34:52.426  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:34:52.426   19:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@173 -- # true
00:34:52.426   19:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:34:52.426  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:34:52.426   19:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@174 -- # true
00:34:52.426   19:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:34:52.426   19:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:34:52.426   19:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:34:52.426   19:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:34:52.426   19:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:34:52.426   19:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:34:52.426   19:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:34:52.426   19:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:34:52.426   19:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:34:52.426   19:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:34:52.426   19:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:34:52.426   19:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:34:52.426   19:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:34:52.426   19:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:34:52.426   19:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:34:52.426   19:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:34:52.685   19:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:34:52.685   19:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:34:52.685   19:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:34:52.685   19:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:34:52.685   19:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:34:52.685   19:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:34:52.685   19:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:34:52.685   19:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:34:52.685   19:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:34:52.685   19:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:34:52.685   19:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:34:52.685   19:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:34:52.685   19:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:34:52.685   19:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:34:52.685   19:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:34:52.685   19:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:34:52.685   19:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:34:52.685  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:34:52.685  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.098 ms
00:34:52.685  
00:34:52.685  --- 10.0.0.3 ping statistics ---
00:34:52.685  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:52.685  rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms
00:34:52.685   19:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:34:52.685  PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:34:52.685  64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.088 ms
00:34:52.685  
00:34:52.685  --- 10.0.0.4 ping statistics ---
00:34:52.685  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:52.685  rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms
00:34:52.685   19:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:34:52.685  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:34:52.685  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms
00:34:52.685  
00:34:52.685  --- 10.0.0.1 ping statistics ---
00:34:52.685  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:52.685  rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms
00:34:52.685   19:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:34:52.685  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:34:52.685  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms
00:34:52.685  
00:34:52.685  --- 10.0.0.2 ping statistics ---
00:34:52.685  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:52.685  rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms
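The four pings above validate the veth/bridge topology that nvmf_veth_init just built. Keeping only the first initiator/target interface pair (the *_if2/*_br2 pair is configured identically) and simplifying the iptables comment, the traced commands condense to this standalone sketch (run as root, on a disposable test host only):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link set nvmf_tgt_br up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # same reachability check as above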
00:34:52.685   19:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:34:52.685   19:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0
00:34:52.685   19:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:34:52.685   19:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:34:52.685   19:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:34:52.685   19:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:34:52.685   19:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:34:52.685   19:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:34:52.685   19:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:34:52.685   19:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78
00:34:52.685   19:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:34:52.685   19:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable
00:34:52.685   19:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:34:52.685   19:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=128391
00:34:52.685   19:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78
00:34:52.685   19:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 128391
00:34:52.685   19:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 128391 ']'
00:34:52.685   19:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:34:52.685   19:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100
00:34:52.685  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:34:52.685   19:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:34:52.685   19:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable
00:34:52.685   19:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:34:52.685  [2024-12-13 19:19:24.479073] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:34:52.685  [2024-12-13 19:19:24.480437] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:34:52.685  [2024-12-13 19:19:24.481081] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:34:52.944  [2024-12-13 19:19:24.631933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:34:52.944  [2024-12-13 19:19:24.671163] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:34:52.944  [2024-12-13 19:19:24.671215] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:34:52.944  [2024-12-13 19:19:24.671268] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:34:52.944  [2024-12-13 19:19:24.671276] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:34:52.944  [2024-12-13 19:19:24.671282] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:34:52.944  [2024-12-13 19:19:24.672851] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4
00:34:52.944  [2024-12-13 19:19:24.672987] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5
00:34:52.944  [2024-12-13 19:19:24.673078] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6
00:34:52.944  [2024-12-13 19:19:24.673160] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:34:52.944  [2024-12-13 19:19:24.762985] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:34:52.944  [2024-12-13 19:19:24.763353] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
00:34:52.944  [2024-12-13 19:19:24.763492] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:34:52.944  [2024-12-13 19:19:24.764072] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:34:52.944  [2024-12-13 19:19:24.764577] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode.
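The startup notices above come from the target process launched by nvmf/common.sh@508. Stripped of the helpers, the launch plus the readiness wait can be sketched as below; the polling loop is only a simplified stand-in for waitforlisten, using rpc_get_methods against the default /var/tmp/spdk.sock socket purely as a liveness probe:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &
    nvmfpid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done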
00:34:53.882   19:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:34:53.882   19:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0
00:34:53.882   19:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:34:53.882   19:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable
00:34:53.882   19:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:34:53.882   19:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:34:53.882   19:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:34:53.882   19:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:53.882   19:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:34:53.882  [2024-12-13 19:19:25.502336] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:34:53.882   19:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:53.882   19:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:34:53.882   19:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:53.882   19:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:34:53.882  Malloc0
00:34:53.882   19:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:53.882   19:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:34:53.882   19:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:53.882   19:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:34:53.882   19:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:53.882   19:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:34:53.882   19:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:53.882   19:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:34:53.882   19:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:53.882   19:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:34:53.882   19:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:53.882   19:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:34:53.882  [2024-12-13 19:19:25.590446] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:34:53.882   19:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
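The trace above brings up the NVMe-oF/TCP target used by bdevio: a TCP transport, a 64 MiB Malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 carrying that namespace, and a listener on 10.0.0.3:4420. A condensed sketch of the same sequence, assuming rpc_cmd resolves to SPDK's scripts/rpc.py against the already-running target on the default /var/tmp/spdk.sock:

  rpc.py nvmf_create_transport -t tcp -o -u 8192    # same transport flags as bdevio.sh@18 above
  rpc.py bdev_malloc_create 64 512 -b Malloc0       # 64 MiB RAM-backed bdev, 512 B blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420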
00:34:53.882   19:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62
00:34:53.882    19:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json
00:34:53.882    19:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=()
00:34:53.882    19:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config
00:34:53.882    19:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:34:53.882    19:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:34:53.882  {
00:34:53.882    "params": {
00:34:53.882      "name": "Nvme$subsystem",
00:34:53.882      "trtype": "$TEST_TRANSPORT",
00:34:53.882      "traddr": "$NVMF_FIRST_TARGET_IP",
00:34:53.882      "adrfam": "ipv4",
00:34:53.882      "trsvcid": "$NVMF_PORT",
00:34:53.882      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:34:53.882      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:34:53.882      "hdgst": ${hdgst:-false},
00:34:53.882      "ddgst": ${ddgst:-false}
00:34:53.882    },
00:34:53.882    "method": "bdev_nvme_attach_controller"
00:34:53.882  }
00:34:53.882  EOF
00:34:53.882  )")
00:34:53.882     19:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat
00:34:53.882    19:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq .
00:34:53.882     19:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=,
00:34:53.882     19:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:34:53.882    "params": {
00:34:53.882      "name": "Nvme1",
00:34:53.882      "trtype": "tcp",
00:34:53.882      "traddr": "10.0.0.3",
00:34:53.882      "adrfam": "ipv4",
00:34:53.882      "trsvcid": "4420",
00:34:53.882      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:34:53.882      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:34:53.882      "hdgst": false,
00:34:53.882      "ddgst": false
00:34:53.882    },
00:34:53.882    "method": "bdev_nvme_attach_controller"
00:34:53.882  }'
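gen_nvmf_target_json (nvmf/common.sh@560 onwards in the trace) expands $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP and $NVMF_PORT into the bdev_nvme_attach_controller config printed above; bdevio.sh@24 hands it to the bdevio binary through process substitution, which is why the trace shows --json /dev/fd/62. A minimal sketch of that wiring, with the path shortened to be repo-relative:

  # the generated JSON never touches disk; bash exposes it as /dev/fd/N
  test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json)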
00:34:53.882  [2024-12-13 19:19:25.659180] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:34:53.882  [2024-12-13 19:19:25.659296] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128445 ]
00:34:54.141  [2024-12-13 19:19:25.804450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:34:54.141  [2024-12-13 19:19:25.846465] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:34:54.141  [2024-12-13 19:19:25.846632] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:34:54.141  [2024-12-13 19:19:25.846632] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:34:54.400  I/O targets:
00:34:54.400    Nvme1n1: 131072 blocks of 512 bytes (64 MiB)
00:34:54.400  
00:34:54.400  
00:34:54.400       CUnit - A unit testing framework for C - Version 2.1-3
00:34:54.400       http://cunit.sourceforge.net/
00:34:54.400  
00:34:54.400  
00:34:54.400  Suite: bdevio tests on: Nvme1n1
00:34:54.400    Test: blockdev write read block ...passed
00:34:54.400    Test: blockdev write zeroes read block ...passed
00:34:54.400    Test: blockdev write zeroes read no split ...passed
00:34:54.400    Test: blockdev write zeroes read split ...passed
00:34:54.400    Test: blockdev write zeroes read split partial ...passed
00:34:54.400    Test: blockdev reset ...[2024-12-13 19:19:26.175938] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:34:54.400  [2024-12-13 19:19:26.176036] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17531b0 (9): Bad file descriptor
00:34:54.400  [2024-12-13 19:19:26.179584] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful.
00:34:54.400  passed
00:34:54.400    Test: blockdev write read 8 blocks ...passed
00:34:54.400    Test: blockdev write read size > 128k ...passed
00:34:54.400    Test: blockdev write read invalid size ...passed
00:34:54.400    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:34:54.400    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:34:54.400    Test: blockdev write read max offset ...passed
00:34:54.658    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:34:54.658    Test: blockdev writev readv 8 blocks ...passed
00:34:54.658    Test: blockdev writev readv 30 x 1block ...passed
00:34:54.658    Test: blockdev writev readv block ...passed
00:34:54.658    Test: blockdev writev readv size > 128k ...passed
00:34:54.658    Test: blockdev writev readv size > 128k in two iovs ...passed
00:34:54.658    Test: blockdev comparev and writev ...[2024-12-13 19:19:26.352215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:34:54.658  [2024-12-13 19:19:26.352273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:54.658  [2024-12-13 19:19:26.352292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:34:54.658  [2024-12-13 19:19:26.352303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:54.658  [2024-12-13 19:19:26.352970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:34:54.658  [2024-12-13 19:19:26.352995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:34:54.658  [2024-12-13 19:19:26.353014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:34:54.658  [2024-12-13 19:19:26.353024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:34:54.658  [2024-12-13 19:19:26.353577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:34:54.658  [2024-12-13 19:19:26.353614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:34:54.658  [2024-12-13 19:19:26.353629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:34:54.658  [2024-12-13 19:19:26.353639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:34:54.658  [2024-12-13 19:19:26.354234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:34:54.658  [2024-12-13 19:19:26.354290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:34:54.658  [2024-12-13 19:19:26.354306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:34:54.658  [2024-12-13 19:19:26.354316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:34:54.658  passed
00:34:54.658    Test: blockdev nvme passthru rw ...passed
00:34:54.658    Test: blockdev nvme passthru vendor specific ...[2024-12-13 19:19:26.437543] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:34:54.658  [2024-12-13 19:19:26.437568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:34:54.658  [2024-12-13 19:19:26.437890] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:34:54.658  [2024-12-13 19:19:26.437916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:34:54.658  [2024-12-13 19:19:26.438142] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:34:54.658  [2024-12-13 19:19:26.438166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:34:54.658  [2024-12-13 19:19:26.438440] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:34:54.658  [2024-12-13 19:19:26.438464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:34:54.658  passed
00:34:54.658    Test: blockdev nvme admin passthru ...passed
00:34:54.917    Test: blockdev copy ...passed
00:34:54.917  
00:34:54.917  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:34:54.917                suites      1      1    n/a      0        0
00:34:54.917                 tests     23     23     23      0        0
00:34:54.917               asserts    152    152    152      0      n/a
00:34:54.917  
00:34:54.917  Elapsed time =    0.875 seconds
00:34:55.175   19:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:34:55.175   19:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:55.175   19:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:34:55.175   19:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:55.175   19:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT
00:34:55.175   19:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini
00:34:55.175   19:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup
00:34:55.175   19:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync
00:34:55.175   19:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:34:55.175   19:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e
00:34:55.175   19:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20}
00:34:55.175   19:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:34:55.175  rmmod nvme_tcp
00:34:55.175  rmmod nvme_fabrics
00:34:55.175  rmmod nvme_keyring
00:34:55.175   19:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:34:55.175   19:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e
00:34:55.175   19:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0
00:34:55.176   19:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 128391 ']'
00:34:55.176   19:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 128391
00:34:55.176   19:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 128391 ']'
00:34:55.176   19:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 128391
00:34:55.176    19:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname
00:34:55.176   19:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:34:55.176    19:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 128391
00:34:55.176   19:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3
00:34:55.176   19:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']'
00:34:55.176  killing process with pid 128391
00:34:55.176   19:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 128391'
00:34:55.176   19:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 128391
00:34:55.176   19:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 128391
00:34:55.434   19:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:34:55.434   19:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:34:55.434   19:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:34:55.434   19:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr
00:34:55.434   19:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save
00:34:55.434   19:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:34:55.434   19:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore
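iptr (nvmf/common.sh@791 above) is how the firewall rules added during setup are removed: every rule the tests insert carries an 'SPDK_NVMF:' comment, so cleanup can drop the tagged lines from the saved ruleset and restore the rest. The pipeline, as traced:

  iptables-save | grep -v SPDK_NVMF | iptables-restore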
00:34:55.435   19:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:34:55.435   19:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:34:55.435   19:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:34:55.435   19:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:34:55.435   19:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:34:55.435   19:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:34:55.435   19:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:34:55.435   19:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:34:55.435   19:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:34:55.435   19:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:34:55.435   19:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:34:55.693   19:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:34:55.693   19:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:34:55.693   19:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:34:55.693   19:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:34:55.693   19:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns
00:34:55.693   19:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:34:55.693   19:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:34:55.693    19:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:34:55.693   19:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0
00:34:55.693  
00:34:55.693  real	0m3.625s
00:34:55.693  user	0m7.998s
00:34:55.693  sys	0m1.253s
00:34:55.693   19:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable
00:34:55.693   19:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:34:55.693  ************************************
00:34:55.693  END TEST nvmf_bdevio
00:34:55.693  ************************************
00:34:55.693   19:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:34:55.693  
00:34:55.693  real	3m30.622s
00:34:55.693  user	9m33.438s
00:34:55.693  sys	1m19.077s
00:34:55.693   19:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable
00:34:55.693   19:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:34:55.693  ************************************
00:34:55.693  END TEST nvmf_target_core_interrupt_mode
00:34:55.693  ************************************
00:34:55.693   19:19:27 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /home/vagrant/spdk_repo/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode
00:34:55.693   19:19:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:34:55.693   19:19:27 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable
00:34:55.693   19:19:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:34:55.693  ************************************
00:34:55.693  START TEST nvmf_interrupt
00:34:55.693  ************************************
00:34:55.693   19:19:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode
00:34:55.952  * Looking for test storage...
00:34:55.952  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:34:55.952    19:19:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:34:55.952     19:19:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version
00:34:55.952     19:19:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:34:55.952    19:19:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:34:55.952    19:19:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:34:55.952    19:19:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l
00:34:55.952    19:19:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l
00:34:55.952    19:19:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-:
00:34:55.952    19:19:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1
00:34:55.952    19:19:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-:
00:34:55.952    19:19:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2
00:34:55.952    19:19:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<'
00:34:55.952    19:19:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2
00:34:55.952    19:19:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1
00:34:55.952    19:19:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:34:55.952    19:19:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in
00:34:55.952    19:19:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1
00:34:55.952    19:19:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 ))
00:34:55.952    19:19:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:34:55.952     19:19:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1
00:34:55.952     19:19:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1
00:34:55.952     19:19:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:34:55.952     19:19:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1
00:34:55.952    19:19:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1
00:34:55.952     19:19:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2
00:34:55.952     19:19:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2
00:34:55.952     19:19:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:34:55.952     19:19:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2
00:34:55.952    19:19:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2
00:34:55.952    19:19:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:34:55.952    19:19:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:34:55.952    19:19:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0
00:34:55.952    19:19:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:34:55.952    19:19:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:34:55.952  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:55.952  		--rc genhtml_branch_coverage=1
00:34:55.952  		--rc genhtml_function_coverage=1
00:34:55.952  		--rc genhtml_legend=1
00:34:55.952  		--rc geninfo_all_blocks=1
00:34:55.952  		--rc geninfo_unexecuted_blocks=1
00:34:55.952  		
00:34:55.952  		'
00:34:55.952    19:19:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:34:55.952  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:55.952  		--rc genhtml_branch_coverage=1
00:34:55.952  		--rc genhtml_function_coverage=1
00:34:55.952  		--rc genhtml_legend=1
00:34:55.952  		--rc geninfo_all_blocks=1
00:34:55.952  		--rc geninfo_unexecuted_blocks=1
00:34:55.952  		
00:34:55.952  		'
00:34:55.952    19:19:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:34:55.952  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:55.952  		--rc genhtml_branch_coverage=1
00:34:55.952  		--rc genhtml_function_coverage=1
00:34:55.952  		--rc genhtml_legend=1
00:34:55.952  		--rc geninfo_all_blocks=1
00:34:55.952  		--rc geninfo_unexecuted_blocks=1
00:34:55.952  		
00:34:55.952  		'
00:34:55.952    19:19:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:34:55.952  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:55.952  		--rc genhtml_branch_coverage=1
00:34:55.952  		--rc genhtml_function_coverage=1
00:34:55.952  		--rc genhtml_legend=1
00:34:55.952  		--rc geninfo_all_blocks=1
00:34:55.952  		--rc geninfo_unexecuted_blocks=1
00:34:55.952  		
00:34:55.952  		'
00:34:55.952   19:19:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:34:55.952     19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s
00:34:55.952    19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:34:55.952    19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:34:55.952    19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:34:55.952    19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:34:55.952    19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:34:55.952    19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:34:55.952    19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:34:55.952    19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:34:55.952    19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:34:55.952     19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:34:55.952    19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:34:55.952    19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:34:55.952    19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:34:55.952    19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:34:55.952    19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:34:55.952    19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:34:55.952    19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:34:55.952     19:19:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob
00:34:55.952     19:19:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:34:55.952     19:19:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:34:55.952     19:19:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:34:55.952      19:19:27 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:55.952      19:19:27 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:55.952      19:19:27 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:55.952      19:19:27 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH
00:34:55.952      19:19:27 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:55.952    19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0
00:34:55.952    19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:34:55.952    19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:34:55.952    19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:34:55.952    19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:34:55.952    19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:34:55.952    19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:34:55.952    19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:34:55.952    19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:34:55.952    19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:34:55.952    19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0
00:34:55.953   19:19:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/common.sh
00:34:55.953   19:19:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1
00:34:55.953   19:19:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit
00:34:55.953   19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:34:55.953   19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:34:55.953   19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs
00:34:55.953   19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no
00:34:55.953   19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns
00:34:55.953   19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:34:55.953   19:19:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:34:55.953    19:19:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:34:55.953   19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:34:55.953   19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:34:55.953   19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:34:55.953   19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:34:55.953   19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:34:55.953   19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@460 -- # nvmf_veth_init
00:34:55.953   19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:34:55.953   19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:34:55.953   19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:34:55.953   19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:34:55.953   19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:34:55.953   19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:34:55.953   19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:34:55.953   19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:34:55.953   19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:34:55.953   19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:34:55.953   19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:34:55.953   19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:34:55.953   19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:34:55.953   19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:34:55.953   19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:34:55.953   19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:34:55.953   19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:34:55.953  Cannot find device "nvmf_init_br"
00:34:55.953   19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@162 -- # true
00:34:55.953   19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:34:55.953  Cannot find device "nvmf_init_br2"
00:34:55.953   19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@163 -- # true
00:34:55.953   19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:34:55.953  Cannot find device "nvmf_tgt_br"
00:34:55.953   19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@164 -- # true
00:34:55.953   19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:34:55.953  Cannot find device "nvmf_tgt_br2"
00:34:55.953   19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@165 -- # true
00:34:55.953   19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:34:55.953  Cannot find device "nvmf_init_br"
00:34:55.953   19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@166 -- # true
00:34:55.953   19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:34:56.210  Cannot find device "nvmf_init_br2"
00:34:56.210   19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@167 -- # true
00:34:56.210   19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:34:56.210  Cannot find device "nvmf_tgt_br"
00:34:56.210   19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@168 -- # true
00:34:56.210   19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:34:56.210  Cannot find device "nvmf_tgt_br2"
00:34:56.210   19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@169 -- # true
00:34:56.210   19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:34:56.210  Cannot find device "nvmf_br"
00:34:56.210   19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@170 -- # true
00:34:56.210   19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:34:56.210  Cannot find device "nvmf_init_if"
00:34:56.210   19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@171 -- # true
00:34:56.211   19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:34:56.211  Cannot find device "nvmf_init_if2"
00:34:56.211   19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@172 -- # true
00:34:56.211   19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:34:56.211  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:34:56.211   19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@173 -- # true
00:34:56.211   19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:34:56.211  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:34:56.211   19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@174 -- # true
00:34:56.211   19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:34:56.211   19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:34:56.211   19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:34:56.211   19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:34:56.211   19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:34:56.211   19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:34:56.211   19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:34:56.211   19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:34:56.211   19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:34:56.211   19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:34:56.211   19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:34:56.211   19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:34:56.211   19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:34:56.211   19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:34:56.211   19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:34:56.211   19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:34:56.211   19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:34:56.211   19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:34:56.211   19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:34:56.211   19:19:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:34:56.211   19:19:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:34:56.211   19:19:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:34:56.211   19:19:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:34:56.470   19:19:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:34:56.470   19:19:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:34:56.470   19:19:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:34:56.470   19:19:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:34:56.470   19:19:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:34:56.470   19:19:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:34:56.470   19:19:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:34:56.470   19:19:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:34:56.470   19:19:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:34:56.470   19:19:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:34:56.470  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:34:56.470  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms
00:34:56.470  
00:34:56.470  --- 10.0.0.3 ping statistics ---
00:34:56.470  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:56.470  rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms
00:34:56.470   19:19:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:34:56.470  PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:34:56.470  64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.061 ms
00:34:56.470  
00:34:56.470  --- 10.0.0.4 ping statistics ---
00:34:56.470  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:56.470  rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms
00:34:56.470   19:19:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:34:56.470  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:34:56.470  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms
00:34:56.470  
00:34:56.470  --- 10.0.0.1 ping statistics ---
00:34:56.470  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:56.470  rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms
00:34:56.470   19:19:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:34:56.470  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:34:56.470  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms
00:34:56.470  
00:34:56.470  --- 10.0.0.2 ping statistics ---
00:34:56.470  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:56.470  rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms
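nvmf_veth_init builds the test topology on this single host: veth pairs for the initiator (nvmf_init_if/nvmf_init_br) and the target (nvmf_tgt_if/nvmf_tgt_br), the target ends moved into the nvmf_tgt_ns_spdk namespace, the bridge ends enslaved to nvmf_br, addresses 10.0.0.1 through 10.0.0.4 assigned, port 4420 opened with SPDK_NVMF-tagged iptables rules, and cross-namespace pings to confirm reachability. A trimmed sketch of the commands traced above (the second if2/br2 pair and the individual link-up calls are analogous and omitted; the full iptables comment text, abbreviated here as 'SPDK_NVMF:...', is shown verbatim in the trace):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator half stays in the root netns
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target half is pushed into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br                        # the bridge joins the two halves
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:...'
  ping -c 1 10.0.0.3                                             # root netns reaches the target address
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1              # and the namespace reaches back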
00:34:56.470   19:19:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:34:56.470   19:19:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@461 -- # return 0
00:34:56.470   19:19:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:34:56.470   19:19:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:34:56.470   19:19:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:34:56.470   19:19:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:34:56.470   19:19:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:34:56.470   19:19:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:34:56.470   19:19:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:34:56.470   19:19:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3
00:34:56.470   19:19:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:34:56.470   19:19:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable
00:34:56.470   19:19:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:34:56.470   19:19:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=128697
00:34:56.470   19:19:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3
00:34:56.470   19:19:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 128697
00:34:56.470   19:19:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 128697 ']'
00:34:56.470   19:19:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:34:56.470   19:19:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100
00:34:56.470   19:19:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:34:56.470  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:34:56.470   19:19:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable
00:34:56.470   19:19:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:34:56.470  [2024-12-13 19:19:28.215995] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:34:56.470  [2024-12-13 19:19:28.217281] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:34:56.470  [2024-12-13 19:19:28.217351] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:34:56.729  [2024-12-13 19:19:28.363825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:34:56.729  [2024-12-13 19:19:28.403499] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:34:56.729  [2024-12-13 19:19:28.403563] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:34:56.729  [2024-12-13 19:19:28.403572] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:34:56.729  [2024-12-13 19:19:28.403580] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:34:56.729  [2024-12-13 19:19:28.403586] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:34:56.729  [2024-12-13 19:19:28.407250] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:34:56.729  [2024-12-13 19:19:28.407288] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:34:56.729  [2024-12-13 19:19:28.529487] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:34:56.729  [2024-12-13 19:19:28.530018] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:34:56.729  [2024-12-13 19:19:28.530030] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:34:56.988   19:19:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:34:56.988   19:19:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0
00:34:56.988   19:19:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:34:56.988   19:19:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable
00:34:56.988   19:19:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:34:56.988   19:19:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:34:56.988   19:19:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio
00:34:56.988    19:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s
00:34:56.988   19:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]]
00:34:56.988   19:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aiofile bs=2048 count=5000
00:34:56.988  5000+0 records in
00:34:56.988  5000+0 records out
00:34:56.988  10240000 bytes (10 MB, 9.8 MiB) copied, 0.0329284 s, 311 MB/s
00:34:56.988   19:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aiofile AIO0 2048
00:34:56.988   19:19:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:56.988   19:19:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:34:56.988  AIO0
00:34:56.988   19:19:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
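setup_bdev_aio backs the interrupt-mode test with a plain file instead of a Malloc bdev: dd writes a 10,240,000-byte file and bdev_aio_create exposes it as AIO0 with a 2 KiB block size. The two calls from the trace, paths shortened to be repo-relative and again assuming rpc_cmd maps to scripts/rpc.py:

  dd if=/dev/zero of=test/nvmf/target/aiofile bs=2048 count=5000   # 5000 x 2048 B = 10,240,000 B backing file
  rpc.py bdev_aio_create test/nvmf/target/aiofile AIO0 2048        # register it as bdev "AIO0"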
00:34:56.988   19:19:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256
00:34:56.988   19:19:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:56.988   19:19:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:34:56.988  [2024-12-13 19:19:28.696215] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:34:56.988   19:19:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:56.988   19:19:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:34:56.988   19:19:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:56.988   19:19:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:34:56.988   19:19:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:56.988   19:19:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0
00:34:56.988   19:19:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:56.988   19:19:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:34:56.988   19:19:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:56.989   19:19:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:34:56.989   19:19:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:56.989   19:19:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:34:56.989  [2024-12-13 19:19:28.728330] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:34:56.989   19:19:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
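This is the same target bring-up as the earlier bdevio run, with two differences visible in the trace: nvmf_create_transport is passed -q 256, which sets the transport's max queue depth, and the namespace behind nqn.2016-06.io.spdk:cnode1 is the file-backed AIO0 bdev rather than Malloc0. Only the transport call changes, sketched again via rpc.py:

  rpc.py nvmf_create_transport -t tcp -o -u 8192 -q 256   # -q 256: max queue depth for the interrupt-mode run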
00:34:56.989   19:19:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1}
00:34:56.989   19:19:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 128697 0
00:34:56.989   19:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 128697 0 idle
00:34:56.989   19:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=128697
00:34:56.989   19:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:34:56.989   19:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:34:56.989   19:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:34:56.989   19:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:34:56.989   19:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:34:56.989   19:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:34:56.989   19:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:34:56.989   19:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:34:56.989   19:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:34:56.989    19:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 128697 -w 256
00:34:56.989    19:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:34:57.248   19:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 128697 root      20   0   64.2g  46592  33536 S   0.0   0.4   0:00.28 reactor_0'
00:34:57.248    19:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 128697 root 20 0 64.2g 46592 33536 S 0.0 0.4 0:00.28 reactor_0
00:34:57.248    19:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:34:57.248    19:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:34:57.248   19:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:34:57.248   19:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:34:57.248   19:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:34:57.248   19:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:34:57.248   19:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:34:57.248   19:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:34:57.248   19:19:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1}
00:34:57.248   19:19:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 128697 1
00:34:57.248   19:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 128697 1 idle
00:34:57.248   19:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=128697
00:34:57.248   19:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1
00:34:57.248   19:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:34:57.248   19:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:34:57.248   19:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:34:57.248   19:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:34:57.248   19:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:34:57.248   19:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:34:57.248   19:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:34:57.248   19:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:34:57.248    19:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1
00:34:57.248    19:19:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 128697 -w 256
00:34:57.509   19:19:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 128701 root      20   0   64.2g  46592  33536 S   0.0   0.4   0:00.00 reactor_1'
00:34:57.509    19:19:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 128701 root 20 0 64.2g 46592 33536 S 0.0 0.4 0:00.00 reactor_1
00:34:57.509    19:19:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:34:57.509    19:19:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:34:57.509   19:19:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:34:57.509   19:19:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:34:57.509   19:19:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:34:57.509   19:19:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:34:57.509   19:19:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:34:57.509   19:19:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
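The reactor_is_idle / reactor_is_busy helpers (interrupt/common.sh above) decide whether a reactor thread is doing work by sampling one batch iteration of top for the target pid, picking the reactor_<idx> thread line, and comparing its %CPU (column 9) against a threshold: at most 30% counts as idle, and for the busy check that follows the usual 65% threshold is lowered via BUSY_THRESHOLD=30. A minimal sketch of the probe; check_reactor_idle is a hypothetical name, the real helpers are the reactor_is_* functions traced above:

  check_reactor_idle() {
      local pid=$1 idx=$2 idle_threshold=${3:-30}
      local cpu
      # one threads-view snapshot of the process, widened so the command column is not cut off
      cpu=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx" | sed -e 's/^\s*//g' | awk '{print $9}')
      cpu=${cpu%.*}                    # keep the integer part, as common.sh@28 does above (6.2 -> 6)
      (( cpu <= idle_threshold ))      # success when the reactor thread is at or below the threshold
  }
  check_reactor_idle 128697 0 && echo "reactor_0 is idle"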
00:34:57.509   19:19:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
00:34:57.509   19:19:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=128757
00:34:57.509   19:19:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:34:57.509   19:19:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1}
00:34:57.509   19:19:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30
00:34:57.509   19:19:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 128697 0
00:34:57.509   19:19:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 128697 0 busy
00:34:57.509   19:19:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=128697
00:34:57.509   19:19:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:34:57.509   19:19:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy
00:34:57.509   19:19:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30
00:34:57.509   19:19:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:34:57.509   19:19:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]]
00:34:57.509   19:19:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:34:57.509   19:19:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:34:57.509   19:19:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:34:57.509    19:19:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 128697 -w 256
00:34:57.509    19:19:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:34:57.509   19:19:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 128697 root      20   0   64.2g  46592  33536 S   6.2   0.4   0:00.29 reactor_0'
00:34:57.509    19:19:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 128697 root 20 0 64.2g 46592 33536 S 6.2 0.4 0:00.29 reactor_0
00:34:57.509    19:19:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:34:57.509    19:19:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:34:57.509   19:19:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.2
00:34:57.509   19:19:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6
00:34:57.509   19:19:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]]
00:34:57.509   19:19:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold ))
00:34:57.509   19:19:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1
00:34:58.475   19:19:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- ))
00:34:58.475   19:19:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:34:58.475    19:19:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 128697 -w 256
00:34:58.475    19:19:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:34:58.734   19:19:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 128697 root      20   0   64.2g  47872  33920 R  99.9   0.4   0:01.78 reactor_0'
00:34:58.734    19:19:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:34:58.734    19:19:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 128697 root 20 0 64.2g 47872 33920 R 99.9 0.4 0:01.78 reactor_0
00:34:58.734    19:19:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:34:58.734   19:19:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9
00:34:58.734   19:19:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99
00:34:58.734   19:19:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]]
00:34:58.734   19:19:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold ))
00:34:58.734   19:19:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]]
00:34:58.734   19:19:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:34:58.734   19:19:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1}
00:34:58.734   19:19:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30
00:34:58.734   19:19:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 128697 1
00:34:58.734   19:19:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 128697 1 busy
00:34:58.734   19:19:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=128697
00:34:58.734   19:19:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1
00:34:58.734   19:19:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy
00:34:58.734   19:19:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30
00:34:58.734   19:19:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:34:58.734   19:19:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]]
00:34:58.734   19:19:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:34:58.734   19:19:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:34:58.734   19:19:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:34:58.734    19:19:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 128697 -w 256
00:34:58.734    19:19:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1
00:34:58.993   19:19:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 128701 root      20   0   64.2g  47872  33920 R  75.0   0.4   0:00.89 reactor_1'
00:34:58.993    19:19:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 128701 root 20 0 64.2g 47872 33920 R 75.0 0.4 0:00.89 reactor_1
00:34:58.993    19:19:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:34:58.993    19:19:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:34:58.993   19:19:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=75.0
00:34:58.993   19:19:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=75
00:34:58.993   19:19:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]]
00:34:58.993   19:19:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold ))
00:34:58.993   19:19:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]]
00:34:58.993   19:19:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:34:58.993   19:19:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 128757
00:35:08.973  Initializing NVMe Controllers
00:35:08.973  Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:35:08.973  Controller IO queue size 256, less than required.
00:35:08.973  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:35:08.973  Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:35:08.973  Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:35:08.973  Initialization complete. Launching workers.
00:35:08.973  ========================================================
00:35:08.973  Device Information                                                       :       IOPS      MiB/s   Avg Lat(us)   Min Lat(us)   Max Lat(us)
00:35:08.973  TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  2:    5846.30      22.84      43862.18       7773.14      78446.63
00:35:08.973  TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  3:    4612.30      18.02      55627.35       7727.69      82083.11
00:35:08.973  ========================================================
00:35:08.973  Total                                                                    :   10458.60      40.85      49050.69       7727.69      82083.11
00:35:08.973  
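The table above is standard spdk_nvme_perf output: per-core IOPS, throughput in MiB/s, and average/min/max completion latency in microseconds. The workload itself is the invocation started earlier in the trace, i.e. queue depth 256 with 4 KiB random mixed I/O for 10 seconds on core mask 0xC (cores 2 and 3), against the TCP listener created by the test. Re-running it stand-alone uses the same arguments:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
        -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'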
00:35:08.973   19:19:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1}
00:35:08.973   19:19:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 128697 0
00:35:08.973   19:19:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 128697 0 idle
00:35:08.973   19:19:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=128697
00:35:08.973   19:19:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:35:08.973   19:19:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:35:08.973   19:19:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:35:08.973   19:19:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:35:08.973   19:19:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:35:08.973   19:19:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:35:08.973   19:19:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:35:08.973   19:19:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:35:08.973   19:19:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:35:08.973    19:19:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 128697 -w 256
00:35:08.973    19:19:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:35:08.973   19:19:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 128697 root      20   0   64.2g  47872  33920 R   6.2   0.4   0:14.70 reactor_0'
00:35:08.973    19:19:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 128697 root 20 0 64.2g 47872 33920 R 6.2 0.4 0:14.70 reactor_0
00:35:08.973    19:19:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:35:08.973    19:19:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:35:08.973   19:19:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.2
00:35:08.973   19:19:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6
00:35:08.973   19:19:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:35:08.973   19:19:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:35:08.973   19:19:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:35:08.973   19:19:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:35:08.973   19:19:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1}
00:35:08.973   19:19:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 128697 1
00:35:08.973   19:19:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 128697 1 idle
00:35:08.973   19:19:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=128697
00:35:08.973   19:19:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1
00:35:08.973   19:19:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:35:08.973   19:19:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:35:08.973   19:19:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:35:08.973   19:19:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:35:08.973   19:19:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:35:08.973   19:19:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:35:08.973   19:19:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:35:08.973   19:19:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:35:08.973    19:19:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 128697 -w 256
00:35:08.973    19:19:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1
00:35:08.973   19:19:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 128701 root      20   0   64.2g  47872  33920 S   0.0   0.4   0:07.24 reactor_1'
00:35:08.973    19:19:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 128701 root 20 0 64.2g 47872 33920 S 0.0 0.4 0:07.24 reactor_1
00:35:08.973    19:19:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:35:08.973    19:19:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:35:08.973   19:19:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:35:08.973   19:19:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:35:08.973   19:19:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:35:08.973   19:19:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:35:08.973   19:19:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:35:08.973   19:19:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:35:08.973   19:19:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid=8f716199-a3ae-4f70-9a3a-0556e5b7497a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
00:35:08.973   19:19:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME
00:35:08.973   19:19:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0
00:35:08.973   19:19:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:35:08.973   19:19:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:35:08.973   19:19:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2
00:35:10.351   19:19:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:35:10.351    19:19:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:35:10.351    19:19:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:35:10.351   19:19:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:35:10.351   19:19:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:35:10.351   19:19:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0
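The connect-and-wait step can be reproduced by hand: attach the initiator to the subsystem, then poll lsblk until a namespace with the test serial appears. Commands, retry count and sleep interval mirror the trace (waitforserial loops up to 15 times with a 2-second sleep):

    nvme connect -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a \
        --hostid=8f716199-a3ae-4f70-9a3a-0556e5b7497a
    for _ in $(seq 1 15); do
        lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME && break
        sleep 2
    done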
00:35:10.351   19:19:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1}
00:35:10.351   19:19:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 128697 0
00:35:10.351   19:19:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 128697 0 idle
00:35:10.351   19:19:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=128697
00:35:10.351   19:19:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:35:10.351   19:19:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:35:10.351   19:19:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:35:10.351   19:19:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:35:10.351   19:19:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:35:10.351   19:19:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:35:10.351   19:19:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:35:10.352   19:19:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:35:10.352   19:19:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:35:10.352    19:19:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 128697 -w 256
00:35:10.352    19:19:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:35:10.352   19:19:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 128697 root      20   0   64.2g  49920  33920 S   0.0   0.4   0:14.76 reactor_0'
00:35:10.352    19:19:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 128697 root 20 0 64.2g 49920 33920 S 0.0 0.4 0:14.76 reactor_0
00:35:10.352    19:19:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:35:10.352    19:19:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:35:10.352   19:19:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:35:10.352   19:19:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:35:10.352   19:19:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:35:10.352   19:19:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:35:10.352   19:19:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:35:10.352   19:19:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:35:10.352   19:19:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1}
00:35:10.352   19:19:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 128697 1
00:35:10.352   19:19:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 128697 1 idle
00:35:10.352   19:19:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=128697
00:35:10.352   19:19:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1
00:35:10.352   19:19:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:35:10.352   19:19:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:35:10.352   19:19:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:35:10.352   19:19:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:35:10.352   19:19:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:35:10.352   19:19:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:35:10.352   19:19:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:35:10.352   19:19:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:35:10.352    19:19:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 128697 -w 256
00:35:10.352    19:19:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1
00:35:10.611   19:19:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 128701 root      20   0   64.2g  49920  33920 S   0.0   0.4   0:07.26 reactor_1'
00:35:10.611    19:19:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 128701 root 20 0 64.2g 49920 33920 S 0.0 0.4 0:07.26 reactor_1
00:35:10.611    19:19:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:35:10.611    19:19:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:35:10.611   19:19:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:35:10.611   19:19:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:35:10.611   19:19:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:35:10.611   19:19:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:35:10.611   19:19:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:35:10.611   19:19:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:35:10.611   19:19:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:35:10.611  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:35:10.611   19:19:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:35:10.611   19:19:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0
00:35:10.611   19:19:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:35:10.611   19:19:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:35:10.611   19:19:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:35:10.611   19:19:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:35:10.611   19:19:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0
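Tear-down mirrors the connect path: drop the controller by NQN and confirm the serial has disappeared from lsblk, which is exactly what waitforserial_disconnect verifies above:

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    if lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; then
        echo 'namespace with serial SPDKISFASTANDAWESOME is still visible' >&2
    fi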
00:35:10.611   19:19:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT
00:35:10.611   19:19:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini
00:35:10.611   19:19:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup
00:35:10.611   19:19:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync
00:35:10.870   19:19:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:35:10.870   19:19:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e
00:35:10.870   19:19:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20}
00:35:10.870   19:19:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:35:10.870  rmmod nvme_tcp
00:35:10.870  rmmod nvme_fabrics
00:35:10.870  rmmod nvme_keyring
00:35:10.870   19:19:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:35:10.870   19:19:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e
00:35:10.870   19:19:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0
00:35:10.870   19:19:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 128697 ']'
00:35:10.870   19:19:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 128697
00:35:10.870   19:19:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 128697 ']'
00:35:10.870   19:19:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 128697
00:35:10.870    19:19:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname
00:35:10.870   19:19:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:35:10.870    19:19:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 128697
00:35:10.870   19:19:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:35:10.870   19:19:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:35:10.870  killing process with pid 128697
00:35:10.870   19:19:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 128697'
00:35:10.870   19:19:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 128697
00:35:10.870   19:19:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 128697
00:35:11.129   19:19:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:35:11.129   19:19:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:35:11.129   19:19:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:35:11.129   19:19:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr
00:35:11.129   19:19:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save
00:35:11.129   19:19:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:35:11.129   19:19:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore
00:35:11.129   19:19:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:35:11.129   19:19:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:35:11.129   19:19:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:35:11.129   19:19:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:35:11.129   19:19:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:35:11.129   19:19:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:35:11.129   19:19:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:35:11.129   19:19:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:35:11.388   19:19:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:35:11.388   19:19:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:35:11.388   19:19:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:35:11.388   19:19:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:35:11.388   19:19:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:35:11.388   19:19:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:35:11.388   19:19:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:35:11.388   19:19:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@246 -- # remove_spdk_ns
00:35:11.388   19:19:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:35:11.388   19:19:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:35:11.388    19:19:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:35:11.388   19:19:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@300 -- # return 0
00:35:11.388  ************************************
00:35:11.388  END TEST nvmf_interrupt
00:35:11.388  ************************************
00:35:11.388  
00:35:11.388  real	0m15.649s
00:35:11.388  user	0m28.957s
00:35:11.388  sys	0m7.708s
00:35:11.388   19:19:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable
00:35:11.388   19:19:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:35:11.388  
00:35:11.388  real	27m13.363s
00:35:11.388  user	80m10.590s
00:35:11.388  sys	6m6.085s
00:35:11.388   19:19:43 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:35:11.388  ************************************
00:35:11.388  END TEST nvmf_tcp
00:35:11.388   19:19:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:35:11.388  ************************************
00:35:11.647   19:19:43  -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]]
00:35:11.647   19:19:43  -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:35:11.647   19:19:43  -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:35:11.647   19:19:43  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:35:11.647   19:19:43  -- common/autotest_common.sh@10 -- # set +x
00:35:11.647  ************************************
00:35:11.647  START TEST spdkcli_nvmf_tcp
00:35:11.647  ************************************
00:35:11.647   19:19:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:35:11.647  * Looking for test storage...
00:35:11.647  * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli
00:35:11.647    19:19:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:35:11.648     19:19:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version
00:35:11.648     19:19:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:35:11.648    19:19:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:35:11.648    19:19:43 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:35:11.648    19:19:43 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:35:11.648    19:19:43 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:35:11.648    19:19:43 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-:
00:35:11.648    19:19:43 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1
00:35:11.648    19:19:43 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-:
00:35:11.648    19:19:43 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2
00:35:11.648    19:19:43 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<'
00:35:11.648    19:19:43 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2
00:35:11.648    19:19:43 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1
00:35:11.648    19:19:43 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:35:11.648    19:19:43 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in
00:35:11.648    19:19:43 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1
00:35:11.648    19:19:43 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 ))
00:35:11.648    19:19:43 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:35:11.648     19:19:43 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1
00:35:11.648     19:19:43 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1
00:35:11.648     19:19:43 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:35:11.648     19:19:43 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1
00:35:11.648    19:19:43 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1
00:35:11.648     19:19:43 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2
00:35:11.648     19:19:43 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2
00:35:11.648     19:19:43 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:35:11.648     19:19:43 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2
00:35:11.648    19:19:43 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2
00:35:11.648    19:19:43 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:35:11.648    19:19:43 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:35:11.648    19:19:43 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0
00:35:11.648    19:19:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:35:11.648    19:19:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:35:11.648  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:35:11.648  		--rc genhtml_branch_coverage=1
00:35:11.648  		--rc genhtml_function_coverage=1
00:35:11.648  		--rc genhtml_legend=1
00:35:11.648  		--rc geninfo_all_blocks=1
00:35:11.648  		--rc geninfo_unexecuted_blocks=1
00:35:11.648  		
00:35:11.648  		'
00:35:11.648    19:19:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:35:11.648  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:35:11.648  		--rc genhtml_branch_coverage=1
00:35:11.648  		--rc genhtml_function_coverage=1
00:35:11.648  		--rc genhtml_legend=1
00:35:11.648  		--rc geninfo_all_blocks=1
00:35:11.648  		--rc geninfo_unexecuted_blocks=1
00:35:11.648  		
00:35:11.648  		'
00:35:11.648    19:19:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:35:11.648  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:35:11.648  		--rc genhtml_branch_coverage=1
00:35:11.648  		--rc genhtml_function_coverage=1
00:35:11.648  		--rc genhtml_legend=1
00:35:11.648  		--rc geninfo_all_blocks=1
00:35:11.648  		--rc geninfo_unexecuted_blocks=1
00:35:11.648  		
00:35:11.648  		'
00:35:11.648    19:19:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:35:11.648  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:35:11.648  		--rc genhtml_branch_coverage=1
00:35:11.648  		--rc genhtml_function_coverage=1
00:35:11.648  		--rc genhtml_legend=1
00:35:11.648  		--rc geninfo_all_blocks=1
00:35:11.648  		--rc geninfo_unexecuted_blocks=1
00:35:11.648  		
00:35:11.648  		'
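The block above only decides whether the installed lcov is older than 2.x so that the matching coverage flags can be exported. A rough equivalent of that gate using sort -V in place of the field-by-field cmp_versions helper (an illustrative substitute, not the scripts/common.sh implementation, and with a trimmed-down LCOV_OPTS):

    ver_lt() {  # true if $1 sorts strictly before $2 as a version string
        [ "$1" != "$2" ] && [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
    }
    lcov_ver=$(lcov --version | awk '{print $NF}')
    if ver_lt "$lcov_ver" 2; then
        export LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi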
00:35:11.648   19:19:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh
00:35:11.648    19:19:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py
00:35:11.648    19:19:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py
00:35:11.648   19:19:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:35:11.648     19:19:43 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s
00:35:11.648    19:19:43 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:35:11.648    19:19:43 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:35:11.648    19:19:43 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:35:11.648    19:19:43 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:35:11.648    19:19:43 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:35:11.648    19:19:43 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:35:11.648    19:19:43 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:35:11.648    19:19:43 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:35:11.648    19:19:43 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:35:11.648     19:19:43 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:35:11.648    19:19:43 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:35:11.648    19:19:43 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:35:11.648    19:19:43 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:35:11.648    19:19:43 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:35:11.648    19:19:43 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:35:11.648    19:19:43 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:35:11.648    19:19:43 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:35:11.648     19:19:43 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob
00:35:11.648     19:19:43 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:35:11.648     19:19:43 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:35:11.648     19:19:43 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:35:11.648      19:19:43 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:11.648      19:19:43 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:11.648      19:19:43 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:11.648      19:19:43 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH
00:35:11.648      19:19:43 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:11.648    19:19:43 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0
00:35:11.648    19:19:43 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:35:11.648    19:19:43 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:35:11.648    19:19:43 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:35:11.648    19:19:43 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:35:11.648    19:19:43 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:35:11.648    19:19:43 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:35:11.648  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:35:11.648    19:19:43 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:35:11.648    19:19:43 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:35:11.648    19:19:43 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0
00:35:11.648   19:19:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test
00:35:11.648   19:19:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf
00:35:11.648   19:19:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT
00:35:11.648   19:19:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt
00:35:11.648   19:19:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable
00:35:11.648   19:19:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:35:11.648   19:19:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt
00:35:11.648   19:19:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=129088
00:35:11.648   19:19:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 129088
00:35:11.648   19:19:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 129088 ']'
00:35:11.648   19:19:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0
00:35:11.648   19:19:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:35:11.648   19:19:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100
00:35:11.648  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:35:11.648   19:19:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:35:11.648   19:19:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable
00:35:11.648   19:19:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:35:11.907  [2024-12-13 19:19:43.534246] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:35:11.907  [2024-12-13 19:19:43.534355] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129088 ]
00:35:11.907  [2024-12-13 19:19:43.687615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:35:12.167  [2024-12-13 19:19:43.736714] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:35:12.167  [2024-12-13 19:19:43.736741] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:35:12.167   19:19:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:35:12.167   19:19:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0
00:35:12.167   19:19:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt
00:35:12.167   19:19:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable
00:35:12.167   19:19:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
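As in the interrupt test, the spdkcli run starts its own nvmf_tgt on a two-core mask and blocks until the RPC socket answers before driving it. A hedged sketch of that startup (the rpc.py polling loop is an assumption standing in for waitforlisten):

    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 &
    tgt_pid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    echo "nvmf_tgt (pid ${tgt_pid}) is up and listening on /var/tmp/spdk.sock"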
00:35:12.167   19:19:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1
00:35:12.167   19:19:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]]
00:35:12.167   19:19:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config
00:35:12.167   19:19:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable
00:35:12.167   19:19:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:35:12.167   19:19:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True
00:35:12.167  '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True
00:35:12.167  '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True
00:35:12.167  '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True
00:35:12.167  '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True
00:35:12.167  '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True
00:35:12.167  '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True
00:35:12.167  '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW  max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True
00:35:12.167  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True
00:35:12.167  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True
00:35:12.167  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create  tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True
00:35:12.167  '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:35:12.167  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True
00:35:12.167  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create  tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True
00:35:12.167  '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:35:12.167  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True
00:35:12.167  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create  tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True
00:35:12.167  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create  tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True
00:35:12.167  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create  nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True
00:35:12.167  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create  nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:35:12.167  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\''
00:35:12.167  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True
00:35:12.167  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True
00:35:12.167  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True
00:35:12.167  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:35:12.167  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True
00:35:12.167  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True
00:35:12.167  '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\''
00:35:12.167  '
00:35:15.453  [2024-12-13 19:19:46.653396] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:35:16.390  [2024-12-13 19:19:47.914896] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 ***
00:35:18.923  [2024-12-13 19:19:50.325455] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 ***
00:35:20.827  [2024-12-13 19:19:52.423785] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 ***
00:35:22.732  Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True]
00:35:22.732  Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True]
00:35:22.732  Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True]
00:35:22.732  Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True]
00:35:22.732  Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True]
00:35:22.732  Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True]
00:35:22.732  Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True]
00:35:22.732  Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW  max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True]
00:35:22.732  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True]
00:35:22.732  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True]
00:35:22.732  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create  tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True]
00:35:22.732  Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True]
00:35:22.732  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True]
00:35:22.732  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create  tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True]
00:35:22.732  Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True]
00:35:22.732  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True]
00:35:22.732  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create  tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True]
00:35:22.732  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create  tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True]
00:35:22.732  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create  nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True]
00:35:22.732  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create  nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True]
00:35:22.732  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False]
00:35:22.732  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True]
00:35:22.732  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True]
00:35:22.732  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True]
00:35:22.732  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True]
00:35:22.732  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True]
00:35:22.732  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True]
00:35:22.732  Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False]
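The batch above builds the entire NVMe-oF configuration through spdkcli_job.py: six malloc bdevs, a TCP transport, and three subsystems with namespaces, listeners and host allow-lists. The same configuration can also be applied one command at a time with scripts/spdkcli.py against the running target; the one-shot form below is an assumption extrapolated from the 'spdkcli.py ll /nvmf' call later in the trace, and shell quoting may be needed for some commands:

    /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py /bdevs/malloc create 32 512 Malloc1
    /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192
    /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True
    /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf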
00:35:22.732   19:19:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config
00:35:22.732   19:19:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable
00:35:22.732   19:19:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:35:22.732   19:19:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match
00:35:22.732   19:19:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable
00:35:22.732   19:19:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:35:22.732   19:19:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match
00:35:22.732   19:19:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf
00:35:22.991   19:19:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match
00:35:22.991   19:19:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test
00:35:22.991   19:19:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match
00:35:22.991   19:19:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable
00:35:22.991   19:19:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:35:22.991   19:19:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config
00:35:22.991   19:19:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable
00:35:22.991   19:19:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:35:22.991   19:19:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\''
00:35:22.991  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\''
00:35:22.991  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\''
00:35:22.991  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\''
00:35:22.991  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\''
00:35:22.991  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\''
00:35:22.991  '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\''
00:35:22.991  '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\''
00:35:22.991  '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\''
00:35:22.991  '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\''
00:35:22.991  '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\''
00:35:22.991  '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\''
00:35:22.991  '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\''
00:35:22.991  '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\''
00:35:22.991  '
00:35:29.557  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False]
00:35:29.557  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False]
00:35:29.557  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False]
00:35:29.557  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False]
00:35:29.557  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False]
00:35:29.557  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False]
00:35:29.557  Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False]
00:35:29.557  Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False]
00:35:29.557  Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False]
00:35:29.557  Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False]
00:35:29.557  Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False]
00:35:29.557  Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False]
00:35:29.557  Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False]
00:35:29.557  Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False]
00:35:29.557   19:20:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config
00:35:29.557   19:20:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable
00:35:29.557   19:20:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:35:29.557   19:20:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 129088
00:35:29.557   19:20:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 129088 ']'
00:35:29.557   19:20:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 129088
00:35:29.557    19:20:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname
00:35:29.557   19:20:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:35:29.557    19:20:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 129088
00:35:29.557   19:20:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:35:29.557   19:20:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:35:29.557  killing process with pid 129088
00:35:29.557   19:20:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 129088'
00:35:29.557   19:20:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 129088
00:35:29.557   19:20:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 129088
00:35:29.557   19:20:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup
00:35:29.557   19:20:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']'
00:35:29.557   19:20:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 129088 ']'
00:35:29.557   19:20:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 129088
00:35:29.557   19:20:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 129088 ']'
00:35:29.557   19:20:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 129088
00:35:29.557  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (129088) - No such process
00:35:29.557  Process with pid 129088 is not found
00:35:29.557   19:20:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 129088 is not found'
00:35:29.557   19:20:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']'
00:35:29.557   19:20:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']'
00:35:29.557   19:20:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio
00:35:29.557  
00:35:29.557  real	0m17.438s
00:35:29.557  user	0m37.506s
00:35:29.557  sys	0m0.910s
00:35:29.557   19:20:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:35:29.557  ************************************
00:35:29.557  END TEST spdkcli_nvmf_tcp
00:35:29.557  ************************************
00:35:29.557   19:20:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:35:29.557   19:20:00  -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp
00:35:29.557   19:20:00  -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:35:29.557   19:20:00  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:35:29.557   19:20:00  -- common/autotest_common.sh@10 -- # set +x
00:35:29.557  ************************************
00:35:29.557  START TEST nvmf_identify_passthru
00:35:29.557  ************************************
00:35:29.557   19:20:00 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp
00:35:29.557  * Looking for test storage...
00:35:29.557  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:35:29.557    19:20:00 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:35:29.557     19:20:00 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version
00:35:29.557     19:20:00 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:35:29.557    19:20:00 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:35:29.557    19:20:00 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:35:29.557    19:20:00 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l
00:35:29.557    19:20:00 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l
00:35:29.557    19:20:00 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-:
00:35:29.557    19:20:00 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1
00:35:29.557    19:20:00 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-:
00:35:29.557    19:20:00 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2
00:35:29.557    19:20:00 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<'
00:35:29.557    19:20:00 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2
00:35:29.557    19:20:00 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1
00:35:29.557    19:20:00 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:35:29.557    19:20:00 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in
00:35:29.557    19:20:00 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1
00:35:29.557    19:20:00 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 ))
00:35:29.557    19:20:00 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:35:29.557     19:20:00 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1
00:35:29.557     19:20:00 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1
00:35:29.557     19:20:00 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:35:29.557     19:20:00 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1
00:35:29.557    19:20:00 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1
00:35:29.557     19:20:00 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2
00:35:29.557     19:20:00 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2
00:35:29.557     19:20:00 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:35:29.558     19:20:00 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2
00:35:29.558    19:20:00 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2
00:35:29.558    19:20:00 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:35:29.558    19:20:00 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:35:29.558    19:20:00 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0
00:35:29.558    19:20:00 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:35:29.558    19:20:00 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:35:29.558  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:35:29.558  		--rc genhtml_branch_coverage=1
00:35:29.558  		--rc genhtml_function_coverage=1
00:35:29.558  		--rc genhtml_legend=1
00:35:29.558  		--rc geninfo_all_blocks=1
00:35:29.558  		--rc geninfo_unexecuted_blocks=1
00:35:29.558  		
00:35:29.558  		'
00:35:29.558    19:20:00 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:35:29.558  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:35:29.558  		--rc genhtml_branch_coverage=1
00:35:29.558  		--rc genhtml_function_coverage=1
00:35:29.558  		--rc genhtml_legend=1
00:35:29.558  		--rc geninfo_all_blocks=1
00:35:29.558  		--rc geninfo_unexecuted_blocks=1
00:35:29.558  		
00:35:29.558  		'
00:35:29.558    19:20:00 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:35:29.558  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:35:29.558  		--rc genhtml_branch_coverage=1
00:35:29.558  		--rc genhtml_function_coverage=1
00:35:29.558  		--rc genhtml_legend=1
00:35:29.558  		--rc geninfo_all_blocks=1
00:35:29.558  		--rc geninfo_unexecuted_blocks=1
00:35:29.558  		
00:35:29.558  		'
00:35:29.558    19:20:00 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:35:29.558  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:35:29.558  		--rc genhtml_branch_coverage=1
00:35:29.558  		--rc genhtml_function_coverage=1
00:35:29.558  		--rc genhtml_legend=1
00:35:29.558  		--rc geninfo_all_blocks=1
00:35:29.558  		--rc geninfo_unexecuted_blocks=1
00:35:29.558  		
00:35:29.558  		'
00:35:29.558   19:20:00 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:35:29.558     19:20:00 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s
00:35:29.558    19:20:00 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:35:29.558    19:20:00 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:35:29.558    19:20:00 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:35:29.558    19:20:00 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:35:29.558    19:20:00 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:35:29.558    19:20:00 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:35:29.558    19:20:00 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:35:29.558    19:20:00 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:35:29.558    19:20:00 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:35:29.558     19:20:00 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:35:29.558    19:20:00 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:35:29.558    19:20:00 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:35:29.558    19:20:00 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:35:29.558    19:20:00 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:35:29.558    19:20:00 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:35:29.558    19:20:00 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:35:29.558    19:20:00 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:35:29.558     19:20:00 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob
00:35:29.558     19:20:00 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:35:29.558     19:20:00 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:35:29.558     19:20:00 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:35:29.558      19:20:00 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:29.558      19:20:00 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:29.558      19:20:00 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:29.558      19:20:00 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH
00:35:29.558      19:20:00 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:29.558    19:20:00 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0
00:35:29.558    19:20:00 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:35:29.558    19:20:00 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:35:29.558    19:20:00 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:35:29.558    19:20:00 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:35:29.558    19:20:00 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:35:29.558    19:20:00 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:35:29.558  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:35:29.558    19:20:00 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:35:29.558    19:20:00 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:35:29.558    19:20:00 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0
00:35:29.558   19:20:00 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:35:29.558    19:20:00 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob
00:35:29.558    19:20:00 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:35:29.558    19:20:00 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:35:29.558    19:20:00 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:35:29.558     19:20:00 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:29.558     19:20:00 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:29.558     19:20:00 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:29.558     19:20:00 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH
00:35:29.558     19:20:00 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:29.558   19:20:00 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit
00:35:29.558   19:20:00 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:35:29.558   19:20:00 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:35:29.558   19:20:00 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs
00:35:29.558   19:20:00 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no
00:35:29.558   19:20:00 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns
00:35:29.558   19:20:00 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:35:29.558   19:20:00 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null'
00:35:29.558    19:20:00 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:35:29.558   19:20:00 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:35:29.558   19:20:00 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:35:29.558   19:20:00 nvmf_identify_passthru -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:35:29.558   19:20:00 nvmf_identify_passthru -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:35:29.558   19:20:00 nvmf_identify_passthru -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:35:29.558   19:20:00 nvmf_identify_passthru -- nvmf/common.sh@460 -- # nvmf_veth_init
00:35:29.558   19:20:00 nvmf_identify_passthru -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:35:29.558   19:20:00 nvmf_identify_passthru -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:35:29.558   19:20:00 nvmf_identify_passthru -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:35:29.558   19:20:00 nvmf_identify_passthru -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:35:29.558   19:20:00 nvmf_identify_passthru -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:35:29.558   19:20:00 nvmf_identify_passthru -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:35:29.558   19:20:00 nvmf_identify_passthru -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:35:29.558   19:20:00 nvmf_identify_passthru -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:35:29.558   19:20:00 nvmf_identify_passthru -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:35:29.558   19:20:00 nvmf_identify_passthru -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:35:29.559   19:20:00 nvmf_identify_passthru -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:35:29.559   19:20:00 nvmf_identify_passthru -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:35:29.559   19:20:00 nvmf_identify_passthru -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:35:29.559   19:20:00 nvmf_identify_passthru -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:35:29.559   19:20:00 nvmf_identify_passthru -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:35:29.559   19:20:00 nvmf_identify_passthru -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:35:29.559   19:20:00 nvmf_identify_passthru -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:35:29.559  Cannot find device "nvmf_init_br"
00:35:29.559   19:20:00 nvmf_identify_passthru -- nvmf/common.sh@162 -- # true
00:35:29.559   19:20:00 nvmf_identify_passthru -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:35:29.559  Cannot find device "nvmf_init_br2"
00:35:29.559   19:20:00 nvmf_identify_passthru -- nvmf/common.sh@163 -- # true
00:35:29.559   19:20:00 nvmf_identify_passthru -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:35:29.559  Cannot find device "nvmf_tgt_br"
00:35:29.559   19:20:00 nvmf_identify_passthru -- nvmf/common.sh@164 -- # true
00:35:29.559   19:20:00 nvmf_identify_passthru -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:35:29.559  Cannot find device "nvmf_tgt_br2"
00:35:29.559   19:20:00 nvmf_identify_passthru -- nvmf/common.sh@165 -- # true
00:35:29.559   19:20:00 nvmf_identify_passthru -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:35:29.559  Cannot find device "nvmf_init_br"
00:35:29.559   19:20:00 nvmf_identify_passthru -- nvmf/common.sh@166 -- # true
00:35:29.559   19:20:00 nvmf_identify_passthru -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:35:29.559  Cannot find device "nvmf_init_br2"
00:35:29.559   19:20:01 nvmf_identify_passthru -- nvmf/common.sh@167 -- # true
00:35:29.559   19:20:01 nvmf_identify_passthru -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:35:29.559  Cannot find device "nvmf_tgt_br"
00:35:29.559   19:20:01 nvmf_identify_passthru -- nvmf/common.sh@168 -- # true
00:35:29.559   19:20:01 nvmf_identify_passthru -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:35:29.559  Cannot find device "nvmf_tgt_br2"
00:35:29.559   19:20:01 nvmf_identify_passthru -- nvmf/common.sh@169 -- # true
00:35:29.559   19:20:01 nvmf_identify_passthru -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:35:29.559  Cannot find device "nvmf_br"
00:35:29.559   19:20:01 nvmf_identify_passthru -- nvmf/common.sh@170 -- # true
00:35:29.559   19:20:01 nvmf_identify_passthru -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:35:29.559  Cannot find device "nvmf_init_if"
00:35:29.559   19:20:01 nvmf_identify_passthru -- nvmf/common.sh@171 -- # true
00:35:29.559   19:20:01 nvmf_identify_passthru -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:35:29.559  Cannot find device "nvmf_init_if2"
00:35:29.559   19:20:01 nvmf_identify_passthru -- nvmf/common.sh@172 -- # true
00:35:29.559   19:20:01 nvmf_identify_passthru -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:35:29.559  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:35:29.559   19:20:01 nvmf_identify_passthru -- nvmf/common.sh@173 -- # true
00:35:29.559   19:20:01 nvmf_identify_passthru -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:35:29.559  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:35:29.559   19:20:01 nvmf_identify_passthru -- nvmf/common.sh@174 -- # true
00:35:29.559   19:20:01 nvmf_identify_passthru -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:35:29.559   19:20:01 nvmf_identify_passthru -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:35:29.559   19:20:01 nvmf_identify_passthru -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:35:29.559   19:20:01 nvmf_identify_passthru -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:35:29.559   19:20:01 nvmf_identify_passthru -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:35:29.559   19:20:01 nvmf_identify_passthru -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:35:29.559   19:20:01 nvmf_identify_passthru -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:35:29.559   19:20:01 nvmf_identify_passthru -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:35:29.559   19:20:01 nvmf_identify_passthru -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:35:29.559   19:20:01 nvmf_identify_passthru -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:35:29.559   19:20:01 nvmf_identify_passthru -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:35:29.559   19:20:01 nvmf_identify_passthru -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:35:29.559   19:20:01 nvmf_identify_passthru -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:35:29.559   19:20:01 nvmf_identify_passthru -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:35:29.559   19:20:01 nvmf_identify_passthru -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:35:29.559   19:20:01 nvmf_identify_passthru -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:35:29.559   19:20:01 nvmf_identify_passthru -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:35:29.559   19:20:01 nvmf_identify_passthru -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:35:29.559   19:20:01 nvmf_identify_passthru -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:35:29.559   19:20:01 nvmf_identify_passthru -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:35:29.559   19:20:01 nvmf_identify_passthru -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:35:29.559   19:20:01 nvmf_identify_passthru -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:35:29.559   19:20:01 nvmf_identify_passthru -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:35:29.559   19:20:01 nvmf_identify_passthru -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:35:29.559   19:20:01 nvmf_identify_passthru -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:35:29.559   19:20:01 nvmf_identify_passthru -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:35:29.559   19:20:01 nvmf_identify_passthru -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:35:29.559   19:20:01 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:35:29.559   19:20:01 nvmf_identify_passthru -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:35:29.559   19:20:01 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:35:29.559   19:20:01 nvmf_identify_passthru -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:35:29.559   19:20:01 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:35:29.559   19:20:01 nvmf_identify_passthru -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:35:29.559  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:35:29.559  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms
00:35:29.559  
00:35:29.559  --- 10.0.0.3 ping statistics ---
00:35:29.559  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:35:29.559  rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms
00:35:29.559   19:20:01 nvmf_identify_passthru -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:35:29.559  PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:35:29.559  64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms
00:35:29.559  
00:35:29.559  --- 10.0.0.4 ping statistics ---
00:35:29.559  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:35:29.559  rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms
00:35:29.559   19:20:01 nvmf_identify_passthru -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:35:29.559  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:35:29.559  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms
00:35:29.559  
00:35:29.559  --- 10.0.0.1 ping statistics ---
00:35:29.559  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:35:29.559  rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms
00:35:29.559   19:20:01 nvmf_identify_passthru -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:35:29.559  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:35:29.559  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms
00:35:29.559  
00:35:29.559  --- 10.0.0.2 ping statistics ---
00:35:29.559  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:35:29.559  rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms
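The ip commands traced above assemble the virtual topology that these four pings just verified. A minimal sketch of that layout, using only the namespace, interface names, and addresses printed in this run (the second interface pair and the link-up steps are elided; the authoritative sequence is the nvmf_veth_init trace above):

# root namespace holds the initiator veths; their peers sit on bridge nvmf_br
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side, 10.0.0.1/24
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target side,    10.0.0.3/24
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # target end lives in the netns
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br                       # plus iptables ACCEPT rules for TCP port 4420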
00:35:29.559   19:20:01 nvmf_identify_passthru -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:35:29.559   19:20:01 nvmf_identify_passthru -- nvmf/common.sh@461 -- # return 0
00:35:29.559   19:20:01 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:35:29.559   19:20:01 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:35:29.559   19:20:01 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:35:29.559   19:20:01 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:35:29.559   19:20:01 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:35:29.559   19:20:01 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:35:29.559   19:20:01 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:35:29.559   19:20:01 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify
00:35:29.559   19:20:01 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable
00:35:29.559   19:20:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:35:29.559    19:20:01 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf
00:35:29.559    19:20:01 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=()
00:35:29.559    19:20:01 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs
00:35:29.559    19:20:01 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs))
00:35:29.559     19:20:01 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs
00:35:29.559     19:20:01 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=()
00:35:29.559     19:20:01 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs
00:35:29.559     19:20:01 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:35:29.559      19:20:01 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:35:29.559      19:20:01 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:35:29.818     19:20:01 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 2 == 0 ))
00:35:29.818     19:20:01 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0
00:35:29.818    19:20:01 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0
00:35:29.818   19:20:01 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:00:10.0
00:35:29.818   19:20:01 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:10.0 ']'
00:35:29.818    19:20:01 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0
00:35:29.818    19:20:01 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:'
00:35:29.818    19:20:01 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}'
00:35:29.818   19:20:01 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340
00:35:29.818    19:20:01 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0
00:35:29.818    19:20:01 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:'
00:35:29.818    19:20:01 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}'
00:35:30.077   19:20:01 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU
00:35:30.077   19:20:01 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify
00:35:30.077   19:20:01 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable
00:35:30.077   19:20:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:35:30.077   19:20:01 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt
00:35:30.077   19:20:01 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable
00:35:30.077   19:20:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:35:30.077   19:20:01 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=129584
00:35:30.077   19:20:01 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:35:30.077   19:20:01 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
00:35:30.077   19:20:01 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 129584
00:35:30.077   19:20:01 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 129584 ']'
00:35:30.077   19:20:01 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:35:30.077   19:20:01 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100
00:35:30.077   19:20:01 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:35:30.077  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:35:30.077   19:20:01 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable
00:35:30.077   19:20:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:35:30.077  [2024-12-13 19:20:01.889474] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:35:30.077  [2024-12-13 19:20:01.889581] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:35:30.335  [2024-12-13 19:20:02.047541] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:35:30.335  [2024-12-13 19:20:02.096065] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:35:30.335  [2024-12-13 19:20:02.096165] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:35:30.335  [2024-12-13 19:20:02.096181] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:35:30.335  [2024-12-13 19:20:02.096192] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:35:30.335  [2024-12-13 19:20:02.096202] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:35:30.335  [2024-12-13 19:20:02.097888] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:35:30.335  [2024-12-13 19:20:02.098055] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:35:30.335  [2024-12-13 19:20:02.098176] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:35:30.335  [2024-12-13 19:20:02.098550] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
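The reactor start-up messages above come from the nvmf_tgt process that identify_passthru.sh launched inside the test namespace with --wait-for-rpc, so the target idles until the framework is initialized over its RPC socket. A rough sketch of that launch sequence; rpc_cmd in this trace is the harness wrapper, and driving scripts/rpc.py directly against the default socket /var/tmp/spdk.sock is an assumption here:

ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
# once the RPC socket is up, enable the passthru identify handler and finish init
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr
/home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init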
00:35:31.269   19:20:02 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:35:31.269   19:20:02 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0
00:35:31.269   19:20:02 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr
00:35:31.269   19:20:02 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:31.269   19:20:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:35:31.269   19:20:02 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:31.269   19:20:02 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init
00:35:31.269   19:20:02 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:31.269   19:20:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:35:31.269  [2024-12-13 19:20:02.984546] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled
00:35:31.269   19:20:02 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:31.269   19:20:02 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:35:31.269   19:20:02 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:31.269   19:20:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:35:31.269  [2024-12-13 19:20:02.999343] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:35:31.269   19:20:03 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:31.269   19:20:03 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt
00:35:31.269   19:20:03 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable
00:35:31.269   19:20:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:35:31.269   19:20:03 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
00:35:31.269   19:20:03 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:31.269   19:20:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:35:31.528  Nvme0n1
00:35:31.528   19:20:03 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:31.528   19:20:03 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
00:35:31.528   19:20:03 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:31.528   19:20:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:35:31.528   19:20:03 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:31.528   19:20:03 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
00:35:31.528   19:20:03 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:31.528   19:20:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:35:31.528   19:20:03 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:31.528   19:20:03 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:35:31.528   19:20:03 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:31.528   19:20:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:35:31.528  [2024-12-13 19:20:03.156379] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:35:31.528   19:20:03 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:31.528   19:20:03 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems
00:35:31.528   19:20:03 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:31.528   19:20:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:35:31.528  [
00:35:31.528    {
00:35:31.528      "allow_any_host": true,
00:35:31.528      "hosts": [],
00:35:31.528      "listen_addresses": [],
00:35:31.528      "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:35:31.528      "subtype": "Discovery"
00:35:31.528    },
00:35:31.528    {
00:35:31.528      "allow_any_host": true,
00:35:31.528      "hosts": [],
00:35:31.528      "listen_addresses": [
00:35:31.528        {
00:35:31.528          "adrfam": "IPv4",
00:35:31.528          "traddr": "10.0.0.3",
00:35:31.528          "trsvcid": "4420",
00:35:31.528          "trtype": "TCP"
00:35:31.528        }
00:35:31.528      ],
00:35:31.528      "max_cntlid": 65519,
00:35:31.528      "max_namespaces": 1,
00:35:31.528      "min_cntlid": 1,
00:35:31.528      "model_number": "SPDK bdev Controller",
00:35:31.528      "namespaces": [
00:35:31.528        {
00:35:31.528          "bdev_name": "Nvme0n1",
00:35:31.528          "name": "Nvme0n1",
00:35:31.528          "nguid": "5EBBDC32038F4E41A763022949A42514",
00:35:31.528          "nsid": 1,
00:35:31.528          "uuid": "5ebbdc32-038f-4e41-a763-022949a42514"
00:35:31.528        }
00:35:31.528      ],
00:35:31.528      "nqn": "nqn.2016-06.io.spdk:cnode1",
00:35:31.528      "serial_number": "SPDK00000000000001",
00:35:31.528      "subtype": "NVMe"
00:35:31.528    }
00:35:31.528  ]
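The subsystem listing above reflects the configuration built by the preceding RPCs. Condensed into one place, and assuming the same calls were issued through scripts/rpc.py against the target's socket (rpc_cmd is only the harness wrapper around it), the setup is roughly:

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0              # exposes bdev Nvme0n1
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
rpc.py nvmf_get_subsystems                                                       # returns the JSON shown above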
00:35:31.528   19:20:03 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:31.528    19:20:03 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r '        trtype:tcp         adrfam:IPv4         traddr:10.0.0.3         trsvcid:4420         subnqn:nqn.2016-06.io.spdk:cnode1'
00:35:31.528    19:20:03 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:'
00:35:31.528    19:20:03 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}'
00:35:31.786   19:20:03 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340
00:35:31.786    19:20:03 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:'
00:35:31.786    19:20:03 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r '        trtype:tcp         adrfam:IPv4         traddr:10.0.0.3         trsvcid:4420         subnqn:nqn.2016-06.io.spdk:cnode1'
00:35:31.786    19:20:03 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}'
00:35:32.045   19:20:03 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU
00:35:32.045   19:20:03 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']'
00:35:32.045   19:20:03 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']'
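The two string comparisons above are the substance of the test: the serial number (12340) and model number (QEMU) read earlier over plain PCIe must match what the NVMe/TCP passthru subsystem reports. Restated compactly with the binaries, transport strings, and fields already shown in this run (a sketch, not the script's exact code):

pcie_serial=$(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 | grep 'Serial Number:' | awk '{print $3}')
tcp_serial=$(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' | grep 'Serial Number:' | awk '{print $3}')
[ "$pcie_serial" != "$tcp_serial" ] && exit 1    # the same check is repeated for 'Model Number:'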
00:35:32.045   19:20:03 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:35:32.045   19:20:03 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:32.045   19:20:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:35:32.045   19:20:03 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:32.045   19:20:03 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT
00:35:32.045   19:20:03 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini
00:35:32.045   19:20:03 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup
00:35:32.045   19:20:03 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync
00:35:32.045   19:20:03 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:35:32.045   19:20:03 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e
00:35:32.045   19:20:03 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20}
00:35:32.045   19:20:03 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:35:32.045  rmmod nvme_tcp
00:35:32.045  rmmod nvme_fabrics
00:35:32.045  rmmod nvme_keyring
00:35:32.045   19:20:03 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:35:32.045   19:20:03 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e
00:35:32.045   19:20:03 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0
00:35:32.045   19:20:03 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 129584 ']'
00:35:32.045   19:20:03 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 129584
00:35:32.045   19:20:03 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 129584 ']'
00:35:32.045   19:20:03 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 129584
00:35:32.045    19:20:03 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname
00:35:32.045   19:20:03 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:35:32.045    19:20:03 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 129584
00:35:32.045   19:20:03 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:35:32.045   19:20:03 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:35:32.045  killing process with pid 129584
00:35:32.045   19:20:03 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 129584'
00:35:32.045   19:20:03 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 129584
00:35:32.045   19:20:03 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 129584
00:35:32.304   19:20:04 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:35:32.304   19:20:04 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:35:32.304   19:20:04 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:35:32.304   19:20:04 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr
00:35:32.304   19:20:04 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save
00:35:32.304   19:20:04 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:35:32.304   19:20:04 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore
00:35:32.304   19:20:04 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:35:32.304   19:20:04 nvmf_identify_passthru -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:35:32.304   19:20:04 nvmf_identify_passthru -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:35:32.304   19:20:04 nvmf_identify_passthru -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:35:32.304   19:20:04 nvmf_identify_passthru -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:35:32.304   19:20:04 nvmf_identify_passthru -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:35:32.304   19:20:04 nvmf_identify_passthru -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:35:32.304   19:20:04 nvmf_identify_passthru -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:35:32.304   19:20:04 nvmf_identify_passthru -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:35:32.304   19:20:04 nvmf_identify_passthru -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:35:32.304   19:20:04 nvmf_identify_passthru -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:35:32.565   19:20:04 nvmf_identify_passthru -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:35:32.565   19:20:04 nvmf_identify_passthru -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:35:32.565   19:20:04 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:35:32.565   19:20:04 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:35:32.565   19:20:04 nvmf_identify_passthru -- nvmf/common.sh@246 -- # remove_spdk_ns
00:35:32.565   19:20:04 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:35:32.565   19:20:04 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null'
00:35:32.565    19:20:04 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:35:32.565   19:20:04 nvmf_identify_passthru -- nvmf/common.sh@300 -- # return 0
00:35:32.565  
00:35:32.565  real	0m3.581s
00:35:32.565  user	0m8.275s
00:35:32.565  sys	0m0.996s
00:35:32.565   19:20:04 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable
00:35:32.565   19:20:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:35:32.565  ************************************
00:35:32.565  END TEST nvmf_identify_passthru
00:35:32.565  ************************************
00:35:32.565   19:20:04  -- spdk/autotest.sh@289 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh
00:35:32.565   19:20:04  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:35:32.565   19:20:04  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:35:32.565   19:20:04  -- common/autotest_common.sh@10 -- # set +x
00:35:32.565  ************************************
00:35:32.565  START TEST nvmf_dif
00:35:32.565  ************************************
00:35:32.565   19:20:04 nvmf_dif -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh
00:35:32.841  * Looking for test storage...
00:35:32.841  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:35:32.841    19:20:04 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:35:32.841     19:20:04 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version
00:35:32.841     19:20:04 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:35:32.841    19:20:04 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:35:32.841    19:20:04 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:35:32.841    19:20:04 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l
00:35:32.841    19:20:04 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l
00:35:32.841    19:20:04 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-:
00:35:32.841    19:20:04 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1
00:35:32.841    19:20:04 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-:
00:35:32.841    19:20:04 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2
00:35:32.841    19:20:04 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<'
00:35:32.841    19:20:04 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2
00:35:32.841    19:20:04 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1
00:35:32.841    19:20:04 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:35:32.841    19:20:04 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in
00:35:32.841    19:20:04 nvmf_dif -- scripts/common.sh@345 -- # : 1
00:35:32.841    19:20:04 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 ))
00:35:32.841    19:20:04 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:35:32.841     19:20:04 nvmf_dif -- scripts/common.sh@365 -- # decimal 1
00:35:32.841     19:20:04 nvmf_dif -- scripts/common.sh@353 -- # local d=1
00:35:32.841     19:20:04 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:35:32.841     19:20:04 nvmf_dif -- scripts/common.sh@355 -- # echo 1
00:35:32.841    19:20:04 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1
00:35:32.841     19:20:04 nvmf_dif -- scripts/common.sh@366 -- # decimal 2
00:35:32.841     19:20:04 nvmf_dif -- scripts/common.sh@353 -- # local d=2
00:35:32.841     19:20:04 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:35:32.841     19:20:04 nvmf_dif -- scripts/common.sh@355 -- # echo 2
00:35:32.841    19:20:04 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2
00:35:32.841    19:20:04 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:35:32.841    19:20:04 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:35:32.841    19:20:04 nvmf_dif -- scripts/common.sh@368 -- # return 0
00:35:32.841    19:20:04 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:35:32.841    19:20:04 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:35:32.841  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:35:32.841  		--rc genhtml_branch_coverage=1
00:35:32.841  		--rc genhtml_function_coverage=1
00:35:32.841  		--rc genhtml_legend=1
00:35:32.841  		--rc geninfo_all_blocks=1
00:35:32.841  		--rc geninfo_unexecuted_blocks=1
00:35:32.841  		
00:35:32.841  		'
00:35:32.841    19:20:04 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:35:32.841  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:35:32.841  		--rc genhtml_branch_coverage=1
00:35:32.841  		--rc genhtml_function_coverage=1
00:35:32.841  		--rc genhtml_legend=1
00:35:32.841  		--rc geninfo_all_blocks=1
00:35:32.841  		--rc geninfo_unexecuted_blocks=1
00:35:32.841  		
00:35:32.841  		'
00:35:32.841    19:20:04 nvmf_dif -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:35:32.841  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:35:32.841  		--rc genhtml_branch_coverage=1
00:35:32.841  		--rc genhtml_function_coverage=1
00:35:32.841  		--rc genhtml_legend=1
00:35:32.841  		--rc geninfo_all_blocks=1
00:35:32.841  		--rc geninfo_unexecuted_blocks=1
00:35:32.841  		
00:35:32.841  		'
00:35:32.841    19:20:04 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:35:32.841  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:35:32.841  		--rc genhtml_branch_coverage=1
00:35:32.841  		--rc genhtml_function_coverage=1
00:35:32.841  		--rc genhtml_legend=1
00:35:32.841  		--rc geninfo_all_blocks=1
00:35:32.841  		--rc geninfo_unexecuted_blocks=1
00:35:32.841  		
00:35:32.841  		'
00:35:32.841   19:20:04 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:35:32.841     19:20:04 nvmf_dif -- nvmf/common.sh@7 -- # uname -s
00:35:32.841    19:20:04 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:35:32.841    19:20:04 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:35:32.841    19:20:04 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:35:32.841    19:20:04 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:35:32.841    19:20:04 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:35:32.841    19:20:04 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:35:32.841    19:20:04 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:35:32.841    19:20:04 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:35:32.841    19:20:04 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:35:32.841     19:20:04 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:35:32.841    19:20:04 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:35:32.841    19:20:04 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:35:32.841    19:20:04 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:35:32.842    19:20:04 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:35:32.842    19:20:04 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:35:32.842    19:20:04 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:35:32.842    19:20:04 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:35:32.842     19:20:04 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob
00:35:32.842     19:20:04 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:35:32.842     19:20:04 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:35:32.842     19:20:04 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:35:32.842      19:20:04 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:32.842      19:20:04 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:32.842      19:20:04 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:32.842      19:20:04 nvmf_dif -- paths/export.sh@5 -- # export PATH
00:35:32.842      19:20:04 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:32.842    19:20:04 nvmf_dif -- nvmf/common.sh@51 -- # : 0
00:35:32.842    19:20:04 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:35:32.842    19:20:04 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:35:32.842    19:20:04 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:35:32.842    19:20:04 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:35:32.842    19:20:04 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:35:32.842    19:20:04 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:35:32.842  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:35:32.842    19:20:04 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:35:32.842    19:20:04 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:35:32.842    19:20:04 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0
00:35:32.842   19:20:04 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16
00:35:32.842   19:20:04 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512
00:35:32.842   19:20:04 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64
00:35:32.842   19:20:04 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1
00:35:32.842   19:20:04 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit
00:35:32.842   19:20:04 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:35:32.842   19:20:04 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:35:32.842   19:20:04 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs
00:35:32.842   19:20:04 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no
00:35:32.842   19:20:04 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns
00:35:32.842   19:20:04 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:35:32.842   19:20:04 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null'
00:35:32.842    19:20:04 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:35:32.842   19:20:04 nvmf_dif -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:35:32.842   19:20:04 nvmf_dif -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:35:32.842   19:20:04 nvmf_dif -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:35:32.842   19:20:04 nvmf_dif -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:35:32.842   19:20:04 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:35:32.842   19:20:04 nvmf_dif -- nvmf/common.sh@460 -- # nvmf_veth_init
00:35:32.842   19:20:04 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:35:32.842   19:20:04 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:35:32.842   19:20:04 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:35:32.842   19:20:04 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:35:32.842   19:20:04 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:35:32.842   19:20:04 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:35:32.842   19:20:04 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:35:32.842   19:20:04 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:35:32.842   19:20:04 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:35:32.842   19:20:04 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:35:32.842   19:20:04 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:35:32.842   19:20:04 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:35:32.842   19:20:04 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:35:32.842   19:20:04 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:35:32.842   19:20:04 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:35:32.842   19:20:04 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:35:32.842   19:20:04 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:35:32.842  Cannot find device "nvmf_init_br"
00:35:32.842   19:20:04 nvmf_dif -- nvmf/common.sh@162 -- # true
00:35:32.842   19:20:04 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:35:32.842  Cannot find device "nvmf_init_br2"
00:35:32.842   19:20:04 nvmf_dif -- nvmf/common.sh@163 -- # true
00:35:32.842   19:20:04 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:35:32.842  Cannot find device "nvmf_tgt_br"
00:35:32.842   19:20:04 nvmf_dif -- nvmf/common.sh@164 -- # true
00:35:32.842   19:20:04 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:35:32.842  Cannot find device "nvmf_tgt_br2"
00:35:32.842   19:20:04 nvmf_dif -- nvmf/common.sh@165 -- # true
00:35:32.842   19:20:04 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:35:32.842  Cannot find device "nvmf_init_br"
00:35:33.114   19:20:04 nvmf_dif -- nvmf/common.sh@166 -- # true
00:35:33.114   19:20:04 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:35:33.114  Cannot find device "nvmf_init_br2"
00:35:33.114   19:20:04 nvmf_dif -- nvmf/common.sh@167 -- # true
00:35:33.114   19:20:04 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:35:33.114  Cannot find device "nvmf_tgt_br"
00:35:33.114   19:20:04 nvmf_dif -- nvmf/common.sh@168 -- # true
00:35:33.114   19:20:04 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:35:33.114  Cannot find device "nvmf_tgt_br2"
00:35:33.114   19:20:04 nvmf_dif -- nvmf/common.sh@169 -- # true
00:35:33.114   19:20:04 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:35:33.114  Cannot find device "nvmf_br"
00:35:33.114   19:20:04 nvmf_dif -- nvmf/common.sh@170 -- # true
00:35:33.114   19:20:04 nvmf_dif -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:35:33.114  Cannot find device "nvmf_init_if"
00:35:33.114   19:20:04 nvmf_dif -- nvmf/common.sh@171 -- # true
00:35:33.114   19:20:04 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:35:33.114  Cannot find device "nvmf_init_if2"
00:35:33.114   19:20:04 nvmf_dif -- nvmf/common.sh@172 -- # true
00:35:33.114   19:20:04 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:35:33.114  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:35:33.114   19:20:04 nvmf_dif -- nvmf/common.sh@173 -- # true
00:35:33.114   19:20:04 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:35:33.114  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:35:33.114   19:20:04 nvmf_dif -- nvmf/common.sh@174 -- # true
00:35:33.114   19:20:04 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:35:33.114   19:20:04 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:35:33.114   19:20:04 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:35:33.114   19:20:04 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:35:33.114   19:20:04 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:35:33.114   19:20:04 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:35:33.114   19:20:04 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:35:33.114   19:20:04 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:35:33.114   19:20:04 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:35:33.114   19:20:04 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:35:33.114   19:20:04 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:35:33.114   19:20:04 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:35:33.114   19:20:04 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:35:33.114   19:20:04 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:35:33.114   19:20:04 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:35:33.114   19:20:04 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:35:33.114   19:20:04 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:35:33.114   19:20:04 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:35:33.114   19:20:04 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:35:33.114   19:20:04 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:35:33.114   19:20:04 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:35:33.114   19:20:04 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:35:33.114   19:20:04 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:35:33.114   19:20:04 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:35:33.114   19:20:04 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:35:33.114   19:20:04 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:35:33.114   19:20:04 nvmf_dif -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:35:33.114   19:20:04 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:35:33.114   19:20:04 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:35:33.115   19:20:04 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:35:33.115   19:20:04 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:35:33.115   19:20:04 nvmf_dif -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:35:33.115   19:20:04 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:35:33.115  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:35:33.115  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms
00:35:33.115  
00:35:33.115  --- 10.0.0.3 ping statistics ---
00:35:33.115  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:35:33.115  rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms
00:35:33.115   19:20:04 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:35:33.115  PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:35:33.115  64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms
00:35:33.115  
00:35:33.115  --- 10.0.0.4 ping statistics ---
00:35:33.115  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:35:33.115  rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms
00:35:33.115   19:20:04 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:35:33.373  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:35:33.373  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms
00:35:33.373  
00:35:33.373  --- 10.0.0.1 ping statistics ---
00:35:33.373  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:35:33.373  rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms
00:35:33.373   19:20:04 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:35:33.373  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:35:33.373  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms
00:35:33.373  
00:35:33.373  --- 10.0.0.2 ping statistics ---
00:35:33.373  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:35:33.373  rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms
00:35:33.373   19:20:04 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:35:33.373   19:20:04 nvmf_dif -- nvmf/common.sh@461 -- # return 0
00:35:33.373   19:20:04 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']'
00:35:33.373   19:20:04 nvmf_dif -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:35:33.631  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:35:33.631  0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:35:33.631  0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:35:33.631   19:20:05 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:35:33.631   19:20:05 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:35:33.631   19:20:05 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:35:33.631   19:20:05 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:35:33.631   19:20:05 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:35:33.631   19:20:05 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:35:33.631   19:20:05 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip'
00:35:33.631   19:20:05 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart
00:35:33.631   19:20:05 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:35:33.631   19:20:05 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable
00:35:33.631   19:20:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:35:33.631   19:20:05 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=129991
00:35:33.631   19:20:05 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:35:33.631   19:20:05 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 129991
00:35:33.631   19:20:05 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 129991 ']'
00:35:33.631   19:20:05 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:35:33.631   19:20:05 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100
00:35:33.631   19:20:05 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:35:33.631  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:35:33.631   19:20:05 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable
00:35:33.631   19:20:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:35:33.631  [2024-12-13 19:20:05.436819] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:35:33.631  [2024-12-13 19:20:05.436912] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:35:33.889  [2024-12-13 19:20:05.592745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:33.889  [2024-12-13 19:20:05.642094] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:35:33.889  [2024-12-13 19:20:05.642172] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:35:33.889  [2024-12-13 19:20:05.642188] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:35:33.889  [2024-12-13 19:20:05.642199] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:35:33.889  [2024-12-13 19:20:05.642209] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:35:33.889  [2024-12-13 19:20:05.642711] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:35:34.147   19:20:05 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:35:34.147   19:20:05 nvmf_dif -- common/autotest_common.sh@868 -- # return 0
00:35:34.147   19:20:05 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:35:34.147   19:20:05 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable
00:35:34.147   19:20:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:35:34.147   19:20:05 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:35:34.147   19:20:05 nvmf_dif -- target/dif.sh@139 -- # create_transport
00:35:34.147   19:20:05 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip
00:35:34.147   19:20:05 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:34.147   19:20:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:35:34.147  [2024-12-13 19:20:05.867308] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:35:34.147   19:20:05 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:34.147   19:20:05 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1
00:35:34.147   19:20:05 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:35:34.147   19:20:05 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable
00:35:34.147   19:20:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:35:34.147  ************************************
00:35:34.147  START TEST fio_dif_1_default
00:35:34.147  ************************************
00:35:34.147   19:20:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1
00:35:34.147   19:20:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0
00:35:34.147   19:20:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub
00:35:34.147   19:20:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@"
00:35:34.147   19:20:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0
00:35:34.147   19:20:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0
00:35:34.147   19:20:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
00:35:34.147   19:20:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:34.147   19:20:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x
00:35:34.147  bdev_null0
00:35:34.147   19:20:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:34.147   19:20:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:35:34.147   19:20:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:34.147   19:20:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x
00:35:34.147   19:20:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:34.147   19:20:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:35:34.147   19:20:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:34.147   19:20:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x
00:35:34.147   19:20:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:34.147   19:20:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
00:35:34.147   19:20:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:34.147   19:20:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x
00:35:34.147  [2024-12-13 19:20:05.911516] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:35:34.147   19:20:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:34.147   19:20:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62
00:35:34.147    19:20:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0
00:35:34.147    19:20:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0
00:35:34.147    19:20:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=()
00:35:34.147    19:20:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config
00:35:34.147    19:20:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:35:34.147   19:20:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:35:34.147    19:20:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf
00:35:34.147    19:20:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:35:34.147  {
00:35:34.147    "params": {
00:35:34.147      "name": "Nvme$subsystem",
00:35:34.147      "trtype": "$TEST_TRANSPORT",
00:35:34.147      "traddr": "$NVMF_FIRST_TARGET_IP",
00:35:34.147      "adrfam": "ipv4",
00:35:34.147      "trsvcid": "$NVMF_PORT",
00:35:34.147      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:35:34.147      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:35:34.148      "hdgst": ${hdgst:-false},
00:35:34.148      "ddgst": ${ddgst:-false}
00:35:34.148    },
00:35:34.148    "method": "bdev_nvme_attach_controller"
00:35:34.148  }
00:35:34.148  EOF
00:35:34.148  )")
00:35:34.148   19:20:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:35:34.148    19:20:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file
00:35:34.148    19:20:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat
00:35:34.148   19:20:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:35:34.148   19:20:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:35:34.148   19:20:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers
00:35:34.148   19:20:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:35:34.148   19:20:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift
00:35:34.148   19:20:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib=
00:35:34.148   19:20:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:35:34.148     19:20:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat
00:35:34.148    19:20:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 ))
00:35:34.148    19:20:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files ))
00:35:34.148    19:20:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:35:34.148    19:20:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan
00:35:34.148    19:20:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:35:34.148    19:20:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq .
00:35:34.148     19:20:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=,
00:35:34.148     19:20:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:35:34.148    "params": {
00:35:34.148      "name": "Nvme0",
00:35:34.148      "trtype": "tcp",
00:35:34.148      "traddr": "10.0.0.3",
00:35:34.148      "adrfam": "ipv4",
00:35:34.148      "trsvcid": "4420",
00:35:34.148      "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:35:34.148      "hostnqn": "nqn.2016-06.io.spdk:host0",
00:35:34.148      "hdgst": false,
00:35:34.148      "ddgst": false
00:35:34.148    },
00:35:34.148    "method": "bdev_nvme_attach_controller"
00:35:34.148  }'
00:35:34.148   19:20:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib=
00:35:34.148   19:20:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:35:34.148   19:20:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:35:34.148    19:20:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:35:34.148    19:20:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:35:34.148    19:20:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:35:34.405   19:20:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib=
00:35:34.405   19:20:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:35:34.405   19:20:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:35:34.405   19:20:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:35:34.405  filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4
00:35:34.405  fio-3.35
00:35:34.405  Starting 1 thread
00:35:46.605  
00:35:46.605  filename0: (groupid=0, jobs=1): err= 0: pid=130061: Fri Dec 13 19:20:16 2024
00:35:46.605    read: IOPS=3485, BW=13.6MiB/s (14.3MB/s)(136MiB/10001msec)
00:35:46.605      slat (nsec): min=5936, max=39006, avg=7037.17, stdev=2390.57
00:35:46.605      clat (usec): min=356, max=42444, avg=1126.90, stdev=5415.60
00:35:46.605       lat (usec): min=362, max=42454, avg=1133.94, stdev=5415.74
00:35:46.605      clat percentiles (usec):
00:35:46.605       |  1.00th=[  363],  5.00th=[  367], 10.00th=[  371], 20.00th=[  375],
00:35:46.605       | 30.00th=[  379], 40.00th=[  383], 50.00th=[  388], 60.00th=[  392],
00:35:46.605       | 70.00th=[  400], 80.00th=[  404], 90.00th=[  416], 95.00th=[  433],
00:35:46.605       | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41681],
00:35:46.605       | 99.99th=[42206]
00:35:46.605     bw (  KiB/s): min= 7008, max=23136, per=100.00%, avg=14053.05, stdev=3979.01, samples=19
00:35:46.605     iops        : min= 1752, max= 5784, avg=3513.26, stdev=994.75, samples=19
00:35:46.605    lat (usec)   : 500=97.91%, 750=0.25%, 1000=0.01%
00:35:46.605    lat (msec)   : 4=0.01%, 50=1.81%
00:35:46.605    cpu          : usr=89.27%, sys=9.17%, ctx=9, majf=0, minf=0
00:35:46.605    IO depths    : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:35:46.605       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:46.605       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:46.605       issued rwts: total=34856,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:46.605       latency   : target=0, window=0, percentile=100.00%, depth=4
00:35:46.605  
00:35:46.605  Run status group 0 (all jobs):
00:35:46.605     READ: bw=13.6MiB/s (14.3MB/s), 13.6MiB/s-13.6MiB/s (14.3MB/s-14.3MB/s), io=136MiB (143MB), run=10001-10001msec
00:35:46.605   19:20:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0
00:35:46.605   19:20:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub
00:35:46.605   19:20:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@"
00:35:46.605   19:20:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0
00:35:46.605   19:20:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0
00:35:46.605   19:20:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:35:46.605   19:20:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:46.605   19:20:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x
00:35:46.605   19:20:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:46.606   19:20:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:35:46.606   19:20:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:46.606   19:20:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x
00:35:46.606  ************************************
00:35:46.606  END TEST fio_dif_1_default
00:35:46.606  ************************************
00:35:46.606   19:20:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:46.606  
00:35:46.606  real	0m11.050s
00:35:46.606  user	0m9.578s
00:35:46.606  sys	0m1.209s
00:35:46.606   19:20:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable
00:35:46.606   19:20:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x
00:35:46.606   19:20:16 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems
00:35:46.606   19:20:16 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:35:46.606   19:20:16 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable
00:35:46.606   19:20:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:35:46.606  ************************************
00:35:46.606  START TEST fio_dif_1_multi_subsystems
00:35:46.606  ************************************
00:35:46.606   19:20:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems
00:35:46.606   19:20:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1
00:35:46.606   19:20:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1
00:35:46.606   19:20:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub
00:35:46.606   19:20:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@"
00:35:46.606   19:20:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0
00:35:46.606   19:20:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0
00:35:46.606   19:20:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
00:35:46.606   19:20:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:46.606   19:20:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:35:46.606  bdev_null0
00:35:46.606   19:20:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:46.606   19:20:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:35:46.606   19:20:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:46.606   19:20:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:35:46.606   19:20:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:46.606   19:20:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:35:46.606   19:20:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:46.606   19:20:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:35:46.606   19:20:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:46.606   19:20:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
00:35:46.606   19:20:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:46.606   19:20:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:35:46.606  [2024-12-13 19:20:17.015919] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:35:46.606   19:20:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:46.606   19:20:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@"
00:35:46.606   19:20:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1
00:35:46.606   19:20:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1
00:35:46.606   19:20:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1
00:35:46.606   19:20:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:46.606   19:20:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:35:46.606  bdev_null1
00:35:46.606   19:20:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:46.606   19:20:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host
00:35:46.606   19:20:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:46.606   19:20:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:35:46.606   19:20:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:46.606   19:20:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
00:35:46.606   19:20:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:46.606   19:20:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:35:46.606   19:20:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:46.606   19:20:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:35:46.606   19:20:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:46.606   19:20:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:35:46.606   19:20:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:46.606    19:20:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1
00:35:46.606   19:20:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62
00:35:46.606    19:20:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1
00:35:46.606    19:20:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=()
00:35:46.606    19:20:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config
00:35:46.606    19:20:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:35:46.606    19:20:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:35:46.606  {
00:35:46.606    "params": {
00:35:46.606      "name": "Nvme$subsystem",
00:35:46.606      "trtype": "$TEST_TRANSPORT",
00:35:46.606      "traddr": "$NVMF_FIRST_TARGET_IP",
00:35:46.606      "adrfam": "ipv4",
00:35:46.606      "trsvcid": "$NVMF_PORT",
00:35:46.606      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:35:46.606      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:35:46.606      "hdgst": ${hdgst:-false},
00:35:46.606      "ddgst": ${ddgst:-false}
00:35:46.606    },
00:35:46.606    "method": "bdev_nvme_attach_controller"
00:35:46.606  }
00:35:46.606  EOF
00:35:46.606  )")
00:35:46.606   19:20:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:35:46.606    19:20:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf
00:35:46.606    19:20:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file
00:35:46.606    19:20:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat
00:35:46.606   19:20:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:35:46.606     19:20:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat
00:35:46.606   19:20:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:35:46.606   19:20:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:35:46.606   19:20:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers
00:35:46.606   19:20:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:35:46.606   19:20:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift
00:35:46.606   19:20:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib=
00:35:46.606   19:20:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:35:46.606    19:20:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 ))
00:35:46.606    19:20:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files ))
00:35:46.606    19:20:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat
00:35:46.606    19:20:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:35:46.606    19:20:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:35:46.606  {
00:35:46.606    "params": {
00:35:46.606      "name": "Nvme$subsystem",
00:35:46.606      "trtype": "$TEST_TRANSPORT",
00:35:46.606      "traddr": "$NVMF_FIRST_TARGET_IP",
00:35:46.606      "adrfam": "ipv4",
00:35:46.606      "trsvcid": "$NVMF_PORT",
00:35:46.606      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:35:46.606      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:35:46.606      "hdgst": ${hdgst:-false},
00:35:46.606      "ddgst": ${ddgst:-false}
00:35:46.606    },
00:35:46.606    "method": "bdev_nvme_attach_controller"
00:35:46.606  }
00:35:46.606  EOF
00:35:46.606  )")
00:35:46.606    19:20:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:35:46.606     19:20:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat
00:35:46.606    19:20:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan
00:35:46.606    19:20:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ ))
00:35:46.606    19:20:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files ))
00:35:46.606    19:20:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:35:46.606    19:20:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq .
00:35:46.606     19:20:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=,
00:35:46.606     19:20:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:35:46.606    "params": {
00:35:46.606      "name": "Nvme0",
00:35:46.606      "trtype": "tcp",
00:35:46.606      "traddr": "10.0.0.3",
00:35:46.606      "adrfam": "ipv4",
00:35:46.606      "trsvcid": "4420",
00:35:46.606      "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:35:46.606      "hostnqn": "nqn.2016-06.io.spdk:host0",
00:35:46.606      "hdgst": false,
00:35:46.606      "ddgst": false
00:35:46.606    },
00:35:46.606    "method": "bdev_nvme_attach_controller"
00:35:46.606  },{
00:35:46.606    "params": {
00:35:46.606      "name": "Nvme1",
00:35:46.606      "trtype": "tcp",
00:35:46.606      "traddr": "10.0.0.3",
00:35:46.606      "adrfam": "ipv4",
00:35:46.606      "trsvcid": "4420",
00:35:46.606      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:35:46.606      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:35:46.606      "hdgst": false,
00:35:46.606      "ddgst": false
00:35:46.606    },
00:35:46.606    "method": "bdev_nvme_attach_controller"
00:35:46.606  }'
00:35:46.606   19:20:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib=
00:35:46.606   19:20:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:35:46.606   19:20:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:35:46.606    19:20:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:35:46.606    19:20:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:35:46.606    19:20:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:35:46.606   19:20:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib=
00:35:46.606   19:20:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:35:46.606   19:20:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:35:46.606   19:20:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:35:46.606  filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4
00:35:46.606  filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4
00:35:46.606  fio-3.35
00:35:46.606  Starting 2 threads
00:35:56.574  
00:35:56.574  filename0: (groupid=0, jobs=1): err= 0: pid=130223: Fri Dec 13 19:20:27 2024
00:35:56.574    read: IOPS=230, BW=924KiB/s (946kB/s)(9248KiB/10011msec)
00:35:56.574      slat (nsec): min=6008, max=46603, avg=7531.31, stdev=2817.63
00:35:56.574      clat (usec): min=354, max=41551, avg=17296.68, stdev=19957.68
00:35:56.574       lat (usec): min=360, max=41562, avg=17304.21, stdev=19957.72
00:35:56.574      clat percentiles (usec):
00:35:56.574       |  1.00th=[  367],  5.00th=[  375], 10.00th=[  379], 20.00th=[  388],
00:35:56.574       | 30.00th=[  396], 40.00th=[  412], 50.00th=[  445], 60.00th=[40633],
00:35:56.574       | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:35:56.574       | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681],
00:35:56.574       | 99.99th=[41681]
00:35:56.574     bw (  KiB/s): min=  608, max= 1312, per=45.73%, avg=923.20, stdev=210.29, samples=20
00:35:56.574     iops        : min=  152, max=  328, avg=230.80, stdev=52.57, samples=20
00:35:56.574    lat (usec)   : 500=53.98%, 750=3.55%, 1000=0.61%
00:35:56.574    lat (msec)   : 2=0.17%, 50=41.70%
00:35:56.574    cpu          : usr=95.85%, sys=3.75%, ctx=21, majf=0, minf=0
00:35:56.574    IO depths    : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:35:56.574       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:56.574       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:56.574       issued rwts: total=2312,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:56.574       latency   : target=0, window=0, percentile=100.00%, depth=4
00:35:56.574  filename1: (groupid=0, jobs=1): err= 0: pid=130224: Fri Dec 13 19:20:27 2024
00:35:56.574    read: IOPS=273, BW=1095KiB/s (1121kB/s)(10.7MiB/10012msec)
00:35:56.574      slat (nsec): min=6006, max=33516, avg=7444.15, stdev=2513.13
00:35:56.574      clat (usec): min=361, max=41652, avg=14593.18, stdev=19304.85
00:35:56.574       lat (usec): min=368, max=41664, avg=14600.63, stdev=19304.93
00:35:56.574      clat percentiles (usec):
00:35:56.574       |  1.00th=[  367],  5.00th=[  375], 10.00th=[  379], 20.00th=[  388],
00:35:56.574       | 30.00th=[  396], 40.00th=[  404], 50.00th=[  424], 60.00th=[  465],
00:35:56.574       | 70.00th=[40633], 80.00th=[40633], 90.00th=[41157], 95.00th=[41157],
00:35:56.574       | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681],
00:35:56.574       | 99.99th=[41681]
00:35:56.574     bw (  KiB/s): min=  800, max= 1568, per=54.20%, avg=1094.40, stdev=228.08, samples=20
00:35:56.574     iops        : min=  200, max=  392, avg=273.60, stdev=57.02, samples=20
00:35:56.574    lat (usec)   : 500=61.68%, 750=2.85%, 1000=0.29%
00:35:56.574    lat (msec)   : 2=0.15%, 50=35.04%
00:35:56.574    cpu          : usr=95.89%, sys=3.70%, ctx=33, majf=0, minf=0
00:35:56.574    IO depths    : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:35:56.574       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:56.574       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:56.574       issued rwts: total=2740,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:56.574       latency   : target=0, window=0, percentile=100.00%, depth=4
00:35:56.574  
00:35:56.574  Run status group 0 (all jobs):
00:35:56.574     READ: bw=2018KiB/s (2067kB/s), 924KiB/s-1095KiB/s (946kB/s-1121kB/s), io=19.7MiB (20.7MB), run=10011-10012msec
00:35:56.574   19:20:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1
00:35:56.574   19:20:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub
00:35:56.574   19:20:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@"
00:35:56.574   19:20:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0
00:35:56.575   19:20:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0
00:35:56.575   19:20:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:35:56.575   19:20:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:56.575   19:20:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:35:56.575   19:20:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:56.575   19:20:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:35:56.575   19:20:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:56.575   19:20:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:35:56.575   19:20:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:56.575   19:20:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@"
00:35:56.575   19:20:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1
00:35:56.575   19:20:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1
00:35:56.575   19:20:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:35:56.575   19:20:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:56.575   19:20:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:35:56.575   19:20:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:56.575   19:20:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1
00:35:56.575   19:20:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:56.575   19:20:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:35:56.575  ************************************
00:35:56.575  END TEST fio_dif_1_multi_subsystems
00:35:56.575  ************************************
00:35:56.575   19:20:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:56.575  
00:35:56.575  real	0m11.339s
00:35:56.575  user	0m20.118s
00:35:56.575  sys	0m1.054s
00:35:56.575   19:20:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable
00:35:56.575   19:20:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:35:56.575   19:20:28 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params
00:35:56.575   19:20:28 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:35:56.575   19:20:28 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable
00:35:56.575   19:20:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:35:56.575  ************************************
00:35:56.575  START TEST fio_dif_rand_params
00:35:56.575  ************************************
00:35:56.575   19:20:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params
00:35:56.575   19:20:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF
00:35:56.575   19:20:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files
00:35:56.575   19:20:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3
00:35:56.575   19:20:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k
00:35:56.575   19:20:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3
00:35:56.575   19:20:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3
00:35:56.575   19:20:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5
00:35:56.575   19:20:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0
00:35:56.575   19:20:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub
00:35:56.575   19:20:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:35:56.575   19:20:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0
00:35:56.575   19:20:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0
00:35:56.575   19:20:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
00:35:56.575   19:20:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:56.575   19:20:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:35:56.575  bdev_null0
00:35:56.575   19:20:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:56.575   19:20:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:35:56.575   19:20:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:56.575   19:20:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:35:56.575   19:20:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:56.575   19:20:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:35:56.575   19:20:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:56.575   19:20:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:35:56.834   19:20:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:56.834   19:20:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
00:35:56.834   19:20:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:56.834   19:20:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:35:56.834  [2024-12-13 19:20:28.407205] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:35:56.834   19:20:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:56.834   19:20:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62
00:35:56.834    19:20:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0
00:35:56.834    19:20:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0
00:35:56.834    19:20:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=()
00:35:56.834    19:20:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config
00:35:56.834    19:20:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:35:56.834    19:20:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:35:56.834  {
00:35:56.834    "params": {
00:35:56.834      "name": "Nvme$subsystem",
00:35:56.834      "trtype": "$TEST_TRANSPORT",
00:35:56.834      "traddr": "$NVMF_FIRST_TARGET_IP",
00:35:56.834      "adrfam": "ipv4",
00:35:56.834      "trsvcid": "$NVMF_PORT",
00:35:56.834      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:35:56.834      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:35:56.834      "hdgst": ${hdgst:-false},
00:35:56.834      "ddgst": ${ddgst:-false}
00:35:56.834    },
00:35:56.834    "method": "bdev_nvme_attach_controller"
00:35:56.834  }
00:35:56.834  EOF
00:35:56.834  )")
00:35:56.834   19:20:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:35:56.834    19:20:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf
00:35:56.834   19:20:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:35:56.834    19:20:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file
00:35:56.834   19:20:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:35:56.834    19:20:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat
00:35:56.834   19:20:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:35:56.834     19:20:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat
00:35:56.834   19:20:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers
00:35:56.834   19:20:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:35:56.834   19:20:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift
00:35:56.834   19:20:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib=
00:35:56.834   19:20:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:35:56.834    19:20:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 ))
00:35:56.834    19:20:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:35:56.834    19:20:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:35:56.834    19:20:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan
00:35:56.834    19:20:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq .
00:35:56.834    19:20:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:35:56.834     19:20:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=,
00:35:56.834     19:20:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:35:56.834    "params": {
00:35:56.834      "name": "Nvme0",
00:35:56.834      "trtype": "tcp",
00:35:56.834      "traddr": "10.0.0.3",
00:35:56.834      "adrfam": "ipv4",
00:35:56.834      "trsvcid": "4420",
00:35:56.834      "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:35:56.834      "hostnqn": "nqn.2016-06.io.spdk:host0",
00:35:56.834      "hdgst": false,
00:35:56.834      "ddgst": false
00:35:56.834    },
00:35:56.834    "method": "bdev_nvme_attach_controller"
00:35:56.834  }'
00:35:56.834   19:20:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=
00:35:56.834   19:20:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:35:56.834   19:20:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:35:56.834    19:20:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:35:56.834    19:20:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:35:56.834    19:20:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:35:56.834   19:20:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=
00:35:56.834   19:20:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:35:56.834   19:20:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:35:56.834   19:20:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
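At this point the external fio binary (/usr/src/fio/fio) picks up SPDK's bdev ioengine by LD_PRELOAD-ing the spdk_bdev plugin built in the repo, reads the bdev JSON config from /dev/fd/62 and the generated job file from /dev/fd/61. A minimal sketch of the same invocation outside the harness, assuming the config and job file are ordinary files (the paths are placeholders, not taken from this log):

    LD_PRELOAD=/path/to/spdk/build/fio/spdk_bdev /usr/src/fio/fio \
        --ioengine=spdk_bdev --spdk_json_conf=bdev.json job.fio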
00:35:56.834  filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3
00:35:56.834  ...
00:35:56.834  fio-3.35
00:35:56.834  Starting 3 threads
00:36:03.399  
00:36:03.399  filename0: (groupid=0, jobs=1): err= 0: pid=130375: Fri Dec 13 19:20:34 2024
00:36:03.399    read: IOPS=249, BW=31.2MiB/s (32.7MB/s)(156MiB/5005msec)
00:36:03.399      slat (nsec): min=6011, max=58505, avg=12899.55, stdev=6167.17
00:36:03.399      clat (usec): min=3663, max=52029, avg=12009.67, stdev=11853.17
00:36:03.399       lat (usec): min=3672, max=52048, avg=12022.57, stdev=11852.88
00:36:03.399      clat percentiles (usec):
00:36:03.399       |  1.00th=[ 3982],  5.00th=[ 6128], 10.00th=[ 6390], 20.00th=[ 6849],
00:36:03.399       | 30.00th=[ 7373], 40.00th=[ 8225], 50.00th=[ 8848], 60.00th=[ 9241],
00:36:03.399       | 70.00th=[ 9503], 80.00th=[ 9896], 90.00th=[11207], 95.00th=[49021],
00:36:03.399       | 99.00th=[50594], 99.50th=[51119], 99.90th=[51119], 99.95th=[52167],
00:36:03.399       | 99.99th=[52167]
00:36:03.399     bw (  KiB/s): min=25600, max=39936, per=30.19%, avg=31800.89, stdev=5515.56, samples=9
00:36:03.399     iops        : min=  200, max=  312, avg=248.44, stdev=43.09, samples=9
00:36:03.399    lat (msec)   : 4=1.04%, 10=80.53%, 20=9.29%, 50=6.09%, 100=3.04%
00:36:03.399    cpu          : usr=93.67%, sys=4.90%, ctx=9, majf=0, minf=0
00:36:03.399    IO depths    : 1=3.8%, 2=96.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:36:03.399       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:03.399       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:03.399       issued rwts: total=1248,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:03.399       latency   : target=0, window=0, percentile=100.00%, depth=3
00:36:03.399  filename0: (groupid=0, jobs=1): err= 0: pid=130376: Fri Dec 13 19:20:34 2024
00:36:03.399    read: IOPS=230, BW=28.9MiB/s (30.3MB/s)(144MiB/5003msec)
00:36:03.399      slat (nsec): min=3900, max=86028, avg=13330.70, stdev=7130.07
00:36:03.399      clat (usec): min=4931, max=53515, avg=12973.44, stdev=12072.77
00:36:03.399       lat (usec): min=4949, max=53527, avg=12986.77, stdev=12072.69
00:36:03.399      clat percentiles (usec):
00:36:03.399       |  1.00th=[ 5538],  5.00th=[ 6259], 10.00th=[ 6521], 20.00th=[ 6915],
00:36:03.399       | 30.00th=[ 7308], 40.00th=[ 9110], 50.00th=[10028], 60.00th=[10421],
00:36:03.399       | 70.00th=[10814], 80.00th=[11469], 90.00th=[12780], 95.00th=[49546],
00:36:03.400       | 99.00th=[52167], 99.50th=[52691], 99.90th=[53216], 99.95th=[53740],
00:36:03.400       | 99.99th=[53740]
00:36:03.400     bw (  KiB/s): min=20736, max=40448, per=28.12%, avg=29610.67, stdev=6884.69, samples=9
00:36:03.400     iops        : min=  162, max=  316, avg=231.33, stdev=53.79, samples=9
00:36:03.400    lat (msec)   : 10=48.66%, 20=41.73%, 50=4.76%, 100=4.85%
00:36:03.400    cpu          : usr=93.70%, sys=4.90%, ctx=5, majf=0, minf=0
00:36:03.400    IO depths    : 1=4.1%, 2=95.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:36:03.400       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:03.400       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:03.400       issued rwts: total=1155,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:03.400       latency   : target=0, window=0, percentile=100.00%, depth=3
00:36:03.400  filename0: (groupid=0, jobs=1): err= 0: pid=130377: Fri Dec 13 19:20:34 2024
00:36:03.400    read: IOPS=342, BW=42.8MiB/s (44.9MB/s)(214MiB/5003msec)
00:36:03.400      slat (usec): min=5, max=133, avg=14.39, stdev= 7.76
00:36:03.400      clat (usec): min=3101, max=51604, avg=8729.45, stdev=4113.28
00:36:03.400       lat (usec): min=3118, max=51610, avg=8743.84, stdev=4114.69
00:36:03.400      clat percentiles (usec):
00:36:03.400       |  1.00th=[ 3458],  5.00th=[ 3556], 10.00th=[ 3654], 20.00th=[ 4359],
00:36:03.400       | 30.00th=[ 7439], 40.00th=[ 7767], 50.00th=[ 8160], 60.00th=[ 8848],
00:36:03.400       | 70.00th=[11207], 80.00th=[12256], 90.00th=[13042], 95.00th=[13566],
00:36:03.400       | 99.00th=[14484], 99.50th=[15008], 99.90th=[50070], 99.95th=[51643],
00:36:03.400       | 99.99th=[51643]
00:36:03.400     bw (  KiB/s): min=30720, max=51456, per=41.73%, avg=43946.67, stdev=6838.12, samples=9
00:36:03.400     iops        : min=  240, max=  402, avg=343.33, stdev=53.42, samples=9
00:36:03.400    lat (msec)   : 4=18.83%, 10=46.36%, 20=34.46%, 50=0.17%, 100=0.17%
00:36:03.400    cpu          : usr=93.88%, sys=4.40%, ctx=7, majf=0, minf=0
00:36:03.400    IO depths    : 1=9.3%, 2=90.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:36:03.400       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:03.400       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:03.400       issued rwts: total=1715,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:03.400       latency   : target=0, window=0, percentile=100.00%, depth=3
00:36:03.400  
00:36:03.400  Run status group 0 (all jobs):
00:36:03.400     READ: bw=103MiB/s (108MB/s), 28.9MiB/s-42.8MiB/s (30.3MB/s-44.9MB/s), io=515MiB (540MB), run=5003-5005msec
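As a quick consistency check on that summary line: the three jobs read 156 MiB + 144 MiB + 214 MiB ≈ 515 MiB in roughly 5.0 s, i.e. 515 MiB / 5.005 s ≈ 103 MiB/s (540 MB / 5.005 s ≈ 108 MB/s), matching the per-job bandwidths of 31.2, 28.9 and 42.8 MiB/s.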
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime=
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:03.400  bdev_null0
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:03.400  [2024-12-13 19:20:34.591744] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:03.400  bdev_null1
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:03.400  bdev_null2
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
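Stripped of the xtrace noise and rpc_cmd status checks, the setup that create_subsystems 0 1 2 performs above reduces to the same four RPCs per subsystem (shown here for sub_id 0, copied from the rpc_cmd lines; rpc_cmd is the harness wrapper around SPDK's scripts/rpc.py, so issuing them directly is an assumption about reproducing this outside the test):

    # 64 MB null bdev, 512-byte blocks, 16-byte metadata, DIF type 2 (NULL_DIF=2 above)
    rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2
    # expose the bdev over NVMe/TCP on 10.0.0.3:4420
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420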
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62
00:36:03.400    19:20:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2
00:36:03.400    19:20:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2
00:36:03.400    19:20:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=()
00:36:03.400    19:20:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config
00:36:03.400    19:20:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:36:03.400    19:20:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:36:03.400  {
00:36:03.400    "params": {
00:36:03.400      "name": "Nvme$subsystem",
00:36:03.400      "trtype": "$TEST_TRANSPORT",
00:36:03.400      "traddr": "$NVMF_FIRST_TARGET_IP",
00:36:03.400      "adrfam": "ipv4",
00:36:03.400      "trsvcid": "$NVMF_PORT",
00:36:03.400      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:36:03.400      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:36:03.400      "hdgst": ${hdgst:-false},
00:36:03.400      "ddgst": ${ddgst:-false}
00:36:03.400    },
00:36:03.400    "method": "bdev_nvme_attach_controller"
00:36:03.400  }
00:36:03.400  EOF
00:36:03.400  )")
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:36:03.400    19:20:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:36:03.400    19:20:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:36:03.400    19:20:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib=
00:36:03.400     19:20:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat
00:36:03.400   19:20:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:36:03.400    19:20:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:36:03.400    19:20:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 ))
00:36:03.400    19:20:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:36:03.400    19:20:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:36:03.400    19:20:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan
00:36:03.400    19:20:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat
00:36:03.400    19:20:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:36:03.400    19:20:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:36:03.400  {
00:36:03.400    "params": {
00:36:03.400      "name": "Nvme$subsystem",
00:36:03.400      "trtype": "$TEST_TRANSPORT",
00:36:03.400      "traddr": "$NVMF_FIRST_TARGET_IP",
00:36:03.400      "adrfam": "ipv4",
00:36:03.400      "trsvcid": "$NVMF_PORT",
00:36:03.400      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:36:03.400      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:36:03.401      "hdgst": ${hdgst:-false},
00:36:03.401      "ddgst": ${ddgst:-false}
00:36:03.401    },
00:36:03.401    "method": "bdev_nvme_attach_controller"
00:36:03.401  }
00:36:03.401  EOF
00:36:03.401  )")
00:36:03.401     19:20:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat
00:36:03.401    19:20:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ ))
00:36:03.401    19:20:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:36:03.401    19:20:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat
00:36:03.401    19:20:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:36:03.401    19:20:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:36:03.401  {
00:36:03.401    "params": {
00:36:03.401      "name": "Nvme$subsystem",
00:36:03.401      "trtype": "$TEST_TRANSPORT",
00:36:03.401      "traddr": "$NVMF_FIRST_TARGET_IP",
00:36:03.401      "adrfam": "ipv4",
00:36:03.401      "trsvcid": "$NVMF_PORT",
00:36:03.401      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:36:03.401      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:36:03.401      "hdgst": ${hdgst:-false},
00:36:03.401      "ddgst": ${ddgst:-false}
00:36:03.401    },
00:36:03.401    "method": "bdev_nvme_attach_controller"
00:36:03.401  }
00:36:03.401  EOF
00:36:03.401  )")
00:36:03.401    19:20:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ ))
00:36:03.401     19:20:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat
00:36:03.401    19:20:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:36:03.401    19:20:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq .
00:36:03.401     19:20:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=,
00:36:03.401     19:20:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:36:03.401    "params": {
00:36:03.401      "name": "Nvme0",
00:36:03.401      "trtype": "tcp",
00:36:03.401      "traddr": "10.0.0.3",
00:36:03.401      "adrfam": "ipv4",
00:36:03.401      "trsvcid": "4420",
00:36:03.401      "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:36:03.401      "hostnqn": "nqn.2016-06.io.spdk:host0",
00:36:03.401      "hdgst": false,
00:36:03.401      "ddgst": false
00:36:03.401    },
00:36:03.401    "method": "bdev_nvme_attach_controller"
00:36:03.401  },{
00:36:03.401    "params": {
00:36:03.401      "name": "Nvme1",
00:36:03.401      "trtype": "tcp",
00:36:03.401      "traddr": "10.0.0.3",
00:36:03.401      "adrfam": "ipv4",
00:36:03.401      "trsvcid": "4420",
00:36:03.401      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:36:03.401      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:36:03.401      "hdgst": false,
00:36:03.401      "ddgst": false
00:36:03.401    },
00:36:03.401    "method": "bdev_nvme_attach_controller"
00:36:03.401  },{
00:36:03.401    "params": {
00:36:03.401      "name": "Nvme2",
00:36:03.401      "trtype": "tcp",
00:36:03.401      "traddr": "10.0.0.3",
00:36:03.401      "adrfam": "ipv4",
00:36:03.401      "trsvcid": "4420",
00:36:03.401      "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:36:03.401      "hostnqn": "nqn.2016-06.io.spdk:host2",
00:36:03.401      "hdgst": false,
00:36:03.401      "ddgst": false
00:36:03.401    },
00:36:03.401    "method": "bdev_nvme_attach_controller"
00:36:03.401  }'
00:36:03.401   19:20:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=
00:36:03.401   19:20:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:36:03.401   19:20:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:36:03.401    19:20:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:36:03.401    19:20:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:36:03.401    19:20:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:36:03.401   19:20:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=
00:36:03.401   19:20:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:36:03.401   19:20:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:36:03.401   19:20:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:36:03.401  filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16
00:36:03.401  ...
00:36:03.401  filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16
00:36:03.401  ...
00:36:03.401  filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16
00:36:03.401  ...
00:36:03.401  fio-3.35
00:36:03.401  Starting 24 threads
00:36:15.606  
00:36:15.606  filename0: (groupid=0, jobs=1): err= 0: pid=130472: Fri Dec 13 19:20:45 2024
00:36:15.606    read: IOPS=228, BW=916KiB/s (938kB/s)(9168KiB/10012msec)
00:36:15.606      slat (usec): min=5, max=9097, avg=27.95, stdev=346.41
00:36:15.606      clat (msec): min=20, max=155, avg=69.63, stdev=19.40
00:36:15.606       lat (msec): min=20, max=155, avg=69.66, stdev=19.40
00:36:15.606      clat percentiles (msec):
00:36:15.606       |  1.00th=[   26],  5.00th=[   40], 10.00th=[   48], 20.00th=[   57],
00:36:15.606       | 30.00th=[   61], 40.00th=[   62], 50.00th=[   68], 60.00th=[   72],
00:36:15.606       | 70.00th=[   81], 80.00th=[   85], 90.00th=[   96], 95.00th=[  105],
00:36:15.606       | 99.00th=[  117], 99.50th=[  123], 99.90th=[  157], 99.95th=[  157],
00:36:15.606       | 99.99th=[  157]
00:36:15.606     bw (  KiB/s): min=  640, max= 1120, per=3.86%, avg=917.74, stdev=128.56, samples=19
00:36:15.606     iops        : min=  160, max=  280, avg=229.42, stdev=32.13, samples=19
00:36:15.606    lat (msec)   : 50=16.06%, 100=78.14%, 250=5.80%
00:36:15.606    cpu          : usr=32.63%, sys=0.56%, ctx=871, majf=0, minf=9
00:36:15.606    IO depths    : 1=2.2%, 2=5.1%, 4=14.0%, 8=67.4%, 16=11.3%, 32=0.0%, >=64=0.0%
00:36:15.606       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.606       complete  : 0=0.0%, 4=91.3%, 8=4.0%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.606       issued rwts: total=2292,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:15.606       latency   : target=0, window=0, percentile=100.00%, depth=16
00:36:15.606  filename0: (groupid=0, jobs=1): err= 0: pid=130473: Fri Dec 13 19:20:45 2024
00:36:15.606    read: IOPS=222, BW=892KiB/s (913kB/s)(8920KiB/10003msec)
00:36:15.606      slat (usec): min=5, max=8032, avg=16.17, stdev=170.02
00:36:15.606      clat (msec): min=4, max=149, avg=71.67, stdev=21.01
00:36:15.606       lat (msec): min=4, max=149, avg=71.68, stdev=21.02
00:36:15.606      clat percentiles (msec):
00:36:15.606       |  1.00th=[    9],  5.00th=[   43], 10.00th=[   49], 20.00th=[   59],
00:36:15.606       | 30.00th=[   61], 40.00th=[   64], 50.00th=[   70], 60.00th=[   74],
00:36:15.606       | 70.00th=[   83], 80.00th=[   87], 90.00th=[   96], 95.00th=[  108],
00:36:15.606       | 99.00th=[  126], 99.50th=[  140], 99.90th=[  150], 99.95th=[  150],
00:36:15.606       | 99.99th=[  150]
00:36:15.606     bw (  KiB/s): min=  640, max= 1072, per=3.68%, avg=875.89, stdev=112.72, samples=19
00:36:15.606     iops        : min=  160, max=  268, avg=218.95, stdev=28.18, samples=19
00:36:15.606    lat (msec)   : 10=1.43%, 20=0.72%, 50=10.09%, 100=79.42%, 250=8.34%
00:36:15.606    cpu          : usr=32.76%, sys=0.58%, ctx=951, majf=0, minf=9
00:36:15.606    IO depths    : 1=2.6%, 2=5.8%, 4=15.2%, 8=65.8%, 16=10.5%, 32=0.0%, >=64=0.0%
00:36:15.606       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.606       complete  : 0=0.0%, 4=91.4%, 8=3.5%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.606       issued rwts: total=2230,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:15.606       latency   : target=0, window=0, percentile=100.00%, depth=16
00:36:15.606  filename0: (groupid=0, jobs=1): err= 0: pid=130474: Fri Dec 13 19:20:45 2024
00:36:15.606    read: IOPS=281, BW=1127KiB/s (1154kB/s)(11.0MiB/10019msec)
00:36:15.606      slat (usec): min=5, max=4048, avg=14.84, stdev=125.49
00:36:15.606      clat (msec): min=26, max=126, avg=56.69, stdev=18.29
00:36:15.606       lat (msec): min=26, max=126, avg=56.71, stdev=18.29
00:36:15.606      clat percentiles (msec):
00:36:15.606       |  1.00th=[   29],  5.00th=[   34], 10.00th=[   38], 20.00th=[   41],
00:36:15.606       | 30.00th=[   44], 40.00th=[   48], 50.00th=[   54], 60.00th=[   58],
00:36:15.606       | 70.00th=[   65], 80.00th=[   72], 90.00th=[   84], 95.00th=[   94],
00:36:15.606       | 99.00th=[  103], 99.50th=[  114], 99.90th=[  127], 99.95th=[  127],
00:36:15.606       | 99.99th=[  127]
00:36:15.606     bw (  KiB/s): min=  720, max= 1456, per=4.73%, avg=1124.20, stdev=212.96, samples=20
00:36:15.606     iops        : min=  180, max=  364, avg=281.00, stdev=53.19, samples=20
00:36:15.606    lat (msec)   : 50=45.71%, 100=53.01%, 250=1.28%
00:36:15.606    cpu          : usr=49.60%, sys=0.77%, ctx=1268, majf=0, minf=9
00:36:15.606    IO depths    : 1=0.6%, 2=1.3%, 4=7.4%, 8=77.6%, 16=13.0%, 32=0.0%, >=64=0.0%
00:36:15.606       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.606       complete  : 0=0.0%, 4=89.3%, 8=6.2%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.606       issued rwts: total=2822,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:15.606       latency   : target=0, window=0, percentile=100.00%, depth=16
00:36:15.606  filename0: (groupid=0, jobs=1): err= 0: pid=130475: Fri Dec 13 19:20:45 2024
00:36:15.606    read: IOPS=229, BW=919KiB/s (941kB/s)(9188KiB/10001msec)
00:36:15.606      slat (usec): min=5, max=8020, avg=27.60, stdev=315.76
00:36:15.606      clat (msec): min=3, max=162, avg=69.53, stdev=20.35
00:36:15.606       lat (msec): min=3, max=162, avg=69.56, stdev=20.37
00:36:15.606      clat percentiles (msec):
00:36:15.606       |  1.00th=[    8],  5.00th=[   44], 10.00th=[   48], 20.00th=[   59],
00:36:15.606       | 30.00th=[   61], 40.00th=[   64], 50.00th=[   67], 60.00th=[   71],
00:36:15.606       | 70.00th=[   79], 80.00th=[   84], 90.00th=[   95], 95.00th=[  104],
00:36:15.606       | 99.00th=[  134], 99.50th=[  138], 99.90th=[  163], 99.95th=[  163],
00:36:15.606       | 99.99th=[  163]
00:36:15.606     bw (  KiB/s): min=  528, max= 1024, per=3.81%, avg=906.42, stdev=125.68, samples=19
00:36:15.606     iops        : min=  132, max=  256, avg=226.58, stdev=31.43, samples=19
00:36:15.606    lat (msec)   : 4=0.22%, 10=1.18%, 20=0.70%, 50=10.45%, 100=80.98%
00:36:15.606    lat (msec)   : 250=6.49%
00:36:15.606    cpu          : usr=40.58%, sys=0.53%, ctx=1256, majf=0, minf=9
00:36:15.606    IO depths    : 1=2.4%, 2=5.5%, 4=15.5%, 8=65.9%, 16=10.8%, 32=0.0%, >=64=0.0%
00:36:15.606       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.606       complete  : 0=0.0%, 4=91.6%, 8=3.4%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.606       issued rwts: total=2297,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:15.606       latency   : target=0, window=0, percentile=100.00%, depth=16
00:36:15.606  filename0: (groupid=0, jobs=1): err= 0: pid=130476: Fri Dec 13 19:20:45 2024
00:36:15.606    read: IOPS=224, BW=899KiB/s (921kB/s)(8996KiB/10007msec)
00:36:15.606      slat (usec): min=5, max=8021, avg=29.31, stdev=347.91
00:36:15.606      clat (msec): min=22, max=162, avg=71.00, stdev=18.93
00:36:15.606       lat (msec): min=22, max=162, avg=71.03, stdev=18.94
00:36:15.606      clat percentiles (msec):
00:36:15.606       |  1.00th=[   35],  5.00th=[   46], 10.00th=[   52], 20.00th=[   59],
00:36:15.606       | 30.00th=[   61], 40.00th=[   64], 50.00th=[   67], 60.00th=[   71],
00:36:15.606       | 70.00th=[   81], 80.00th=[   85], 90.00th=[   95], 95.00th=[  106],
00:36:15.606       | 99.00th=[  130], 99.50th=[  140], 99.90th=[  163], 99.95th=[  163],
00:36:15.606       | 99.99th=[  163]
00:36:15.606     bw (  KiB/s): min=  568, max= 1312, per=3.75%, avg=892.89, stdev=159.36, samples=19
00:36:15.606     iops        : min=  142, max=  328, avg=223.21, stdev=39.83, samples=19
00:36:15.606    lat (msec)   : 50=8.71%, 100=85.24%, 250=6.05%
00:36:15.606    cpu          : usr=37.09%, sys=0.52%, ctx=993, majf=0, minf=9
00:36:15.606    IO depths    : 1=2.6%, 2=5.9%, 4=16.2%, 8=65.0%, 16=10.4%, 32=0.0%, >=64=0.0%
00:36:15.607       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.607       complete  : 0=0.0%, 4=91.7%, 8=3.1%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.607       issued rwts: total=2249,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:15.607       latency   : target=0, window=0, percentile=100.00%, depth=16
00:36:15.607  filename0: (groupid=0, jobs=1): err= 0: pid=130477: Fri Dec 13 19:20:45 2024
00:36:15.607    read: IOPS=225, BW=903KiB/s (925kB/s)(9036KiB/10002msec)
00:36:15.607      slat (usec): min=3, max=8689, avg=18.76, stdev=248.54
00:36:15.607      clat (msec): min=2, max=174, avg=70.72, stdev=24.29
00:36:15.607       lat (msec): min=2, max=174, avg=70.73, stdev=24.29
00:36:15.607      clat percentiles (msec):
00:36:15.607       |  1.00th=[    3],  5.00th=[   38], 10.00th=[   46], 20.00th=[   58],
00:36:15.607       | 30.00th=[   61], 40.00th=[   62], 50.00th=[   69], 60.00th=[   74],
00:36:15.607       | 70.00th=[   83], 80.00th=[   90], 90.00th=[  101], 95.00th=[  110],
00:36:15.607       | 99.00th=[  140], 99.50th=[  144], 99.90th=[  176], 99.95th=[  176],
00:36:15.607       | 99.99th=[  176]
00:36:15.607     bw (  KiB/s): min=  512, max= 1024, per=3.66%, avg=870.37, stdev=134.63, samples=19
00:36:15.607     iops        : min=  128, max=  256, avg=217.58, stdev=33.66, samples=19
00:36:15.607    lat (msec)   : 4=2.12%, 10=1.42%, 50=10.36%, 100=76.72%, 250=9.38%
00:36:15.607    cpu          : usr=34.47%, sys=0.50%, ctx=997, majf=0, minf=9
00:36:15.607    IO depths    : 1=2.6%, 2=5.9%, 4=15.9%, 8=65.2%, 16=10.3%, 32=0.0%, >=64=0.0%
00:36:15.607       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.607       complete  : 0=0.0%, 4=91.7%, 8=3.0%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.607       issued rwts: total=2259,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:15.607       latency   : target=0, window=0, percentile=100.00%, depth=16
00:36:15.607  filename0: (groupid=0, jobs=1): err= 0: pid=130478: Fri Dec 13 19:20:45 2024
00:36:15.607    read: IOPS=270, BW=1083KiB/s (1109kB/s)(10.6MiB/10021msec)
00:36:15.607      slat (usec): min=3, max=4018, avg=14.03, stdev=98.60
00:36:15.607      clat (msec): min=13, max=129, avg=58.96, stdev=19.03
00:36:15.607       lat (msec): min=13, max=129, avg=58.98, stdev=19.03
00:36:15.607      clat percentiles (msec):
00:36:15.607       |  1.00th=[   22],  5.00th=[   34], 10.00th=[   38], 20.00th=[   42],
00:36:15.607       | 30.00th=[   46], 40.00th=[   52], 50.00th=[   59], 60.00th=[   62],
00:36:15.607       | 70.00th=[   67], 80.00th=[   74], 90.00th=[   86], 95.00th=[   93],
00:36:15.607       | 99.00th=[  108], 99.50th=[  118], 99.90th=[  130], 99.95th=[  130],
00:36:15.607       | 99.99th=[  130]
00:36:15.607     bw (  KiB/s): min=  728, max= 1336, per=4.55%, avg=1081.75, stdev=190.34, samples=20
00:36:15.607     iops        : min=  182, max=  334, avg=270.40, stdev=47.60, samples=20
00:36:15.607    lat (msec)   : 20=0.59%, 50=37.71%, 100=59.09%, 250=2.62%
00:36:15.607    cpu          : usr=40.94%, sys=0.68%, ctx=1267, majf=0, minf=9
00:36:15.607    IO depths    : 1=1.5%, 2=3.1%, 4=11.1%, 8=72.4%, 16=12.0%, 32=0.0%, >=64=0.0%
00:36:15.607       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.607       complete  : 0=0.0%, 4=90.4%, 8=4.9%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.607       issued rwts: total=2713,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:15.607       latency   : target=0, window=0, percentile=100.00%, depth=16
00:36:15.607  filename0: (groupid=0, jobs=1): err= 0: pid=130479: Fri Dec 13 19:20:45 2024
00:36:15.607    read: IOPS=225, BW=904KiB/s (925kB/s)(9040KiB/10003msec)
00:36:15.607      slat (usec): min=3, max=9028, avg=23.43, stdev=304.58
00:36:15.607      clat (msec): min=2, max=130, avg=70.64, stdev=22.10
00:36:15.607       lat (msec): min=2, max=130, avg=70.67, stdev=22.09
00:36:15.607      clat percentiles (msec):
00:36:15.607       |  1.00th=[    3],  5.00th=[   35], 10.00th=[   46], 20.00th=[   58],
00:36:15.607       | 30.00th=[   62], 40.00th=[   67], 50.00th=[   71], 60.00th=[   75],
00:36:15.607       | 70.00th=[   83], 80.00th=[   88], 90.00th=[   95], 95.00th=[  103],
00:36:15.607       | 99.00th=[  126], 99.50th=[  129], 99.90th=[  129], 99.95th=[  131],
00:36:15.607       | 99.99th=[  131]
00:36:15.607     bw (  KiB/s): min=  640, max= 1072, per=3.63%, avg=864.00, stdev=114.20, samples=19
00:36:15.607     iops        : min=  160, max=  268, avg=216.00, stdev=28.55, samples=19
00:36:15.607    lat (msec)   : 4=2.12%, 10=1.42%, 20=0.71%, 50=8.85%, 100=81.77%
00:36:15.607    lat (msec)   : 250=5.13%
00:36:15.607    cpu          : usr=36.00%, sys=0.53%, ctx=1049, majf=0, minf=9
00:36:15.607    IO depths    : 1=2.3%, 2=5.2%, 4=14.7%, 8=66.5%, 16=11.2%, 32=0.0%, >=64=0.0%
00:36:15.607       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.607       complete  : 0=0.0%, 4=91.2%, 8=4.0%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.607       issued rwts: total=2260,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:15.607       latency   : target=0, window=0, percentile=100.00%, depth=16
00:36:15.607  filename1: (groupid=0, jobs=1): err= 0: pid=130480: Fri Dec 13 19:20:45 2024
00:36:15.607    read: IOPS=273, BW=1095KiB/s (1122kB/s)(10.7MiB/10035msec)
00:36:15.607      slat (usec): min=3, max=8026, avg=18.38, stdev=229.33
00:36:15.607      clat (msec): min=12, max=109, avg=58.29, stdev=18.57
00:36:15.607       lat (msec): min=12, max=109, avg=58.30, stdev=18.57
00:36:15.607      clat percentiles (msec):
00:36:15.607       |  1.00th=[   16],  5.00th=[   34], 10.00th=[   36], 20.00th=[   41],
00:36:15.607       | 30.00th=[   48], 40.00th=[   53], 50.00th=[   59], 60.00th=[   61],
00:36:15.607       | 70.00th=[   69], 80.00th=[   73], 90.00th=[   84], 95.00th=[   91],
00:36:15.607       | 99.00th=[  106], 99.50th=[  107], 99.90th=[  110], 99.95th=[  110],
00:36:15.607       | 99.99th=[  110]
00:36:15.607     bw (  KiB/s): min=  864, max= 1504, per=4.59%, avg=1092.65, stdev=183.07, samples=20
00:36:15.607     iops        : min=  216, max=  376, avg=273.15, stdev=45.74, samples=20
00:36:15.607    lat (msec)   : 20=1.75%, 50=35.74%, 100=60.48%, 250=2.04%
00:36:15.607    cpu          : usr=35.88%, sys=0.56%, ctx=1127, majf=0, minf=9
00:36:15.607    IO depths    : 1=0.8%, 2=1.9%, 4=7.8%, 8=76.3%, 16=13.2%, 32=0.0%, >=64=0.0%
00:36:15.607       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.607       complete  : 0=0.0%, 4=89.7%, 8=6.3%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.607       issued rwts: total=2748,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:15.607       latency   : target=0, window=0, percentile=100.00%, depth=16
00:36:15.607  filename1: (groupid=0, jobs=1): err= 0: pid=130481: Fri Dec 13 19:20:45 2024
00:36:15.607    read: IOPS=258, BW=1033KiB/s (1057kB/s)(10.1MiB/10049msec)
00:36:15.607      slat (usec): min=6, max=8018, avg=15.39, stdev=157.37
00:36:15.607      clat (msec): min=5, max=145, avg=61.80, stdev=20.51
00:36:15.607       lat (msec): min=5, max=146, avg=61.81, stdev=20.51
00:36:15.607      clat percentiles (msec):
00:36:15.607       |  1.00th=[    6],  5.00th=[   33], 10.00th=[   40], 20.00th=[   47],
00:36:15.607       | 30.00th=[   50], 40.00th=[   59], 50.00th=[   61], 60.00th=[   64],
00:36:15.607       | 70.00th=[   70], 80.00th=[   74], 90.00th=[   86], 95.00th=[   99],
00:36:15.607       | 99.00th=[  118], 99.50th=[  123], 99.90th=[  146], 99.95th=[  146],
00:36:15.607       | 99.99th=[  146]
00:36:15.607     bw (  KiB/s): min=  640, max= 1654, per=4.33%, avg=1030.70, stdev=232.32, samples=20
00:36:15.607     iops        : min=  160, max=  413, avg=257.65, stdev=58.01, samples=20
00:36:15.607    lat (msec)   : 10=1.85%, 20=0.62%, 50=28.33%, 100=64.53%, 250=4.66%
00:36:15.607    cpu          : usr=34.82%, sys=0.50%, ctx=901, majf=0, minf=9
00:36:15.607    IO depths    : 1=2.0%, 2=4.3%, 4=13.5%, 8=69.1%, 16=11.1%, 32=0.0%, >=64=0.0%
00:36:15.607       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.607       complete  : 0=0.0%, 4=90.7%, 8=4.2%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.607       issued rwts: total=2594,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:15.607       latency   : target=0, window=0, percentile=100.00%, depth=16
00:36:15.607  filename1: (groupid=0, jobs=1): err= 0: pid=130482: Fri Dec 13 19:20:45 2024
00:36:15.607    read: IOPS=255, BW=1021KiB/s (1045kB/s)(9.99MiB/10022msec)
00:36:15.607      slat (usec): min=3, max=4030, avg=15.15, stdev=112.58
00:36:15.607      clat (msec): min=20, max=154, avg=62.55, stdev=18.75
00:36:15.607       lat (msec): min=20, max=154, avg=62.57, stdev=18.75
00:36:15.607      clat percentiles (msec):
00:36:15.607       |  1.00th=[   25],  5.00th=[   37], 10.00th=[   41], 20.00th=[   47],
00:36:15.607       | 30.00th=[   54], 40.00th=[   58], 50.00th=[   62], 60.00th=[   64],
00:36:15.607       | 70.00th=[   70], 80.00th=[   77], 90.00th=[   87], 95.00th=[   96],
00:36:15.607       | 99.00th=[  120], 99.50th=[  123], 99.90th=[  155], 99.95th=[  155],
00:36:15.607       | 99.99th=[  155]
00:36:15.607     bw (  KiB/s): min=  728, max= 1200, per=4.28%, avg=1018.95, stdev=131.86, samples=20
00:36:15.607     iops        : min=  182, max=  300, avg=254.70, stdev=32.94, samples=20
00:36:15.607    lat (msec)   : 50=26.94%, 100=70.05%, 250=3.01%
00:36:15.607    cpu          : usr=41.89%, sys=0.79%, ctx=1287, majf=0, minf=9
00:36:15.607    IO depths    : 1=1.3%, 2=2.9%, 4=10.0%, 8=73.1%, 16=12.8%, 32=0.0%, >=64=0.0%
00:36:15.607       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.607       complete  : 0=0.0%, 4=90.2%, 8=5.6%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.607       issued rwts: total=2558,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:15.607       latency   : target=0, window=0, percentile=100.00%, depth=16
00:36:15.607  filename1: (groupid=0, jobs=1): err= 0: pid=130483: Fri Dec 13 19:20:45 2024
00:36:15.607    read: IOPS=243, BW=975KiB/s (998kB/s)(9748KiB/10002msec)
00:36:15.607      slat (nsec): min=5191, max=57372, avg=12220.00, stdev=7694.16
00:36:15.607      clat (msec): min=24, max=147, avg=65.57, stdev=21.26
00:36:15.607       lat (msec): min=24, max=147, avg=65.58, stdev=21.26
00:36:15.607      clat percentiles (msec):
00:36:15.607       |  1.00th=[   31],  5.00th=[   37], 10.00th=[   42], 20.00th=[   48],
00:36:15.607       | 30.00th=[   56], 40.00th=[   60], 50.00th=[   63], 60.00th=[   66],
00:36:15.607       | 70.00th=[   72], 80.00th=[   80], 90.00th=[   91], 95.00th=[  112],
00:36:15.607       | 99.00th=[  132], 99.50th=[  134], 99.90th=[  148], 99.95th=[  148],
00:36:15.607       | 99.99th=[  148]
00:36:15.607     bw (  KiB/s): min=  560, max= 1168, per=4.13%, avg=983.26, stdev=182.51, samples=19
00:36:15.607     iops        : min=  140, max=  292, avg=245.79, stdev=45.60, samples=19
00:36:15.607    lat (msec)   : 50=24.17%, 100=67.87%, 250=7.96%
00:36:15.607    cpu          : usr=44.94%, sys=0.77%, ctx=1384, majf=0, minf=9
00:36:15.607    IO depths    : 1=1.2%, 2=2.7%, 4=9.9%, 8=73.1%, 16=13.2%, 32=0.0%, >=64=0.0%
00:36:15.607       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.607       complete  : 0=0.0%, 4=90.2%, 8=5.5%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.607       issued rwts: total=2437,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:15.607       latency   : target=0, window=0, percentile=100.00%, depth=16
00:36:15.607  filename1: (groupid=0, jobs=1): err= 0: pid=130484: Fri Dec 13 19:20:45 2024
00:36:15.607    read: IOPS=241, BW=966KiB/s (989kB/s)(9676KiB/10017msec)
00:36:15.607      slat (usec): min=3, max=8031, avg=20.51, stdev=244.65
00:36:15.607      clat (msec): min=31, max=158, avg=66.04, stdev=19.31
00:36:15.607       lat (msec): min=31, max=158, avg=66.06, stdev=19.31
00:36:15.607      clat percentiles (msec):
00:36:15.607       |  1.00th=[   34],  5.00th=[   39], 10.00th=[   43], 20.00th=[   48],
00:36:15.607       | 30.00th=[   57], 40.00th=[   61], 50.00th=[   63], 60.00th=[   68],
00:36:15.607       | 70.00th=[   72], 80.00th=[   84], 90.00th=[   95], 95.00th=[  103],
00:36:15.607       | 99.00th=[  124], 99.50th=[  128], 99.90th=[  131], 99.95th=[  131],
00:36:15.607       | 99.99th=[  159]
00:36:15.608     bw (  KiB/s): min=  640, max= 1328, per=4.06%, avg=965.20, stdev=163.87, samples=20
00:36:15.608     iops        : min=  160, max=  332, avg=241.30, stdev=40.97, samples=20
00:36:15.608    lat (msec)   : 50=23.56%, 100=70.94%, 250=5.50%
00:36:15.608    cpu          : usr=37.02%, sys=0.40%, ctx=1032, majf=0, minf=9
00:36:15.608    IO depths    : 1=1.2%, 2=2.6%, 4=9.8%, 8=74.0%, 16=12.4%, 32=0.0%, >=64=0.0%
00:36:15.608       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.608       complete  : 0=0.0%, 4=89.9%, 8=5.7%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.608       issued rwts: total=2419,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:15.608       latency   : target=0, window=0, percentile=100.00%, depth=16
00:36:15.608  filename1: (groupid=0, jobs=1): err= 0: pid=130485: Fri Dec 13 19:20:45 2024
00:36:15.608    read: IOPS=251, BW=1006KiB/s (1030kB/s)(9.85MiB/10025msec)
00:36:15.608      slat (usec): min=6, max=8039, avg=24.91, stdev=298.51
00:36:15.608      clat (msec): min=16, max=139, avg=63.47, stdev=19.31
00:36:15.608       lat (msec): min=16, max=139, avg=63.50, stdev=19.31
00:36:15.608      clat percentiles (msec):
00:36:15.608       |  1.00th=[   20],  5.00th=[   37], 10.00th=[   41], 20.00th=[   48],
00:36:15.608       | 30.00th=[   54], 40.00th=[   59], 50.00th=[   63], 60.00th=[   66],
00:36:15.608       | 70.00th=[   72], 80.00th=[   83], 90.00th=[   89], 95.00th=[   95],
00:36:15.608       | 99.00th=[  120], 99.50th=[  134], 99.90th=[  140], 99.95th=[  140],
00:36:15.608       | 99.99th=[  140]
00:36:15.608     bw (  KiB/s): min=  728, max= 1280, per=4.21%, avg=1000.95, stdev=173.66, samples=20
00:36:15.608     iops        : min=  182, max=  320, avg=250.20, stdev=43.40, samples=20
00:36:15.608    lat (msec)   : 20=1.27%, 50=26.93%, 100=69.85%, 250=1.94%
00:36:15.608    cpu          : usr=35.74%, sys=0.62%, ctx=1191, majf=0, minf=9
00:36:15.608    IO depths    : 1=1.3%, 2=3.1%, 4=12.3%, 8=71.0%, 16=12.4%, 32=0.0%, >=64=0.0%
00:36:15.608       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.608       complete  : 0=0.0%, 4=90.3%, 8=5.3%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.608       issued rwts: total=2521,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:15.608       latency   : target=0, window=0, percentile=100.00%, depth=16
00:36:15.608  filename1: (groupid=0, jobs=1): err= 0: pid=130486: Fri Dec 13 19:20:45 2024
00:36:15.608    read: IOPS=264, BW=1058KiB/s (1083kB/s)(10.4MiB/10026msec)
00:36:15.608      slat (usec): min=6, max=8041, avg=23.58, stdev=311.41
00:36:15.608      clat (msec): min=22, max=131, avg=60.33, stdev=19.44
00:36:15.608       lat (msec): min=22, max=131, avg=60.35, stdev=19.45
00:36:15.608      clat percentiles (msec):
00:36:15.608       |  1.00th=[   25],  5.00th=[   36], 10.00th=[   38], 20.00th=[   46],
00:36:15.608       | 30.00th=[   48], 40.00th=[   53], 50.00th=[   59], 60.00th=[   61],
00:36:15.608       | 70.00th=[   66], 80.00th=[   73], 90.00th=[   85], 95.00th=[   96],
00:36:15.608       | 99.00th=[  122], 99.50th=[  123], 99.90th=[  132], 99.95th=[  132],
00:36:15.608       | 99.99th=[  132]
00:36:15.608     bw (  KiB/s): min=  720, max= 1344, per=4.42%, avg=1052.95, stdev=156.44, samples=20
00:36:15.608     iops        : min=  180, max=  336, avg=263.20, stdev=39.11, samples=20
00:36:15.608    lat (msec)   : 50=35.31%, 100=60.09%, 250=4.60%
00:36:15.608    cpu          : usr=32.49%, sys=0.70%, ctx=867, majf=0, minf=9
00:36:15.608    IO depths    : 1=0.5%, 2=1.2%, 4=7.1%, 8=77.7%, 16=13.5%, 32=0.0%, >=64=0.0%
00:36:15.608       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.608       complete  : 0=0.0%, 4=89.6%, 8=6.5%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.608       issued rwts: total=2651,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:15.608       latency   : target=0, window=0, percentile=100.00%, depth=16
00:36:15.608  filename1: (groupid=0, jobs=1): err= 0: pid=130487: Fri Dec 13 19:20:45 2024
00:36:15.608    read: IOPS=240, BW=963KiB/s (987kB/s)(9640KiB/10006msec)
00:36:15.608      slat (usec): min=3, max=6602, avg=16.36, stdev=152.64
00:36:15.608      clat (msec): min=17, max=142, avg=66.31, stdev=19.16
00:36:15.608       lat (msec): min=17, max=142, avg=66.33, stdev=19.17
00:36:15.608      clat percentiles (msec):
00:36:15.608       |  1.00th=[   24],  5.00th=[   39], 10.00th=[   44], 20.00th=[   53],
00:36:15.608       | 30.00th=[   58], 40.00th=[   61], 50.00th=[   64], 60.00th=[   68],
00:36:15.608       | 70.00th=[   73], 80.00th=[   81], 90.00th=[   92], 95.00th=[  101],
00:36:15.608       | 99.00th=[  125], 99.50th=[  130], 99.90th=[  144], 99.95th=[  144],
00:36:15.608       | 99.99th=[  144]
00:36:15.608     bw (  KiB/s): min=  680, max= 1384, per=4.07%, avg=967.58, stdev=179.49, samples=19
00:36:15.608     iops        : min=  170, max=  346, avg=241.89, stdev=44.87, samples=19
00:36:15.608    lat (msec)   : 20=0.37%, 50=17.18%, 100=76.80%, 250=5.64%
00:36:15.608    cpu          : usr=42.37%, sys=0.73%, ctx=1364, majf=0, minf=9
00:36:15.608    IO depths    : 1=1.8%, 2=4.1%, 4=12.2%, 8=69.8%, 16=12.0%, 32=0.0%, >=64=0.0%
00:36:15.608       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.608       complete  : 0=0.0%, 4=91.0%, 8=4.6%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.608       issued rwts: total=2410,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:15.608       latency   : target=0, window=0, percentile=100.00%, depth=16
00:36:15.608  filename2: (groupid=0, jobs=1): err= 0: pid=130488: Fri Dec 13 19:20:45 2024
00:36:15.608    read: IOPS=238, BW=954KiB/s (977kB/s)(9560KiB/10017msec)
00:36:15.608      slat (usec): min=3, max=9026, avg=29.33, stdev=375.81
00:36:15.608      clat (msec): min=17, max=122, avg=66.84, stdev=18.62
00:36:15.608       lat (msec): min=17, max=122, avg=66.87, stdev=18.63
00:36:15.608      clat percentiles (msec):
00:36:15.608       |  1.00th=[   29],  5.00th=[   37], 10.00th=[   45], 20.00th=[   50],
00:36:15.608       | 30.00th=[   60], 40.00th=[   61], 50.00th=[   64], 60.00th=[   71],
00:36:15.608       | 70.00th=[   78], 80.00th=[   84], 90.00th=[   93], 95.00th=[   99],
00:36:15.608       | 99.00th=[  114], 99.50th=[  115], 99.90th=[  123], 99.95th=[  123],
00:36:15.608       | 99.99th=[  123]
00:36:15.608     bw (  KiB/s): min=  720, max= 1376, per=4.00%, avg=951.75, stdev=168.76, samples=20
00:36:15.608     iops        : min=  180, max=  344, avg=237.90, stdev=42.23, samples=20
00:36:15.608    lat (msec)   : 20=0.42%, 50=20.54%, 100=74.94%, 250=4.10%
00:36:15.608    cpu          : usr=37.28%, sys=0.56%, ctx=1021, majf=0, minf=9
00:36:15.608    IO depths    : 1=1.6%, 2=3.9%, 4=12.6%, 8=70.2%, 16=11.8%, 32=0.0%, >=64=0.0%
00:36:15.608       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.608       complete  : 0=0.0%, 4=90.8%, 8=4.5%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.608       issued rwts: total=2390,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:15.608       latency   : target=0, window=0, percentile=100.00%, depth=16
00:36:15.608  filename2: (groupid=0, jobs=1): err= 0: pid=130489: Fri Dec 13 19:20:45 2024
00:36:15.608    read: IOPS=264, BW=1059KiB/s (1084kB/s)(10.4MiB/10048msec)
00:36:15.608      slat (usec): min=6, max=8023, avg=16.57, stdev=173.89
00:36:15.608      clat (msec): min=4, max=132, avg=60.32, stdev=18.47
00:36:15.608       lat (msec): min=5, max=132, avg=60.33, stdev=18.48
00:36:15.608      clat percentiles (msec):
00:36:15.608       |  1.00th=[    6],  5.00th=[   35], 10.00th=[   39], 20.00th=[   46],
00:36:15.608       | 30.00th=[   48], 40.00th=[   58], 50.00th=[   61], 60.00th=[   64],
00:36:15.608       | 70.00th=[   70], 80.00th=[   74], 90.00th=[   84], 95.00th=[   93],
00:36:15.608       | 99.00th=[  103], 99.50th=[  108], 99.90th=[  133], 99.95th=[  133],
00:36:15.608       | 99.99th=[  133]
00:36:15.608     bw (  KiB/s): min=  656, max= 1526, per=4.44%, avg=1057.10, stdev=187.81, samples=20
00:36:15.608     iops        : min=  164, max=  381, avg=264.25, stdev=46.89, samples=20
00:36:15.608    lat (msec)   : 10=1.80%, 20=0.60%, 50=29.55%, 100=67.03%, 250=1.02%
00:36:15.608    cpu          : usr=35.12%, sys=0.53%, ctx=991, majf=0, minf=9
00:36:15.608    IO depths    : 1=0.9%, 2=2.0%, 4=8.9%, 8=75.2%, 16=13.0%, 32=0.0%, >=64=0.0%
00:36:15.608       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.608       complete  : 0=0.0%, 4=89.8%, 8=6.0%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.608       issued rwts: total=2660,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:15.608       latency   : target=0, window=0, percentile=100.00%, depth=16
00:36:15.608  filename2: (groupid=0, jobs=1): err= 0: pid=130490: Fri Dec 13 19:20:45 2024
00:36:15.608    read: IOPS=250, BW=1003KiB/s (1027kB/s)(9.80MiB/10012msec)
00:36:15.608      slat (usec): min=3, max=8034, avg=19.14, stdev=202.72
00:36:15.608      clat (msec): min=23, max=142, avg=63.69, stdev=20.02
00:36:15.608       lat (msec): min=23, max=142, avg=63.71, stdev=20.02
00:36:15.608      clat percentiles (msec):
00:36:15.608       |  1.00th=[   29],  5.00th=[   37], 10.00th=[   41], 20.00th=[   47],
00:36:15.608       | 30.00th=[   53], 40.00th=[   59], 50.00th=[   62], 60.00th=[   65],
00:36:15.608       | 70.00th=[   72], 80.00th=[   82], 90.00th=[   91], 95.00th=[  101],
00:36:15.608       | 99.00th=[  131], 99.50th=[  142], 99.90th=[  144], 99.95th=[  144],
00:36:15.608       | 99.99th=[  144]
00:36:15.608     bw (  KiB/s): min=  640, max= 1362, per=4.19%, avg=997.70, stdev=193.98, samples=20
00:36:15.608     iops        : min=  160, max=  340, avg=249.40, stdev=48.45, samples=20
00:36:15.608    lat (msec)   : 50=26.61%, 100=68.80%, 250=4.58%
00:36:15.608    cpu          : usr=42.26%, sys=0.71%, ctx=1520, majf=0, minf=9
00:36:15.608    IO depths    : 1=1.8%, 2=4.3%, 4=13.7%, 8=68.8%, 16=11.4%, 32=0.0%, >=64=0.0%
00:36:15.608       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.608       complete  : 0=0.0%, 4=91.0%, 8=4.1%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.608       issued rwts: total=2510,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:15.608       latency   : target=0, window=0, percentile=100.00%, depth=16
00:36:15.608  filename2: (groupid=0, jobs=1): err= 0: pid=130491: Fri Dec 13 19:20:45 2024
00:36:15.608    read: IOPS=226, BW=907KiB/s (929kB/s)(9084KiB/10013msec)
00:36:15.608      slat (usec): min=3, max=8023, avg=22.18, stdev=260.17
00:36:15.608      clat (msec): min=19, max=135, avg=70.34, stdev=19.73
00:36:15.608       lat (msec): min=19, max=135, avg=70.37, stdev=19.72
00:36:15.608      clat percentiles (msec):
00:36:15.608       |  1.00th=[   29],  5.00th=[   44], 10.00th=[   48], 20.00th=[   57],
00:36:15.608       | 30.00th=[   60], 40.00th=[   63], 50.00th=[   69], 60.00th=[   72],
00:36:15.608       | 70.00th=[   81], 80.00th=[   85], 90.00th=[   96], 95.00th=[  107],
00:36:15.608       | 99.00th=[  129], 99.50th=[  136], 99.90th=[  136], 99.95th=[  136],
00:36:15.608       | 99.99th=[  136]
00:36:15.608     bw (  KiB/s): min=  640, max= 1048, per=3.80%, avg=904.50, stdev=112.05, samples=20
00:36:15.608     iops        : min=  160, max=  262, avg=226.10, stdev=27.98, samples=20
00:36:15.608    lat (msec)   : 20=0.18%, 50=15.02%, 100=76.62%, 250=8.19%
00:36:15.608    cpu          : usr=36.10%, sys=0.63%, ctx=1012, majf=0, minf=9
00:36:15.608    IO depths    : 1=1.9%, 2=4.8%, 4=14.5%, 8=67.3%, 16=11.5%, 32=0.0%, >=64=0.0%
00:36:15.608       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.608       complete  : 0=0.0%, 4=91.3%, 8=3.9%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.608       issued rwts: total=2271,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:15.608       latency   : target=0, window=0, percentile=100.00%, depth=16
00:36:15.608  filename2: (groupid=0, jobs=1): err= 0: pid=130492: Fri Dec 13 19:20:45 2024
00:36:15.608    read: IOPS=295, BW=1183KiB/s (1212kB/s)(11.6MiB/10039msec)
00:36:15.608      slat (usec): min=3, max=8018, avg=17.18, stdev=208.91
00:36:15.608      clat (msec): min=4, max=114, avg=53.96, stdev=17.97
00:36:15.608       lat (msec): min=4, max=114, avg=53.97, stdev=17.98
00:36:15.608      clat percentiles (msec):
00:36:15.608       |  1.00th=[    7],  5.00th=[   30], 10.00th=[   36], 20.00th=[   41],
00:36:15.608       | 30.00th=[   45], 40.00th=[   47], 50.00th=[   51], 60.00th=[   58],
00:36:15.608       | 70.00th=[   63], 80.00th=[   70], 90.00th=[   75], 95.00th=[   84],
00:36:15.609       | 99.00th=[  106], 99.50th=[  112], 99.90th=[  115], 99.95th=[  115],
00:36:15.609       | 99.99th=[  115]
00:36:15.609     bw (  KiB/s): min=  736, max= 1920, per=4.97%, avg=1181.60, stdev=247.07, samples=20
00:36:15.609     iops        : min=  184, max=  480, avg=295.40, stdev=61.77, samples=20
00:36:15.609    lat (msec)   : 10=2.15%, 20=1.08%, 50=45.35%, 100=50.00%, 250=1.41%
00:36:15.609    cpu          : usr=43.37%, sys=0.73%, ctx=1158, majf=0, minf=9
00:36:15.609    IO depths    : 1=0.5%, 2=1.1%, 4=7.1%, 8=77.8%, 16=13.4%, 32=0.0%, >=64=0.0%
00:36:15.609       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.609       complete  : 0=0.0%, 4=89.3%, 8=6.5%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.609       issued rwts: total=2970,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:15.609       latency   : target=0, window=0, percentile=100.00%, depth=16
00:36:15.609  filename2: (groupid=0, jobs=1): err= 0: pid=130493: Fri Dec 13 19:20:45 2024
00:36:15.609    read: IOPS=232, BW=929KiB/s (951kB/s)(9288KiB/10001msec)
00:36:15.609      slat (usec): min=5, max=8033, avg=21.97, stdev=249.60
00:36:15.609      clat (msec): min=23, max=145, avg=68.75, stdev=19.02
00:36:15.609       lat (msec): min=23, max=145, avg=68.77, stdev=19.01
00:36:15.609      clat percentiles (msec):
00:36:15.609       |  1.00th=[   35],  5.00th=[   45], 10.00th=[   48], 20.00th=[   56],
00:36:15.609       | 30.00th=[   61], 40.00th=[   61], 50.00th=[   67], 60.00th=[   71],
00:36:15.609       | 70.00th=[   73], 80.00th=[   83], 90.00th=[   94], 95.00th=[  107],
00:36:15.609       | 99.00th=[  132], 99.50th=[  144], 99.90th=[  146], 99.95th=[  146],
00:36:15.609       | 99.99th=[  146]
00:36:15.609     bw (  KiB/s): min=  640, max= 1152, per=3.87%, avg=921.16, stdev=157.35, samples=19
00:36:15.609     iops        : min=  160, max=  288, avg=230.26, stdev=39.38, samples=19
00:36:15.609    lat (msec)   : 50=15.81%, 100=77.95%, 250=6.24%
00:36:15.609    cpu          : usr=34.59%, sys=0.52%, ctx=892, majf=0, minf=9
00:36:15.609    IO depths    : 1=2.0%, 2=4.5%, 4=12.7%, 8=69.4%, 16=11.4%, 32=0.0%, >=64=0.0%
00:36:15.609       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.609       complete  : 0=0.0%, 4=91.1%, 8=4.2%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.609       issued rwts: total=2322,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:15.609       latency   : target=0, window=0, percentile=100.00%, depth=16
00:36:15.609  filename2: (groupid=0, jobs=1): err= 0: pid=130494: Fri Dec 13 19:20:45 2024
00:36:15.609    read: IOPS=234, BW=939KiB/s (961kB/s)(9408KiB/10023msec)
00:36:15.609      slat (usec): min=5, max=8042, avg=15.69, stdev=165.74
00:36:15.609      clat (msec): min=21, max=126, avg=68.02, stdev=18.61
00:36:15.609       lat (msec): min=21, max=126, avg=68.03, stdev=18.60
00:36:15.609      clat percentiles (msec):
00:36:15.609       |  1.00th=[   24],  5.00th=[   39], 10.00th=[   47], 20.00th=[   53],
00:36:15.609       | 30.00th=[   61], 40.00th=[   61], 50.00th=[   65], 60.00th=[   72],
00:36:15.609       | 70.00th=[   75], 80.00th=[   85], 90.00th=[   95], 95.00th=[   99],
00:36:15.609       | 99.00th=[  120], 99.50th=[  122], 99.90th=[  127], 99.95th=[  127],
00:36:15.609       | 99.99th=[  127]
00:36:15.609     bw (  KiB/s): min=  640, max= 1104, per=3.94%, avg=938.50, stdev=114.61, samples=20
00:36:15.609     iops        : min=  160, max=  276, avg=234.60, stdev=28.63, samples=20
00:36:15.609    lat (msec)   : 50=18.03%, 100=78.27%, 250=3.70%
00:36:15.609    cpu          : usr=32.63%, sys=0.53%, ctx=867, majf=0, minf=9
00:36:15.609    IO depths    : 1=1.1%, 2=2.9%, 4=11.8%, 8=72.0%, 16=12.2%, 32=0.0%, >=64=0.0%
00:36:15.609       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.609       complete  : 0=0.0%, 4=90.3%, 8=4.8%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.609       issued rwts: total=2352,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:15.609       latency   : target=0, window=0, percentile=100.00%, depth=16
00:36:15.609  filename2: (groupid=0, jobs=1): err= 0: pid=130495: Fri Dec 13 19:20:45 2024
00:36:15.609    read: IOPS=279, BW=1119KiB/s (1146kB/s)(11.0MiB/10037msec)
00:36:15.609      slat (nsec): min=5220, max=58346, avg=12366.65, stdev=7838.43
00:36:15.609      clat (msec): min=8, max=154, avg=57.09, stdev=18.12
00:36:15.609       lat (msec): min=8, max=154, avg=57.10, stdev=18.12
00:36:15.609      clat percentiles (msec):
00:36:15.609       |  1.00th=[   20],  5.00th=[   35], 10.00th=[   39], 20.00th=[   43],
00:36:15.609       | 30.00th=[   47], 40.00th=[   50], 50.00th=[   56], 60.00th=[   61],
00:36:15.609       | 70.00th=[   64], 80.00th=[   70], 90.00th=[   82], 95.00th=[   89],
00:36:15.609       | 99.00th=[  109], 99.50th=[  130], 99.90th=[  155], 99.95th=[  155],
00:36:15.609       | 99.99th=[  155]
00:36:15.609     bw (  KiB/s): min=  640, max= 1344, per=4.69%, avg=1115.35, stdev=173.88, samples=20
00:36:15.609     iops        : min=  160, max=  336, avg=278.80, stdev=43.47, samples=20
00:36:15.609    lat (msec)   : 10=0.57%, 20=0.57%, 50=40.40%, 100=56.04%, 250=2.42%
00:36:15.609    cpu          : usr=42.81%, sys=0.72%, ctx=1255, majf=0, minf=9
00:36:15.609    IO depths    : 1=1.4%, 2=2.9%, 4=10.6%, 8=73.0%, 16=12.1%, 32=0.0%, >=64=0.0%
00:36:15.609       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.609       complete  : 0=0.0%, 4=90.3%, 8=5.1%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.609       issued rwts: total=2807,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:15.609       latency   : target=0, window=0, percentile=100.00%, depth=16
00:36:15.609  
00:36:15.609  Run status group 0 (all jobs):
00:36:15.609     READ: bw=23.2MiB/s (24.4MB/s), 892KiB/s-1183KiB/s (913kB/s-1212kB/s), io=233MiB (245MB), run=10001-10049msec
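The READ line above is fio's aggregate over the per-file jobs printed earlier in this group (892 KiB/s-1183 KiB/s per file, 233 MiB in roughly 10 s). For comparing these aggregates across captured runs, a minimal sketch over a saved console log (the log filename is hypothetical, not produced by this run):

    # print the aggregate bandwidth figure from every "Run status" block in a saved log
    grep -oE 'READ: bw=[^,]+' nvmf_dif_console.log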
00:36:15.609   19:20:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2
00:36:15.609   19:20:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub
00:36:15.609   19:20:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:36:15.609   19:20:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0
00:36:15.609   19:20:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0
00:36:15.609   19:20:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:36:15.609   19:20:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:15.609   19:20:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:15.609   19:20:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:15.609   19:20:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:36:15.609   19:20:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:15.609   19:20:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:15.609   19:20:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:15.609   19:20:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:36:15.609   19:20:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1
00:36:15.609   19:20:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1
00:36:15.609   19:20:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:36:15.609   19:20:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:15.609   19:20:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:15.609   19:20:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:15.609   19:20:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1
00:36:15.609   19:20:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:15.609   19:20:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:15.609   19:20:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:15.609   19:20:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:36:15.609   19:20:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2
00:36:15.609   19:20:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2
00:36:15.609   19:20:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:36:15.609   19:20:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:15.609   19:20:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:15.609   19:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:15.609   19:20:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2
00:36:15.609   19:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:15.609   19:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:15.609   19:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:15.609   19:20:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1
00:36:15.609   19:20:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k
00:36:15.609   19:20:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2
00:36:15.609   19:20:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8
00:36:15.609   19:20:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5
00:36:15.609   19:20:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1
00:36:15.609   19:20:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1
00:36:15.609   19:20:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub
00:36:15.609   19:20:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:36:15.609   19:20:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0
00:36:15.609   19:20:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0
00:36:15.609   19:20:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
00:36:15.609   19:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:15.609   19:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:15.609  bdev_null0
00:36:15.609   19:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:15.609   19:20:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:36:15.609   19:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:15.609   19:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:15.609   19:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:15.609   19:20:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:36:15.609   19:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:15.609   19:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:15.609   19:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:15.609   19:20:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
00:36:15.609   19:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:15.609   19:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:15.609  [2024-12-13 19:20:46.040947] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:36:15.609   19:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:15.609   19:20:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:36:15.609   19:20:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1
00:36:15.610   19:20:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1
00:36:15.610   19:20:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1
00:36:15.610   19:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:15.610   19:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:15.610  bdev_null1
00:36:15.610   19:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:15.610   19:20:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host
00:36:15.610   19:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:15.610   19:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:15.610   19:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:15.610   19:20:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
00:36:15.610   19:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:15.610   19:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:15.610   19:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:15.610   19:20:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:36:15.610   19:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:15.610   19:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:15.610   19:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
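Taken together, the RPCs traced above are the entire per-subsystem setup for this suite: create a DIF-capable null bdev, create the NVMe-oF subsystem, add the namespace, and open a TCP listener. A hedged consolidation, assuming rpc_cmd forwards these calls to scripts/rpc.py against the running target (arguments copied from the trace above):

    # expose bdev_null0 over NVMe/TCP on 10.0.0.3:4420
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420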
00:36:15.610   19:20:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62
00:36:15.610    19:20:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1
00:36:15.610    19:20:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1
00:36:15.610    19:20:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=()
00:36:15.610   19:20:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:36:15.610    19:20:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf
00:36:15.610    19:20:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config
00:36:15.610    19:20:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file
00:36:15.610   19:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:36:15.610    19:20:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat
00:36:15.610    19:20:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:36:15.610    19:20:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:36:15.610  {
00:36:15.610    "params": {
00:36:15.610      "name": "Nvme$subsystem",
00:36:15.610      "trtype": "$TEST_TRANSPORT",
00:36:15.610      "traddr": "$NVMF_FIRST_TARGET_IP",
00:36:15.610      "adrfam": "ipv4",
00:36:15.610      "trsvcid": "$NVMF_PORT",
00:36:15.610      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:36:15.610      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:36:15.610      "hdgst": ${hdgst:-false},
00:36:15.610      "ddgst": ${ddgst:-false}
00:36:15.610    },
00:36:15.610    "method": "bdev_nvme_attach_controller"
00:36:15.610  }
00:36:15.610  EOF
00:36:15.610  )")
00:36:15.610   19:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:36:15.610   19:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:36:15.610   19:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers
00:36:15.610     19:20:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat
00:36:15.610   19:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:36:15.610   19:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift
00:36:15.610   19:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib=
00:36:15.610   19:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:36:15.610    19:20:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 ))
00:36:15.610    19:20:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:36:15.610    19:20:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat
00:36:15.610    19:20:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:36:15.610    19:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:36:15.610    19:20:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:36:15.610  {
00:36:15.610    "params": {
00:36:15.610      "name": "Nvme$subsystem",
00:36:15.610      "trtype": "$TEST_TRANSPORT",
00:36:15.610      "traddr": "$NVMF_FIRST_TARGET_IP",
00:36:15.610      "adrfam": "ipv4",
00:36:15.610      "trsvcid": "$NVMF_PORT",
00:36:15.610      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:36:15.610      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:36:15.610      "hdgst": ${hdgst:-false},
00:36:15.610      "ddgst": ${ddgst:-false}
00:36:15.610    },
00:36:15.610    "method": "bdev_nvme_attach_controller"
00:36:15.610  }
00:36:15.610  EOF
00:36:15.610  )")
00:36:15.610    19:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan
00:36:15.610    19:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:36:15.610    19:20:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ ))
00:36:15.610    19:20:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:36:15.610     19:20:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat
00:36:15.610    19:20:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq .
00:36:15.610     19:20:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=,
00:36:15.610     19:20:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:36:15.610    "params": {
00:36:15.610      "name": "Nvme0",
00:36:15.610      "trtype": "tcp",
00:36:15.610      "traddr": "10.0.0.3",
00:36:15.610      "adrfam": "ipv4",
00:36:15.610      "trsvcid": "4420",
00:36:15.610      "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:36:15.610      "hostnqn": "nqn.2016-06.io.spdk:host0",
00:36:15.610      "hdgst": false,
00:36:15.610      "ddgst": false
00:36:15.610    },
00:36:15.610    "method": "bdev_nvme_attach_controller"
00:36:15.610  },{
00:36:15.610    "params": {
00:36:15.610      "name": "Nvme1",
00:36:15.610      "trtype": "tcp",
00:36:15.610      "traddr": "10.0.0.3",
00:36:15.610      "adrfam": "ipv4",
00:36:15.610      "trsvcid": "4420",
00:36:15.610      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:36:15.610      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:36:15.610      "hdgst": false,
00:36:15.610      "ddgst": false
00:36:15.610    },
00:36:15.610    "method": "bdev_nvme_attach_controller"
00:36:15.610  }'
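The JSON printed above is the bdev_nvme_attach_controller configuration that gen_nvmf_target_json assembles for the two subsystems and hands to the fio bdev plugin over /dev/fd/62. A hedged sketch of replaying the same invocation by hand, assuming the saved file holds the complete config the script wrote to that descriptor (only the attach-controller entries are visible in this excerpt; the /tmp paths are hypothetical):

    # run fio with the SPDK bdev ioengine against the saved config and job file
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/attach.json /tmp/dif_job.fio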
00:36:15.610   19:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=
00:36:15.610   19:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:36:15.610   19:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:36:15.610    19:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:36:15.610    19:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:36:15.610    19:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:36:15.610   19:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=
00:36:15.610   19:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:36:15.610   19:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:36:15.610   19:20:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:36:15.610  filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8
00:36:15.610  ...
00:36:15.610  filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8
00:36:15.610  ...
00:36:15.610  fio-3.35
00:36:15.610  Starting 4 threads
00:36:20.876  
00:36:20.876  filename0: (groupid=0, jobs=1): err= 0: pid=130622: Fri Dec 13 19:20:52 2024
00:36:20.876    read: IOPS=2261, BW=17.7MiB/s (18.5MB/s)(88.4MiB/5002msec)
00:36:20.876      slat (nsec): min=3746, max=82316, avg=15573.72, stdev=5882.95
00:36:20.876      clat (usec): min=2585, max=4972, avg=3461.92, stdev=171.29
00:36:20.876       lat (usec): min=2597, max=4989, avg=3477.50, stdev=171.35
00:36:20.876      clat percentiles (usec):
00:36:20.876       |  1.00th=[ 3195],  5.00th=[ 3261], 10.00th=[ 3294], 20.00th=[ 3326],
00:36:20.876       | 30.00th=[ 3359], 40.00th=[ 3392], 50.00th=[ 3425], 60.00th=[ 3458],
00:36:20.876       | 70.00th=[ 3523], 80.00th=[ 3556], 90.00th=[ 3654], 95.00th=[ 3785],
00:36:20.876       | 99.00th=[ 4047], 99.50th=[ 4146], 99.90th=[ 4621], 99.95th=[ 4883],
00:36:20.876       | 99.99th=[ 4948]
00:36:20.876     bw (  KiB/s): min=17664, max=18544, per=24.99%, avg=18090.00, stdev=308.14, samples=10
00:36:20.876     iops        : min= 2208, max= 2318, avg=2261.20, stdev=38.47, samples=10
00:36:20.876    lat (msec)   : 4=98.75%, 10=1.25%
00:36:20.876    cpu          : usr=93.94%, sys=4.72%, ctx=48, majf=0, minf=9
00:36:20.876    IO depths    : 1=12.3%, 2=25.0%, 4=50.0%, 8=12.7%, 16=0.0%, 32=0.0%, >=64=0.0%
00:36:20.876       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:20.876       complete  : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:20.876       issued rwts: total=11312,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:20.876       latency   : target=0, window=0, percentile=100.00%, depth=8
00:36:20.876  filename0: (groupid=0, jobs=1): err= 0: pid=130623: Fri Dec 13 19:20:52 2024
00:36:20.876    read: IOPS=2265, BW=17.7MiB/s (18.6MB/s)(88.5MiB/5001msec)
00:36:20.876      slat (nsec): min=5957, max=65420, avg=9183.89, stdev=5579.46
00:36:20.876      clat (usec): min=939, max=6030, avg=3481.57, stdev=222.84
00:36:20.876       lat (usec): min=946, max=6055, avg=3490.76, stdev=223.02
00:36:20.876      clat percentiles (usec):
00:36:20.876       |  1.00th=[ 3228],  5.00th=[ 3294], 10.00th=[ 3326], 20.00th=[ 3359],
00:36:20.876       | 30.00th=[ 3392], 40.00th=[ 3425], 50.00th=[ 3458], 60.00th=[ 3490],
00:36:20.876       | 70.00th=[ 3523], 80.00th=[ 3589], 90.00th=[ 3687], 95.00th=[ 3785],
00:36:20.876       | 99.00th=[ 4015], 99.50th=[ 4146], 99.90th=[ 4948], 99.95th=[ 5997],
00:36:20.876       | 99.99th=[ 5997]
00:36:20.876     bw (  KiB/s): min=17664, max=18688, per=25.05%, avg=18133.33, stdev=362.04, samples=9
00:36:20.876     iops        : min= 2208, max= 2336, avg=2266.67, stdev=45.25, samples=9
00:36:20.876    lat (usec)   : 1000=0.09%
00:36:20.876    lat (msec)   : 2=0.26%, 4=98.53%, 10=1.12%
00:36:20.876    cpu          : usr=95.32%, sys=3.56%, ctx=7, majf=0, minf=9
00:36:20.876    IO depths    : 1=11.6%, 2=24.9%, 4=50.1%, 8=13.4%, 16=0.0%, 32=0.0%, >=64=0.0%
00:36:20.876       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:20.876       complete  : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:20.876       issued rwts: total=11328,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:20.876       latency   : target=0, window=0, percentile=100.00%, depth=8
00:36:20.876  filename1: (groupid=0, jobs=1): err= 0: pid=130624: Fri Dec 13 19:20:52 2024
00:36:20.876    read: IOPS=2261, BW=17.7MiB/s (18.5MB/s)(88.4MiB/5002msec)
00:36:20.876      slat (nsec): min=3560, max=80417, avg=16013.80, stdev=6602.43
00:36:20.876      clat (usec): min=2568, max=5127, avg=3458.84, stdev=171.92
00:36:20.876       lat (usec): min=2579, max=5152, avg=3474.85, stdev=172.03
00:36:20.876      clat percentiles (usec):
00:36:20.876       |  1.00th=[ 3195],  5.00th=[ 3261], 10.00th=[ 3294], 20.00th=[ 3326],
00:36:20.876       | 30.00th=[ 3359], 40.00th=[ 3392], 50.00th=[ 3425], 60.00th=[ 3458],
00:36:20.876       | 70.00th=[ 3490], 80.00th=[ 3556], 90.00th=[ 3654], 95.00th=[ 3752],
00:36:20.876       | 99.00th=[ 4015], 99.50th=[ 4146], 99.90th=[ 4948], 99.95th=[ 5080],
00:36:20.876       | 99.99th=[ 5145]
00:36:20.876     bw (  KiB/s): min=17664, max=18560, per=24.98%, avg=18086.40, stdev=307.97, samples=10
00:36:20.876     iops        : min= 2208, max= 2320, avg=2260.80, stdev=38.50, samples=10
00:36:20.876    lat (msec)   : 4=98.94%, 10=1.06%
00:36:20.876    cpu          : usr=94.64%, sys=4.06%, ctx=4, majf=0, minf=9
00:36:20.876    IO depths    : 1=12.4%, 2=25.0%, 4=50.0%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0%
00:36:20.876       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:20.876       complete  : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:20.876       issued rwts: total=11312,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:20.876       latency   : target=0, window=0, percentile=100.00%, depth=8
00:36:20.876  filename1: (groupid=0, jobs=1): err= 0: pid=130625: Fri Dec 13 19:20:52 2024
00:36:20.876    read: IOPS=2261, BW=17.7MiB/s (18.5MB/s)(88.4MiB/5001msec)
00:36:20.876      slat (nsec): min=3311, max=70692, avg=15171.11, stdev=6024.91
00:36:20.876      clat (usec): min=2237, max=5505, avg=3461.64, stdev=185.43
00:36:20.876       lat (usec): min=2248, max=5512, avg=3476.81, stdev=185.63
00:36:20.876      clat percentiles (usec):
00:36:20.876       |  1.00th=[ 3195],  5.00th=[ 3261], 10.00th=[ 3294], 20.00th=[ 3326],
00:36:20.876       | 30.00th=[ 3359], 40.00th=[ 3392], 50.00th=[ 3425], 60.00th=[ 3458],
00:36:20.876       | 70.00th=[ 3523], 80.00th=[ 3556], 90.00th=[ 3654], 95.00th=[ 3785],
00:36:20.876       | 99.00th=[ 4015], 99.50th=[ 4146], 99.90th=[ 5211], 99.95th=[ 5342],
00:36:20.876       | 99.99th=[ 5473]
00:36:20.876     bw (  KiB/s): min=17664, max=18560, per=24.99%, avg=18090.00, stdev=310.99, samples=10
00:36:20.876     iops        : min= 2208, max= 2320, avg=2261.20, stdev=38.83, samples=10
00:36:20.876    lat (msec)   : 4=98.74%, 10=1.26%
00:36:20.876    cpu          : usr=94.98%, sys=3.74%, ctx=6, majf=0, minf=9
00:36:20.876    IO depths    : 1=12.0%, 2=25.0%, 4=50.0%, 8=13.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:36:20.876       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:20.876       complete  : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:20.876       issued rwts: total=11312,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:20.876       latency   : target=0, window=0, percentile=100.00%, depth=8
00:36:20.876  
00:36:20.876  Run status group 0 (all jobs):
00:36:20.876     READ: bw=70.7MiB/s (74.1MB/s), 17.7MiB/s-17.7MiB/s (18.5MB/s-18.6MB/s), io=354MiB (371MB), run=5001-5002msec
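The four threads in the run above come from numjobs=2 across the two files set up for this pass, with the parameters chosen at dif.sh@115 (bs=8k,16k,128k, iodepth=8, runtime=5). A hedged approximation of the job file the harness feeds fio on /dev/fd/61; the bdev names and time_based are assumptions, since gen_fio_conf's output is not shown in this excerpt:

    cat > /tmp/dif_rand.fio <<'EOF'
    [global]
    ioengine=spdk_bdev
    rw=randread
    bs=8k,16k,128k
    iodepth=8
    numjobs=2
    runtime=5
    time_based

    [filename0]
    filename=Nvme0n1

    [filename1]
    filename=Nvme1n1
    EOF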
00:36:20.876   19:20:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1
00:36:20.876   19:20:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub
00:36:20.876   19:20:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:36:20.876   19:20:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0
00:36:20.876   19:20:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0
00:36:20.876   19:20:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:36:20.876   19:20:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:20.876   19:20:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:20.876   19:20:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:20.876   19:20:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:36:20.876   19:20:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:20.876   19:20:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:20.876   19:20:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:20.876   19:20:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:36:20.876   19:20:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1
00:36:20.876   19:20:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1
00:36:20.876   19:20:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:36:20.876   19:20:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:20.876   19:20:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:20.876   19:20:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:20.876   19:20:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1
00:36:20.876   19:20:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:20.876   19:20:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:20.876  ************************************
00:36:20.876  END TEST fio_dif_rand_params
00:36:20.876  ************************************
00:36:20.876   19:20:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:20.876  
00:36:20.876  real	0m23.953s
00:36:20.876  user	2m7.348s
00:36:20.876  sys	0m3.928s
00:36:20.876   19:20:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable
00:36:20.876   19:20:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:20.876   19:20:52 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest
00:36:20.876   19:20:52 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:36:20.876   19:20:52 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable
00:36:20.876   19:20:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:36:20.877  ************************************
00:36:20.877  START TEST fio_dif_digest
00:36:20.877  ************************************
00:36:20.877   19:20:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest
00:36:20.877   19:20:52 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF
00:36:20.877   19:20:52 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files
00:36:20.877   19:20:52 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst
00:36:20.877   19:20:52 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3
00:36:20.877   19:20:52 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k
00:36:20.877   19:20:52 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3
00:36:20.877   19:20:52 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3
00:36:20.877   19:20:52 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10
00:36:20.877   19:20:52 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true
00:36:20.877   19:20:52 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true
00:36:20.877   19:20:52 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0
00:36:20.877   19:20:52 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub
00:36:20.877   19:20:52 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@"
00:36:20.877   19:20:52 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0
00:36:20.877   19:20:52 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0
00:36:20.877   19:20:52 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
00:36:20.877   19:20:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:20.877   19:20:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x
00:36:20.877  bdev_null0
00:36:20.877   19:20:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:20.877   19:20:52 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:36:20.877   19:20:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:20.877   19:20:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x
00:36:20.877   19:20:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:20.877   19:20:52 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:36:20.877   19:20:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:20.877   19:20:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x
00:36:20.877   19:20:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:20.877   19:20:52 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
00:36:20.877   19:20:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:20.877   19:20:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x
00:36:20.877  [2024-12-13 19:20:52.416678] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:36:20.877   19:20:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:20.877   19:20:52 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62
00:36:20.877    19:20:52 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0
00:36:20.877    19:20:52 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0
00:36:20.877    19:20:52 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=()
00:36:20.877    19:20:52 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config
00:36:20.877    19:20:52 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:36:20.877   19:20:52 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:36:20.877    19:20:52 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf
00:36:20.877    19:20:52 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:36:20.877  {
00:36:20.877    "params": {
00:36:20.877      "name": "Nvme$subsystem",
00:36:20.877      "trtype": "$TEST_TRANSPORT",
00:36:20.877      "traddr": "$NVMF_FIRST_TARGET_IP",
00:36:20.877      "adrfam": "ipv4",
00:36:20.877      "trsvcid": "$NVMF_PORT",
00:36:20.877      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:36:20.877      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:36:20.877      "hdgst": ${hdgst:-false},
00:36:20.877      "ddgst": ${ddgst:-false}
00:36:20.877    },
00:36:20.877    "method": "bdev_nvme_attach_controller"
00:36:20.877  }
00:36:20.877  EOF
00:36:20.877  )")
00:36:20.877   19:20:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:36:20.877    19:20:52 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file
00:36:20.877    19:20:52 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat
00:36:20.877   19:20:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:36:20.877   19:20:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:36:20.877   19:20:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers
00:36:20.877     19:20:52 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat
00:36:20.877   19:20:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:36:20.877   19:20:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift
00:36:20.877   19:20:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib=
00:36:20.877   19:20:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:36:20.877    19:20:52 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 ))
00:36:20.877    19:20:52 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files ))
00:36:20.877    19:20:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:36:20.877    19:20:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan
00:36:20.877    19:20:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:36:20.877    19:20:52 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq .
00:36:20.877     19:20:52 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=,
00:36:20.877     19:20:52 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:36:20.877    "params": {
00:36:20.877      "name": "Nvme0",
00:36:20.877      "trtype": "tcp",
00:36:20.877      "traddr": "10.0.0.3",
00:36:20.877      "adrfam": "ipv4",
00:36:20.877      "trsvcid": "4420",
00:36:20.877      "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:36:20.877      "hostnqn": "nqn.2016-06.io.spdk:host0",
00:36:20.877      "hdgst": true,
00:36:20.877      "ddgst": true
00:36:20.877    },
00:36:20.877    "method": "bdev_nvme_attach_controller"
00:36:20.877  }'
00:36:20.877   19:20:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib=
00:36:20.877   19:20:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:36:20.877   19:20:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:36:20.877    19:20:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:36:20.877    19:20:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:36:20.877    19:20:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:36:20.877   19:20:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib=
00:36:20.877   19:20:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:36:20.877   19:20:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:36:20.877   19:20:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:36:20.877  filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3
00:36:20.877  ...
00:36:20.877  fio-3.35
00:36:20.877  Starting 3 threads
00:36:33.082  
00:36:33.082  filename0: (groupid=0, jobs=1): err= 0: pid=130726: Fri Dec 13 19:21:03 2024
00:36:33.082    read: IOPS=272, BW=34.1MiB/s (35.7MB/s)(341MiB/10007msec)
00:36:33.082      slat (nsec): min=5846, max=86374, avg=15921.81, stdev=6333.04
00:36:33.082      clat (usec): min=7530, max=92077, avg=10983.64, stdev=3949.28
00:36:33.082       lat (usec): min=7544, max=92089, avg=10999.56, stdev=3949.26
00:36:33.082      clat percentiles (usec):
00:36:33.082       |  1.00th=[ 8848],  5.00th=[ 9241], 10.00th=[ 9634], 20.00th=[ 9896],
00:36:33.082       | 30.00th=[10290], 40.00th=[10421], 50.00th=[10683], 60.00th=[10945],
00:36:33.082       | 70.00th=[11207], 80.00th=[11338], 90.00th=[11731], 95.00th=[12125],
00:36:33.082       | 99.00th=[12911], 99.50th=[50594], 99.90th=[52167], 99.95th=[91751],
00:36:33.082       | 99.99th=[91751]
00:36:33.082     bw (  KiB/s): min=29952, max=37632, per=37.77%, avg=34839.21, stdev=2096.16, samples=19
00:36:33.082     iops        : min=  234, max=  294, avg=272.16, stdev=16.37, samples=19
00:36:33.082    lat (msec)   : 10=21.66%, 20=77.64%, 100=0.70%
00:36:33.082    cpu          : usr=91.20%, sys=6.53%, ctx=14, majf=0, minf=9
00:36:33.082    IO depths    : 1=1.1%, 2=98.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:36:33.082       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:33.082       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:33.082       issued rwts: total=2728,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:33.082       latency   : target=0, window=0, percentile=100.00%, depth=3
00:36:33.082  filename0: (groupid=0, jobs=1): err= 0: pid=130727: Fri Dec 13 19:21:03 2024
00:36:33.082    read: IOPS=240, BW=30.0MiB/s (31.5MB/s)(300MiB/10005msec)
00:36:33.082      slat (nsec): min=6275, max=59702, avg=17438.70, stdev=6637.66
00:36:33.082      clat (usec): min=6623, max=16146, avg=12468.94, stdev=1463.60
00:36:33.082       lat (usec): min=6641, max=16153, avg=12486.38, stdev=1463.47
00:36:33.082      clat percentiles (usec):
00:36:33.082       |  1.00th=[ 7308],  5.00th=[10159], 10.00th=[11076], 20.00th=[11600],
00:36:33.082       | 30.00th=[11994], 40.00th=[12387], 50.00th=[12649], 60.00th=[12911],
00:36:33.082       | 70.00th=[13173], 80.00th=[13566], 90.00th=[14091], 95.00th=[14484],
00:36:33.082       | 99.00th=[15139], 99.50th=[15533], 99.90th=[15926], 99.95th=[15926],
00:36:33.082       | 99.99th=[16188]
00:36:33.082     bw (  KiB/s): min=28672, max=33280, per=33.37%, avg=30784.21, stdev=1407.01, samples=19
00:36:33.082     iops        : min=  224, max=  260, avg=240.47, stdev=11.02, samples=19
00:36:33.082    lat (msec)   : 10=4.95%, 20=95.05%
00:36:33.082    cpu          : usr=94.43%, sys=4.16%, ctx=659, majf=0, minf=9
00:36:33.082    IO depths    : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:36:33.082       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:33.082       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:33.082       issued rwts: total=2403,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:33.082       latency   : target=0, window=0, percentile=100.00%, depth=3
00:36:33.082  filename0: (groupid=0, jobs=1): err= 0: pid=130728: Fri Dec 13 19:21:03 2024
00:36:33.082    read: IOPS=209, BW=26.2MiB/s (27.5MB/s)(264MiB/10045msec)
00:36:33.082      slat (nsec): min=6561, max=80059, avg=15653.05, stdev=6614.05
00:36:33.082      clat (usec): min=8020, max=49156, avg=14257.58, stdev=1688.32
00:36:33.082       lat (usec): min=8038, max=49167, avg=14273.23, stdev=1688.66
00:36:33.082      clat percentiles (usec):
00:36:33.082       |  1.00th=[ 8717],  5.00th=[12518], 10.00th=[13173], 20.00th=[13698],
00:36:33.082       | 30.00th=[13960], 40.00th=[14091], 50.00th=[14353], 60.00th=[14615],
00:36:33.082       | 70.00th=[14877], 80.00th=[15139], 90.00th=[15533], 95.00th=[15926],
00:36:33.082       | 99.00th=[16581], 99.50th=[16909], 99.90th=[17433], 99.95th=[46924],
00:36:33.082       | 99.99th=[49021]
00:36:33.082     bw (  KiB/s): min=25344, max=28928, per=29.22%, avg=26956.80, stdev=1067.21, samples=20
00:36:33.082     iops        : min=  198, max=  226, avg=210.60, stdev= 8.34, samples=20
00:36:33.082    lat (msec)   : 10=3.51%, 20=96.39%, 50=0.09%
00:36:33.082    cpu          : usr=94.13%, sys=4.49%, ctx=13, majf=0, minf=0
00:36:33.082    IO depths    : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:36:33.082       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:33.082       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:33.082       issued rwts: total=2108,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:33.082       latency   : target=0, window=0, percentile=100.00%, depth=3
00:36:33.082  
00:36:33.082  Run status group 0 (all jobs):
00:36:33.082     READ: bw=90.1MiB/s (94.5MB/s), 26.2MiB/s-34.1MiB/s (27.5MB/s-35.7MB/s), io=905MiB (949MB), run=10005-10045msec
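As a quick consistency check on the digest run: the three jobs together read io=905 MiB in roughly 10.0 s (run=10005-10045 msec), and 905 MiB / 10.04 s ≈ 90.1 MiB/s, which matches the aggregate READ bandwidth reported above with hdgst and ddgst enabled on the attached controller.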
00:36:33.082   19:21:03 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0
00:36:33.082   19:21:03 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub
00:36:33.082   19:21:03 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@"
00:36:33.082   19:21:03 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0
00:36:33.082   19:21:03 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0
00:36:33.082   19:21:03 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:36:33.082   19:21:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:33.082   19:21:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x
00:36:33.082   19:21:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:33.082   19:21:03 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:36:33.082   19:21:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:33.082   19:21:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x
00:36:33.082  ************************************
00:36:33.082  END TEST fio_dif_digest
00:36:33.082  ************************************
00:36:33.082   19:21:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:33.082  
00:36:33.082  real	0m11.155s
00:36:33.082  user	0m28.772s
00:36:33.082  sys	0m1.836s
00:36:33.082   19:21:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable
00:36:33.083   19:21:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x
00:36:33.083   19:21:03 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT
00:36:33.083   19:21:03 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini
00:36:33.083   19:21:03 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup
00:36:33.083   19:21:03 nvmf_dif -- nvmf/common.sh@121 -- # sync
00:36:33.083   19:21:03 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:36:33.083   19:21:03 nvmf_dif -- nvmf/common.sh@124 -- # set +e
00:36:33.083   19:21:03 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20}
00:36:33.083   19:21:03 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:36:33.083  rmmod nvme_tcp
00:36:33.083  rmmod nvme_fabrics
00:36:33.083  rmmod nvme_keyring
00:36:33.083   19:21:03 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:36:33.083   19:21:03 nvmf_dif -- nvmf/common.sh@128 -- # set -e
00:36:33.083   19:21:03 nvmf_dif -- nvmf/common.sh@129 -- # return 0
00:36:33.083   19:21:03 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 129991 ']'
00:36:33.083   19:21:03 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 129991
00:36:33.083   19:21:03 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 129991 ']'
00:36:33.083   19:21:03 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 129991
00:36:33.083    19:21:03 nvmf_dif -- common/autotest_common.sh@959 -- # uname
00:36:33.083   19:21:03 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:36:33.083    19:21:03 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 129991
00:36:33.083  killing process with pid 129991
00:36:33.083   19:21:03 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:36:33.083   19:21:03 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:36:33.083   19:21:03 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 129991'
00:36:33.083   19:21:03 nvmf_dif -- common/autotest_common.sh@973 -- # kill 129991
00:36:33.083   19:21:03 nvmf_dif -- common/autotest_common.sh@978 -- # wait 129991
00:36:33.083   19:21:03 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']'
00:36:33.083   19:21:03 nvmf_dif -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:36:33.083  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:36:33.083  Waiting for block devices as requested
00:36:33.083  0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:36:33.083  0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
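The two lines above show setup.sh reset handing the test VM's NVMe controllers back from uio_pci_generic to the kernel nvme driver before the next suite starts. A small sketch for confirming the binding after such a reset, using the standard sysfs driver symlink (the BDF values are the ones printed above):

    # each readlink ends in the currently bound driver, e.g. .../drivers/nvme
    readlink /sys/bus/pci/devices/0000:00:10.0/driver
    readlink /sys/bus/pci/devices/0000:00:11.0/driver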
00:36:33.083   19:21:04 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:36:33.083   19:21:04 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:36:33.083   19:21:04 nvmf_dif -- nvmf/common.sh@297 -- # iptr
00:36:33.083   19:21:04 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save
00:36:33.083   19:21:04 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore
00:36:33.083   19:21:04 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:36:33.083   19:21:04 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:36:33.083   19:21:04 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:36:33.083   19:21:04 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:36:33.083   19:21:04 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:36:33.083   19:21:04 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:36:33.083   19:21:04 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:36:33.083   19:21:04 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:36:33.083   19:21:04 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:36:33.083   19:21:04 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:36:33.083   19:21:04 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:36:33.083   19:21:04 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:36:33.083   19:21:04 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:36:33.083   19:21:04 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:36:33.083   19:21:04 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:36:33.083   19:21:04 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:36:33.083   19:21:04 nvmf_dif -- nvmf/common.sh@246 -- # remove_spdk_ns
00:36:33.083   19:21:04 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:36:33.083   19:21:04 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null'
00:36:33.083    19:21:04 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:36:33.083   19:21:04 nvmf_dif -- nvmf/common.sh@300 -- # return 0
00:36:33.083  
00:36:33.083  real	1m0.528s
00:36:33.083  user	3m52.792s
00:36:33.083  sys	0m14.048s
00:36:33.083  ************************************
00:36:33.083  END TEST nvmf_dif
00:36:33.083  ************************************
00:36:33.083   19:21:04 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable
00:36:33.083   19:21:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:36:33.342   19:21:04  -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh
00:36:33.342   19:21:04  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:36:33.342   19:21:04  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:36:33.342   19:21:04  -- common/autotest_common.sh@10 -- # set +x
00:36:33.342  ************************************
00:36:33.342  START TEST nvmf_abort_qd_sizes
00:36:33.342  ************************************
00:36:33.342   19:21:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh
00:36:33.342  * Looking for test storage...
00:36:33.342  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:36:33.342    19:21:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:36:33.342     19:21:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:36:33.342     19:21:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version
00:36:33.342    19:21:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:36:33.342    19:21:05 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:36:33.342    19:21:05 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l
00:36:33.342    19:21:05 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l
00:36:33.342    19:21:05 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-:
00:36:33.342    19:21:05 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1
00:36:33.342    19:21:05 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-:
00:36:33.342    19:21:05 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2
00:36:33.342    19:21:05 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<'
00:36:33.342    19:21:05 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2
00:36:33.342    19:21:05 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1
00:36:33.342    19:21:05 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:36:33.342    19:21:05 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in
00:36:33.342    19:21:05 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1
00:36:33.342    19:21:05 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 ))
00:36:33.342    19:21:05 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:36:33.342     19:21:05 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1
00:36:33.342     19:21:05 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1
00:36:33.342     19:21:05 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:36:33.342     19:21:05 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1
00:36:33.342    19:21:05 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1
00:36:33.342     19:21:05 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2
00:36:33.342     19:21:05 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2
00:36:33.342     19:21:05 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:36:33.342     19:21:05 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2
00:36:33.342    19:21:05 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2
00:36:33.342    19:21:05 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:36:33.342    19:21:05 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:36:33.342    19:21:05 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0
00:36:33.342    19:21:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:36:33.342    19:21:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:36:33.342  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:36:33.342  		--rc genhtml_branch_coverage=1
00:36:33.342  		--rc genhtml_function_coverage=1
00:36:33.342  		--rc genhtml_legend=1
00:36:33.342  		--rc geninfo_all_blocks=1
00:36:33.342  		--rc geninfo_unexecuted_blocks=1
00:36:33.342  		
00:36:33.342  		'
00:36:33.342    19:21:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:36:33.342  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:36:33.342  		--rc genhtml_branch_coverage=1
00:36:33.342  		--rc genhtml_function_coverage=1
00:36:33.342  		--rc genhtml_legend=1
00:36:33.342  		--rc geninfo_all_blocks=1
00:36:33.342  		--rc geninfo_unexecuted_blocks=1
00:36:33.342  		
00:36:33.342  		'
00:36:33.342    19:21:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:36:33.342  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:36:33.342  		--rc genhtml_branch_coverage=1
00:36:33.342  		--rc genhtml_function_coverage=1
00:36:33.342  		--rc genhtml_legend=1
00:36:33.342  		--rc geninfo_all_blocks=1
00:36:33.342  		--rc geninfo_unexecuted_blocks=1
00:36:33.342  		
00:36:33.342  		'
00:36:33.342    19:21:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:36:33.342  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:36:33.342  		--rc genhtml_branch_coverage=1
00:36:33.342  		--rc genhtml_function_coverage=1
00:36:33.342  		--rc genhtml_legend=1
00:36:33.342  		--rc geninfo_all_blocks=1
00:36:33.342  		--rc geninfo_unexecuted_blocks=1
00:36:33.342  		
00:36:33.342  		'
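[editor's note] The trace above walks scripts/common.sh's cmp_versions helper deciding that lcov 1.15 is older than 2, which selects the pre-2.0 --rc lcov_* option spellings for LCOV_OPTS. A minimal re-implementation of that element-by-element comparison is sketched below; the function name and structure are illustrative only, not the actual SPDK helper.

# Illustrative sketch: compare two dotted versions field by field, echoing
# "lt", "eq" or "gt".  Mirrors the IFS=.-: splitting seen in the cmp_versions
# trace above, but is not the actual scripts/common.sh code.
ver_cmp() {
    local IFS=.-:
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x > y )) && { echo gt; return; }
        (( x < y )) && { echo lt; return; }
    done
    echo eq
}
# Example: ver_cmp 1.15 2 prints "lt", so the older lcov option names are used.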
00:36:33.342   19:21:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:36:33.342     19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s
00:36:33.342    19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:36:33.342    19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:36:33.342    19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:36:33.343    19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:36:33.343    19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:36:33.343    19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:36:33.343    19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:36:33.343    19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:36:33.343    19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:36:33.343     19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:36:33.602    19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:36:33.602    19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:36:33.602    19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:36:33.602    19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:36:33.602    19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:36:33.602    19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:36:33.602    19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:36:33.602     19:21:05 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob
00:36:33.602     19:21:05 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:36:33.602     19:21:05 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:36:33.602     19:21:05 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:36:33.602      19:21:05 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:33.602      19:21:05 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:33.602      19:21:05 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:33.602      19:21:05 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH
00:36:33.602      19:21:05 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:33.602    19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0
00:36:33.602    19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:36:33.602    19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:36:33.602    19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:36:33.602    19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:36:33.602    19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:36:33.602    19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:36:33.602  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:36:33.602    19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:36:33.602    19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:36:33.602    19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0
00:36:33.602   19:21:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit
00:36:33.602   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:36:33.602   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:36:33.602   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs
00:36:33.602   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no
00:36:33.602   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns
00:36:33.602   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:36:33.602   19:21:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null'
00:36:33.602    19:21:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:36:33.602   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:36:33.602   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:36:33.602   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:36:33.602   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:36:33.602   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:36:33.602   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@460 -- # nvmf_veth_init
00:36:33.602   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:36:33.602   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:36:33.602   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:36:33.602   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:36:33.602   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:36:33.602   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:36:33.602   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:36:33.602   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:36:33.602   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:36:33.602   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:36:33.602   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:36:33.602   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:36:33.602   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:36:33.602   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:36:33.602   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:36:33.602   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:36:33.602   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:36:33.602  Cannot find device "nvmf_init_br"
00:36:33.602   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true
00:36:33.602   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:36:33.602  Cannot find device "nvmf_init_br2"
00:36:33.602   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true
00:36:33.602   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:36:33.602  Cannot find device "nvmf_tgt_br"
00:36:33.602   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true
00:36:33.602   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:36:33.602  Cannot find device "nvmf_tgt_br2"
00:36:33.602   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true
00:36:33.602   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:36:33.602  Cannot find device "nvmf_init_br"
00:36:33.602   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true
00:36:33.602   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:36:33.602  Cannot find device "nvmf_init_br2"
00:36:33.602   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true
00:36:33.602   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:36:33.602  Cannot find device "nvmf_tgt_br"
00:36:33.602   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true
00:36:33.602   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:36:33.602  Cannot find device "nvmf_tgt_br2"
00:36:33.602   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true
00:36:33.602   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:36:33.602  Cannot find device "nvmf_br"
00:36:33.602   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true
00:36:33.602   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:36:33.603  Cannot find device "nvmf_init_if"
00:36:33.603   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true
00:36:33.603   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:36:33.603  Cannot find device "nvmf_init_if2"
00:36:33.603   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true
00:36:33.603   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:36:33.603  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:36:33.603   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true
00:36:33.603   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:36:33.603  Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:36:33.603   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true
00:36:33.603   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:36:33.603   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:36:33.603   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:36:33.603   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:36:33.603   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:36:33.603   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:36:33.603   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:36:33.603   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:36:33.603   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:36:33.603   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:36:33.603   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:36:33.603   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:36:33.603   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:36:33.865   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:36:33.865   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:36:33.865   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:36:33.865   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:36:33.865   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:36:33.865   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:36:33.865   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:36:33.865   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:36:33.865   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:36:33.865   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:36:33.865   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:36:33.865   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:36:33.865   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
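[editor's note] nvmf_veth_init builds a small virtual topology: the network namespace nvmf_tgt_ns_spdk holds the target ends of four veth pairs, while the host-side peers are enslaved to the nvmf_br bridge. Condensed into a standalone sequence (interface names and 10.0.0.0/24 addressing copied from the trace; an illustrative sketch, not the nvmf/common.sh implementation itself), the topology looks like this:

# Sketch of the topology built above: one namespace, four veth pairs, one bridge.
ip netns add nvmf_tgt_ns_spdk

ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# Target-side interfaces live inside the namespace; the initiator side stays in the host.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the four peer ends so the initiator and target addresses can reach each other.
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done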
00:36:33.865   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:36:33.865   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:36:33.865   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:36:33.865   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:36:33.865   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:36:33.865   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
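[editor's note] The ipts wrapper seen above tags every rule it inserts with an 'SPDK_NVMF:' comment carrying the original arguments, which makes the rules easy to find and remove at teardown. A hedged sketch of that pattern (the real helper lives in nvmf/common.sh; this only approximates what the trace shows):

# Sketch of an iptables wrapper that tags each rule with its own arguments so a
# cleanup pass can locate and delete exactly what the test added.
ipts() {
    iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

ipts -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Later the tagged rules can be listed for removal, e.g.: iptables-save | grep SPDK_NVMF: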
00:36:33.865   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:36:33.865  PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:36:33.865  64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms
00:36:33.865  
00:36:33.865  --- 10.0.0.3 ping statistics ---
00:36:33.865  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:36:33.865  rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms
00:36:33.865   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:36:33.865  PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:36:33.865  64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.032 ms
00:36:33.865  
00:36:33.865  --- 10.0.0.4 ping statistics ---
00:36:33.865  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:36:33.865  rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms
00:36:33.865   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:36:33.865  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:36:33.865  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms
00:36:33.865  
00:36:33.865  --- 10.0.0.1 ping statistics ---
00:36:33.865  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:36:33.865  rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms
00:36:33.865   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:36:33.865  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:36:33.865  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms
00:36:33.865  
00:36:33.865  --- 10.0.0.2 ping statistics ---
00:36:33.865  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:36:33.865  rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms
00:36:33.865   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:36:33.865   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@461 -- # return 0
00:36:33.865   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']'
00:36:33.865   19:21:05 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:36:34.462  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:36:34.462  0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:36:34.721  0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:36:34.721   19:21:06 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:36:34.721   19:21:06 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:36:34.721   19:21:06 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:36:34.721   19:21:06 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:36:34.721   19:21:06 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:36:34.721   19:21:06 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:36:34.721   19:21:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf
00:36:34.721   19:21:06 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:36:34.721   19:21:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable
00:36:34.721   19:21:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x
00:36:34.721   19:21:06 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=131376
00:36:34.721   19:21:06 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf
00:36:34.721   19:21:06 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 131376
00:36:34.721   19:21:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 131376 ']'
00:36:34.721   19:21:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:36:34.721   19:21:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100
00:36:34.721   19:21:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:36:34.721  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:36:34.721   19:21:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable
00:36:34.721   19:21:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x
00:36:34.721  [2024-12-13 19:21:06.496075] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:36:34.721  [2024-12-13 19:21:06.496173] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:36:34.980  [2024-12-13 19:21:06.654177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:36:34.980  [2024-12-13 19:21:06.707926] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:36:34.980  [2024-12-13 19:21:06.708017] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:36:34.980  [2024-12-13 19:21:06.708034] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:36:34.980  [2024-12-13 19:21:06.708046] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:36:34.980  [2024-12-13 19:21:06.708056] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:36:34.980  [2024-12-13 19:21:06.709760] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:36:34.980  [2024-12-13 19:21:06.709894] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:36:34.980  [2024-12-13 19:21:06.710003] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:36:34.980  [2024-12-13 19:21:06.710007] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:36:35.914   19:21:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:36:35.914   19:21:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0
00:36:35.914   19:21:07 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:36:35.914   19:21:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable
00:36:35.914   19:21:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x
00:36:35.914   19:21:07 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
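[editor's note] nvmfappstart launches nvmf_tgt inside the target namespace and then blocks in waitforlisten until the application answers on its RPC socket (/var/tmp/spdk.sock, up to 100 retries, as the trace shows). A rough equivalent of that wait loop, written as an illustrative sketch rather than the actual autotest_common.sh code:

# Illustrative sketch: poll until the SPDK RPC Unix socket exists and the pid is
# still alive, giving up after max_retries.  The real waitforlisten does more
# (it also issues an RPC to confirm readiness); this only captures the idea.
wait_for_rpc() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=${3:-100} i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for (( i = 0; i < max_retries; i++ )); do
        kill -0 "$pid" 2>/dev/null || return 1      # target died
        [[ -S $rpc_addr ]] && return 0              # socket is up
        sleep 0.5
    done
    return 1
}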
00:36:35.914   19:21:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT
00:36:35.914   19:21:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes
00:36:35.914    19:21:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace
00:36:35.914    19:21:07 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs
00:36:35.914    19:21:07 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes
00:36:35.914    19:21:07 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]]
00:36:35.914    19:21:07 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02))
00:36:35.914     19:21:07 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02
00:36:35.914     19:21:07 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf=
00:36:35.914      19:21:07 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02
00:36:35.914      19:21:07 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class
00:36:35.914      19:21:07 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass
00:36:35.914      19:21:07 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif
00:36:35.914       19:21:07 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1
00:36:35.914      19:21:07 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # class=01
00:36:35.914       19:21:07 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8
00:36:35.914      19:21:07 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08
00:36:35.914       19:21:07 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2
00:36:35.914      19:21:07 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02
00:36:35.914      19:21:07 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci
00:36:35.914      19:21:07 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']'
00:36:35.914      19:21:07 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D
00:36:35.914      19:21:07 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02
00:36:35.914      19:21:07 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}'
00:36:35.914      19:21:07 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"'
00:36:35.914     19:21:07 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@")
00:36:35.914     19:21:07 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0
00:36:35.914     19:21:07 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i
00:36:35.914     19:21:07 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[    =~  0000:00:10.0  ]]
00:36:35.914     19:21:07 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]]
00:36:35.915     19:21:07 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0
00:36:35.915     19:21:07 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0
00:36:35.915     19:21:07 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@")
00:36:35.915     19:21:07 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0
00:36:35.915     19:21:07 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i
00:36:35.915     19:21:07 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[    =~  0000:00:11.0  ]]
00:36:35.915     19:21:07 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]]
00:36:35.915     19:21:07 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0
00:36:35.915     19:21:07 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0
00:36:35.915    19:21:07 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}"
00:36:35.915    19:21:07 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]]
00:36:35.915     19:21:07 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s
00:36:35.915    19:21:07 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]]
00:36:35.915    19:21:07 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf")
00:36:35.915    19:21:07 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}"
00:36:35.915    19:21:07 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]]
00:36:35.915     19:21:07 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s
00:36:35.915    19:21:07 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]]
00:36:35.915    19:21:07 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf")
00:36:35.915    19:21:07 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 ))
00:36:35.915    19:21:07 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0
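[editor's note] nvme_in_userspace enumerates NVMe controllers by PCI class code 01 (mass storage), subclass 08 (NVM), programming interface 02 (NVMe). The lspci pipeline visible in the trace can be distilled into the short sketch below; it is illustrative only, since the real helper also filters each BDF through pci_can_use() and has a FreeBSD path.

# Sketch: list NVMe controller BDFs via PCI class code 0108 / progif 02,
# mirroring the lspci | grep | awk | tr pipeline in the trace above.
list_nvme_bdfs() {
    lspci -mm -n -D \
        | grep -i -- -p02 \
        | awk -v cc='"0108"' -F ' ' '{if (cc ~ $2) print $1}' \
        | tr -d '"'
}
# Example output on this runner: 0000:00:10.0 and 0000:00:11.0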
00:36:35.915   19:21:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 ))
00:36:35.915   19:21:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0
00:36:35.915   19:21:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target
00:36:35.915   19:21:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:36:35.915   19:21:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable
00:36:35.915   19:21:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x
00:36:35.915  ************************************
00:36:35.915  START TEST spdk_target_abort
00:36:35.915  ************************************
00:36:35.915   19:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target
00:36:35.915   19:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target
00:36:35.915   19:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target
00:36:35.915   19:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:35.915   19:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:36:35.915  spdk_targetn1
00:36:35.915   19:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:35.915   19:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:36:35.915   19:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:35.915   19:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:36:35.915  [2024-12-13 19:21:07.621545] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:36:35.915   19:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:35.915   19:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
00:36:35.915   19:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:35.915   19:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:36:35.915   19:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:35.915   19:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
00:36:35.915   19:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:35.915   19:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:36:35.915   19:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:35.915   19:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420
00:36:35.915   19:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:35.915   19:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:36:35.915  [2024-12-13 19:21:07.661430] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
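[editor's note] The spdk_target_abort test drives the freshly started nvmf_tgt entirely over JSON-RPC: attach the local PCIe NVMe device as bdev "spdk_target", create a TCP transport, create subsystem nqn.2016-06.io.spdk:testnqn, add the namespace, and expose a listener on 10.0.0.3:4420. Outside the harness the same sequence could be issued with scripts/rpc.py roughly as below; this is a hedged sketch, with method names and arguments copied from the rpc_cmd calls in this run and the default /var/tmp/spdk.sock socket assumed.

# Hedged sketch of the RPC sequence traced above, expressed as plain rpc.py calls.
./scripts/rpc.py bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420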
00:36:35.915   19:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:35.915   19:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn
00:36:35.915   19:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp
00:36:35.915   19:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4
00:36:35.915   19:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3
00:36:35.915   19:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420
00:36:35.915   19:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn
00:36:35.915   19:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd
00:36:35.915   19:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r
00:36:35.915   19:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64)
00:36:35.915   19:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:36:35.915   19:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp
00:36:35.915   19:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:36:35.915   19:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4'
00:36:35.915   19:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:36:35.915   19:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3'
00:36:35.915   19:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:36:35.915   19:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
00:36:35.915   19:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:36:35.915   19:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:36:35.915   19:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:36:35.915   19:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:36:39.198  Initializing NVMe Controllers
00:36:39.198  Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn
00:36:39.198  Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:36:39.198  Initialization complete. Launching workers.
00:36:39.198  NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10830, failed: 0
00:36:39.198  CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1177, failed to submit 9653
00:36:39.198  	 success 725, unsuccessful 452, failed 0
00:36:39.198   19:21:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:36:39.198   19:21:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:36:43.391  Initializing NVMe Controllers
00:36:43.391  Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn
00:36:43.391  Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:36:43.391  Initialization complete. Launching workers.
00:36:43.391  NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 5962, failed: 0
00:36:43.391  CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1238, failed to submit 4724
00:36:43.391  	 success 266, unsuccessful 972, failed 0
00:36:43.391   19:21:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:36:43.391   19:21:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:36:45.919  Initializing NVMe Controllers
00:36:45.919  Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn
00:36:45.919  Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:36:45.919  Initialization complete. Launching workers.
00:36:45.919  NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30629, failed: 0
00:36:45.919  CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2597, failed to submit 28032
00:36:45.919  	 success 494, unsuccessful 2103, failed 0
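[editor's note] rabort simply re-runs the abort example once per queue depth in qds=(4 24 64), passing the transport string assembled field by field earlier in the trace. Pulled out of the harness, the loop amounts to the following sketch (paths and arguments as seen in this run; illustrative only):

# Sketch of the rabort loop traced above: run the abort example once per queue
# depth against the same TCP subsystem.
target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
for qd in 4 24 64; do
    ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
done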
00:36:45.919   19:21:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn
00:36:45.919   19:21:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:45.919   19:21:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:36:45.919   19:21:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:45.919   19:21:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target
00:36:45.919   19:21:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:45.919   19:21:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:36:46.487   19:21:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:46.487   19:21:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 131376
00:36:46.487   19:21:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 131376 ']'
00:36:46.487   19:21:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 131376
00:36:46.487    19:21:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname
00:36:46.487   19:21:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:36:46.487    19:21:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 131376
00:36:46.487  killing process with pid 131376
00:36:46.487   19:21:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:36:46.487   19:21:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:36:46.487   19:21:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 131376'
00:36:46.487   19:21:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 131376
00:36:46.487   19:21:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 131376
00:36:46.487  
00:36:46.487  real	0m10.722s
00:36:46.487  user	0m43.768s
00:36:46.487  sys	0m1.657s
00:36:46.487   19:21:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable
00:36:46.487  ************************************
00:36:46.487  END TEST spdk_target_abort
00:36:46.487   19:21:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:36:46.487  ************************************
00:36:46.746   19:21:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target
00:36:46.746   19:21:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:36:46.746   19:21:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable
00:36:46.746   19:21:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x
00:36:46.746  ************************************
00:36:46.746  START TEST kernel_target_abort
00:36:46.746  ************************************
00:36:46.746   19:21:18 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target
00:36:46.746    19:21:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip
00:36:46.746    19:21:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip
00:36:46.746    19:21:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=()
00:36:46.746    19:21:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates
00:36:46.746    19:21:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:36:46.746    19:21:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:36:46.746    19:21:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:36:46.746    19:21:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:36:46.746    19:21:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:36:46.746    19:21:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:36:46.746    19:21:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:36:46.746   19:21:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1
00:36:46.746   19:21:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1
00:36:46.746   19:21:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet
00:36:46.746   19:21:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:36:46.746   19:21:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:36:46.746   19:21:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1
00:36:46.746   19:21:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme
00:36:46.746   19:21:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]]
00:36:46.746   19:21:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet
00:36:46.746   19:21:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]]
00:36:46.746   19:21:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:36:47.005  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:36:47.005  Waiting for block devices as requested
00:36:47.005  0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:36:47.264  0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:36:47.264   19:21:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme*
00:36:47.264   19:21:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]]
00:36:47.264   19:21:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1
00:36:47.264   19:21:18 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:36:47.264   19:21:18 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:36:47.264   19:21:18 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:36:47.264   19:21:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1
00:36:47.264   19:21:18 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt
00:36:47.264   19:21:18 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1
00:36:47.264  No valid GPT data, bailing
00:36:47.264    19:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:36:47.264   19:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt=
00:36:47.264   19:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1
00:36:47.264   19:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1
00:36:47.264   19:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme*
00:36:47.264   19:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]]
00:36:47.264   19:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2
00:36:47.264   19:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n2
00:36:47.264   19:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]]
00:36:47.264   19:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:36:47.264   19:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n2
00:36:47.264   19:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt
00:36:47.264   19:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2
00:36:47.523  No valid GPT data, bailing
00:36:47.523    19:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2
00:36:47.523   19:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt=
00:36:47.523   19:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1
00:36:47.523   19:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2
00:36:47.523   19:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme*
00:36:47.523   19:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]]
00:36:47.523   19:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3
00:36:47.523   19:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n3
00:36:47.523   19:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]]
00:36:47.523   19:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:36:47.523   19:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n3
00:36:47.523   19:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt
00:36:47.523   19:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3
00:36:47.523  No valid GPT data, bailing
00:36:47.523    19:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3
00:36:47.523   19:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt=
00:36:47.523   19:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1
00:36:47.523   19:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3
00:36:47.523   19:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme*
00:36:47.523   19:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]]
00:36:47.523   19:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1
00:36:47.523   19:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme1n1
00:36:47.524   19:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]]
00:36:47.524   19:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:36:47.524   19:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1
00:36:47.524   19:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt
00:36:47.524   19:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1
00:36:47.524  No valid GPT data, bailing
00:36:47.524    19:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1
00:36:47.524   19:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt=
00:36:47.524   19:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1
00:36:47.524   19:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1
00:36:47.524   19:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]]
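[editor's note] Before exporting a namespace through the kernel target, the helper walks /sys/block/nvme*, skips zoned devices, and probes each candidate with spdk-gpt.py plus a blkid PTTYPE check; every device without a valid partition table overwrites the nvme variable, so the last qualifying one (/dev/nvme1n1 here) wins. A condensed sketch of that selection (illustrative; the real block_in_use check is more thorough):

# Sketch: pick a non-zoned NVMe block device with no partition table, keeping
# the last qualifying candidate as in the trace above.
pick_free_nvme() {
    local block dev nvme=
    for block in /sys/block/nvme*; do
        [[ -e $block ]] || continue
        dev=/dev/${block##*/}
        # skip zoned namespaces
        [[ -e $block/queue/zoned && $(cat "$block/queue/zoned") != none ]] && continue
        # skip anything that already carries a partition table
        [[ -n $(blkid -s PTTYPE -o value "$dev") ]] && continue
        nvme=$dev          # last qualifying device wins
    done
    [[ -n $nvme ]] && echo "$nvme"
}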
00:36:47.524   19:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:36:47.524   19:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:36:47.524   19:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:36:47.524   19:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn
00:36:47.524   19:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1
00:36:47.524   19:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1
00:36:47.524   19:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1
00:36:47.524   19:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1
00:36:47.524   19:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp
00:36:47.524   19:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420
00:36:47.524   19:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4
00:36:47.524   19:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/
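[editor's note] configure_kernel_target builds the kernel-side NVMe-oF target through configfs: create the subsystem and namespace directories, point the namespace at the chosen block device, create a port on 10.0.0.1:4420/TCP, and link the subsystem into the port. The mkdir/echo/ln trace above omits the redirection targets, so the sketch below fills them in with the standard nvmet configfs attribute names; treat the exact attribute paths as an assumption rather than a transcript of the script.

# Hedged sketch of the configfs layout being populated above.  Attribute file
# names (attr_serial, attr_allow_any_host, device_path, enable, addr_*) are the
# usual kernel nvmet ones and are assumed here.
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=$nvmet/ports/1

mkdir "$subsys"
mkdir "$subsys/namespaces/1"
mkdir "$port"

echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_serial"      # assumed target file
echo 1            > "$subsys/attr_allow_any_host"                  # assumed target file
echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"

echo 10.0.0.1 > "$port/addr_traddr"
echo tcp      > "$port/addr_trtype"
echo 4420     > "$port/addr_trsvcid"
echo ipv4     > "$port/addr_adrfam"

ln -s "$subsys" "$port/subsystems/"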
00:36:47.524   19:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a --hostid=8f716199-a3ae-4f70-9a3a-0556e5b7497a -a 10.0.0.1 -t tcp -s 4420
00:36:47.524  
00:36:47.524  Discovery Log Number of Records 2, Generation counter 2
00:36:47.524  =====Discovery Log Entry 0======
00:36:47.524  trtype:  tcp
00:36:47.524  adrfam:  ipv4
00:36:47.524  subtype: current discovery subsystem
00:36:47.524  treq:    not specified, sq flow control disable supported
00:36:47.524  portid:  1
00:36:47.524  trsvcid: 4420
00:36:47.524  subnqn:  nqn.2014-08.org.nvmexpress.discovery
00:36:47.524  traddr:  10.0.0.1
00:36:47.524  eflags:  none
00:36:47.524  sectype: none
00:36:47.524  =====Discovery Log Entry 1======
00:36:47.524  trtype:  tcp
00:36:47.524  adrfam:  ipv4
00:36:47.524  subtype: nvme subsystem
00:36:47.524  treq:    not specified, sq flow control disable supported
00:36:47.524  portid:  1
00:36:47.524  trsvcid: 4420
00:36:47.524  subnqn:  nqn.2016-06.io.spdk:testnqn
00:36:47.524  traddr:  10.0.0.1
00:36:47.524  eflags:  none
00:36:47.524  sectype: none
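[editor's note] The discovery log confirms the kernel target now exposes both the discovery subsystem and nqn.2016-06.io.spdk:testnqn on 10.0.0.1:4420. In this test the I/O is generated by the SPDK abort example rather than the kernel initiator, but for manual debugging an equivalent nvme-cli connect would look roughly like the sketch below; the hostnqn/hostid reuse the values generated earlier in this run, and the command is illustrative, not part of the test flow.

# Illustrative nvme-cli connect matching the discovery entry above.
nvme connect -t tcp -a 10.0.0.1 -s 4420 -n nqn.2016-06.io.spdk:testnqn \
     --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a \
     --hostid=8f716199-a3ae-4f70-9a3a-0556e5b7497a
# ...and later: nvme disconnect -n nqn.2016-06.io.spdk:testnqn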
00:36:47.524   19:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn
00:36:47.524   19:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp
00:36:47.524   19:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4
00:36:47.524   19:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1
00:36:47.524   19:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420
00:36:47.524   19:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn
00:36:47.524   19:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd
00:36:47.524   19:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r
00:36:47.524   19:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64)
00:36:47.524   19:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:36:47.524   19:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp
00:36:47.524   19:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:36:47.524   19:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4'
00:36:47.524   19:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:36:47.524   19:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1'
00:36:47.524   19:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:36:47.524   19:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420'
00:36:47.524   19:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:36:47.524   19:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:36:47.524   19:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:36:47.524   19:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:36:50.807  Initializing NVMe Controllers
00:36:50.807  Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn
00:36:50.807  Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:36:50.807  Initialization complete. Launching workers.
00:36:50.807  NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 33792, failed: 0
00:36:50.807  CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 33792, failed to submit 0
00:36:50.807  	 success 0, unsuccessful 33792, failed 0
00:36:50.807   19:21:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:36:50.807   19:21:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:36:54.092  Initializing NVMe Controllers
00:36:54.092  Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn
00:36:54.092  Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:36:54.092  Initialization complete. Launching workers.
00:36:54.092  NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 65491, failed: 0
00:36:54.092  CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 26782, failed to submit 38709
00:36:54.092  	 success 0, unsuccessful 26782, failed 0
00:36:54.092   19:21:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:36:54.092   19:21:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:36:57.377  Initializing NVMe Controllers
00:36:57.377  Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn
00:36:57.377  Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:36:57.377  Initialization complete. Launching workers.
00:36:57.377  NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 70734, failed: 0
00:36:57.377  CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 17660, failed to submit 53074
00:36:57.377  	 success 0, unsuccessful 17660, failed 0
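The three runs above are the queue-depth sweep performed by rabort(): abort_qd_sizes.sh@26 fixes qds=(4 24 64) and @32-@34 rerun the abort example once per depth against the same kernel target. A minimal stand-alone sketch of that loop, reusing the exact target string from this run:

for qd in 4 24 64; do
  # SPDK abort example: 50% read/write mix, 4 KiB I/O, abort requests issued at depth $qd
  /home/vagrant/spdk_repo/spdk/build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
done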
00:36:57.377   19:21:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target
00:36:57.377   19:21:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]]
00:36:57.377   19:21:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0
00:36:57.377   19:21:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
00:36:57.377   19:21:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:36:57.377   19:21:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1
00:36:57.377   19:21:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:36:57.377   19:21:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*)
00:36:57.377   19:21:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet
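clean_kernel_target above unwinds the configfs objects child-first: the port-to-subsystem link goes before the namespace, port and subsystem directories, and only then are the nvmet modules unloaded. A sketch of the same teardown; the target of the bare "echo 0" at @714 is not visible in the trace and is assumed here to be the namespace's enable attribute:

nqn=nqn.2016-06.io.spdk:testnqn
echo 0 > /sys/kernel/config/nvmet/subsystems/$nqn/namespaces/1/enable   # assumed target of the traced "echo 0"
rm -f /sys/kernel/config/nvmet/ports/1/subsystems/$nqn                  # drop the port->subsystem link first
rmdir /sys/kernel/config/nvmet/subsystems/$nqn/namespaces/1
rmdir /sys/kernel/config/nvmet/ports/1
rmdir /sys/kernel/config/nvmet/subsystems/$nqn
modprobe -r nvmet_tcp nvmet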
00:36:57.377   19:21:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:36:57.944  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:36:58.880  0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:36:58.880  0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:36:58.880  
00:36:58.880  real	0m12.153s
00:36:58.880  user	0m5.916s
00:36:58.880  sys	0m3.593s
00:36:58.880   19:21:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable
00:36:58.880  ************************************
00:36:58.880  END TEST kernel_target_abort
00:36:58.880  ************************************
00:36:58.880   19:21:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x
00:36:58.880   19:21:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:36:58.880   19:21:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini
00:36:58.880   19:21:30 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup
00:36:58.880   19:21:30 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync
00:36:58.880   19:21:30 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:36:58.880   19:21:30 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e
00:36:58.880   19:21:30 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20}
00:36:58.880   19:21:30 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:36:58.880  rmmod nvme_tcp
00:36:58.880  rmmod nvme_fabrics
00:36:58.880  rmmod nvme_keyring
00:36:58.880   19:21:30 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:36:58.880  Process with pid 131376 is not found
00:36:58.880   19:21:30 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e
00:36:58.880   19:21:30 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0
00:36:58.880   19:21:30 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 131376 ']'
00:36:58.880   19:21:30 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 131376
00:36:58.880   19:21:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 131376 ']'
00:36:58.880   19:21:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 131376
00:36:58.880  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (131376) - No such process
00:36:58.880   19:21:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 131376 is not found'
00:36:58.880   19:21:30 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']'
00:36:58.880   19:21:30 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:36:59.448  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:36:59.448  Waiting for block devices as requested
00:36:59.448  0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:36:59.448  0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:36:59.448   19:21:31 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:36:59.448   19:21:31 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:36:59.448   19:21:31 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr
00:36:59.448   19:21:31 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save
00:36:59.448   19:21:31 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:36:59.448   19:21:31 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore
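The iptr helper traced at nvmf/common.sh@791 removes the SPDK-specific firewall rules by round-tripping the ruleset; the three commands above amount to a single pipeline:

# Drop every rule tagged SPDK_NVMF while leaving the rest of the ruleset intact.
iptables-save | grep -v SPDK_NVMF | iptables-restore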
00:36:59.448   19:21:31 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:36:59.448   19:21:31 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:36:59.448   19:21:31 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:36:59.448   19:21:31 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:36:59.448   19:21:31 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:36:59.707   19:21:31 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:36:59.707   19:21:31 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:36:59.707   19:21:31 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:36:59.707   19:21:31 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:36:59.707   19:21:31 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:36:59.707   19:21:31 nvmf_abort_qd_sizes -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:36:59.707   19:21:31 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:36:59.707   19:21:31 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:36:59.707   19:21:31 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:36:59.707   19:21:31 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:36:59.707   19:21:31 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns
00:36:59.707   19:21:31 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:36:59.707   19:21:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null'
00:36:59.707    19:21:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:36:59.707   19:21:31 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0
00:36:59.707  
00:36:59.707  real	0m26.517s
00:36:59.707  user	0m50.962s
00:36:59.707  sys	0m6.752s
00:36:59.707  ************************************
00:36:59.707  END TEST nvmf_abort_qd_sizes
00:36:59.707  ************************************
00:36:59.707   19:21:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:36:59.707   19:21:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x
00:36:59.707   19:21:31  -- spdk/autotest.sh@292 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh
00:36:59.707   19:21:31  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:36:59.707   19:21:31  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:36:59.707   19:21:31  -- common/autotest_common.sh@10 -- # set +x
00:36:59.707  ************************************
00:36:59.707  START TEST keyring_file
00:36:59.707  ************************************
00:36:59.707   19:21:31 keyring_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh
00:36:59.966  * Looking for test storage...
00:36:59.966  * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring
00:36:59.966    19:21:31 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:36:59.966     19:21:31 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version
00:36:59.966     19:21:31 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:36:59.966    19:21:31 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:36:59.966    19:21:31 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:36:59.966    19:21:31 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l
00:36:59.966    19:21:31 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l
00:36:59.966    19:21:31 keyring_file -- scripts/common.sh@336 -- # IFS=.-:
00:36:59.966    19:21:31 keyring_file -- scripts/common.sh@336 -- # read -ra ver1
00:36:59.966    19:21:31 keyring_file -- scripts/common.sh@337 -- # IFS=.-:
00:36:59.966    19:21:31 keyring_file -- scripts/common.sh@337 -- # read -ra ver2
00:36:59.966    19:21:31 keyring_file -- scripts/common.sh@338 -- # local 'op=<'
00:36:59.966    19:21:31 keyring_file -- scripts/common.sh@340 -- # ver1_l=2
00:36:59.966    19:21:31 keyring_file -- scripts/common.sh@341 -- # ver2_l=1
00:36:59.966    19:21:31 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:36:59.966    19:21:31 keyring_file -- scripts/common.sh@344 -- # case "$op" in
00:36:59.966    19:21:31 keyring_file -- scripts/common.sh@345 -- # : 1
00:36:59.966    19:21:31 keyring_file -- scripts/common.sh@364 -- # (( v = 0 ))
00:36:59.966    19:21:31 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:36:59.966     19:21:31 keyring_file -- scripts/common.sh@365 -- # decimal 1
00:36:59.966     19:21:31 keyring_file -- scripts/common.sh@353 -- # local d=1
00:36:59.966     19:21:31 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:36:59.966     19:21:31 keyring_file -- scripts/common.sh@355 -- # echo 1
00:36:59.966    19:21:31 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1
00:36:59.966     19:21:31 keyring_file -- scripts/common.sh@366 -- # decimal 2
00:36:59.966     19:21:31 keyring_file -- scripts/common.sh@353 -- # local d=2
00:36:59.966     19:21:31 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:36:59.966     19:21:31 keyring_file -- scripts/common.sh@355 -- # echo 2
00:36:59.967    19:21:31 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2
00:36:59.967    19:21:31 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:36:59.967    19:21:31 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:36:59.967    19:21:31 keyring_file -- scripts/common.sh@368 -- # return 0
00:36:59.967    19:21:31 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:36:59.967    19:21:31 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:36:59.967  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:36:59.967  		--rc genhtml_branch_coverage=1
00:36:59.967  		--rc genhtml_function_coverage=1
00:36:59.967  		--rc genhtml_legend=1
00:36:59.967  		--rc geninfo_all_blocks=1
00:36:59.967  		--rc geninfo_unexecuted_blocks=1
00:36:59.967  		
00:36:59.967  		'
00:36:59.967    19:21:31 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:36:59.967  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:36:59.967  		--rc genhtml_branch_coverage=1
00:36:59.967  		--rc genhtml_function_coverage=1
00:36:59.967  		--rc genhtml_legend=1
00:36:59.967  		--rc geninfo_all_blocks=1
00:36:59.967  		--rc geninfo_unexecuted_blocks=1
00:36:59.967  		
00:36:59.967  		'
00:36:59.967    19:21:31 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:36:59.967  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:36:59.967  		--rc genhtml_branch_coverage=1
00:36:59.967  		--rc genhtml_function_coverage=1
00:36:59.967  		--rc genhtml_legend=1
00:36:59.967  		--rc geninfo_all_blocks=1
00:36:59.967  		--rc geninfo_unexecuted_blocks=1
00:36:59.967  		
00:36:59.967  		'
00:36:59.967    19:21:31 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:36:59.967  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:36:59.967  		--rc genhtml_branch_coverage=1
00:36:59.967  		--rc genhtml_function_coverage=1
00:36:59.967  		--rc genhtml_legend=1
00:36:59.967  		--rc geninfo_all_blocks=1
00:36:59.967  		--rc geninfo_unexecuted_blocks=1
00:36:59.967  		
00:36:59.967  		'
00:36:59.967   19:21:31 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh
00:36:59.967    19:21:31 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:36:59.967      19:21:31 keyring_file -- nvmf/common.sh@7 -- # uname -s
00:36:59.967     19:21:31 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:36:59.967     19:21:31 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:36:59.967     19:21:31 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:36:59.967     19:21:31 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:36:59.967     19:21:31 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:36:59.967     19:21:31 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:36:59.967     19:21:31 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:36:59.967     19:21:31 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:36:59.967     19:21:31 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:36:59.967      19:21:31 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:36:59.967     19:21:31 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:36:59.967     19:21:31 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:36:59.967     19:21:31 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:36:59.967     19:21:31 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:36:59.967     19:21:31 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:36:59.967     19:21:31 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:36:59.967     19:21:31 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:36:59.967      19:21:31 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob
00:36:59.967      19:21:31 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:36:59.967      19:21:31 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:36:59.967      19:21:31 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:36:59.967       19:21:31 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:59.967       19:21:31 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:59.967       19:21:31 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:59.967       19:21:31 keyring_file -- paths/export.sh@5 -- # export PATH
00:36:59.967       19:21:31 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:59.967     19:21:31 keyring_file -- nvmf/common.sh@51 -- # : 0
00:36:59.967     19:21:31 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:36:59.967     19:21:31 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:36:59.967     19:21:31 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:36:59.967     19:21:31 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:36:59.967     19:21:31 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:36:59.967     19:21:31 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:36:59.967  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:36:59.967     19:21:31 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:36:59.967     19:21:31 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:36:59.967     19:21:31 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0
00:36:59.967    19:21:31 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock
00:36:59.967   19:21:31 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0
00:36:59.967   19:21:31 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0
00:36:59.967   19:21:31 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff
00:36:59.967   19:21:31 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00
00:36:59.967   19:21:31 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT
00:36:59.967    19:21:31 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0
00:36:59.967    19:21:31 keyring_file -- keyring/common.sh@15 -- # local name key digest path
00:36:59.967    19:21:31 keyring_file -- keyring/common.sh@17 -- # name=key0
00:36:59.967    19:21:31 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff
00:36:59.967    19:21:31 keyring_file -- keyring/common.sh@17 -- # digest=0
00:36:59.967     19:21:31 keyring_file -- keyring/common.sh@18 -- # mktemp
00:36:59.967    19:21:31 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.6CLfX0lWlt
00:36:59.967    19:21:31 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0
00:36:59.967    19:21:31 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0
00:36:59.967    19:21:31 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest
00:36:59.967    19:21:31 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1
00:36:59.967    19:21:31 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff
00:36:59.967    19:21:31 keyring_file -- nvmf/common.sh@732 -- # digest=0
00:36:59.967    19:21:31 keyring_file -- nvmf/common.sh@733 -- # python -
00:37:00.226    19:21:31 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.6CLfX0lWlt
00:37:00.226    19:21:31 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.6CLfX0lWlt
00:37:00.226   19:21:31 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.6CLfX0lWlt
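prep_key above builds a TLS PSK file for the test: it creates a temp file with mktemp, converts the raw hex key into the NVMeTLSkey-1 interchange form via format_interchange_psk (the small "python -" helper at nvmf/common.sh@733), locks the file down to 0600 and hands the path back. A sketch of the same steps; the redirection of the formatted key into the file is not visible in the xtrace and is assumed here:

key=00112233445566778899aabbccddeeff
path=$(mktemp)
format_interchange_psk "$key" 0 > "$path"   # wraps the hex key as an NVMeTLSkey-1 interchange PSK (digest 0)
chmod 0600 "$path"                          # keyring_file refuses group/world-accessible key files
key0path=$path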
00:37:00.226    19:21:31 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0
00:37:00.226    19:21:31 keyring_file -- keyring/common.sh@15 -- # local name key digest path
00:37:00.226    19:21:31 keyring_file -- keyring/common.sh@17 -- # name=key1
00:37:00.226    19:21:31 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00
00:37:00.226    19:21:31 keyring_file -- keyring/common.sh@17 -- # digest=0
00:37:00.226     19:21:31 keyring_file -- keyring/common.sh@18 -- # mktemp
00:37:00.226    19:21:31 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.UifDQ44S7j
00:37:00.226    19:21:31 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0
00:37:00.226    19:21:31 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0
00:37:00.226    19:21:31 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest
00:37:00.226    19:21:31 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1
00:37:00.226    19:21:31 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00
00:37:00.226    19:21:31 keyring_file -- nvmf/common.sh@732 -- # digest=0
00:37:00.226    19:21:31 keyring_file -- nvmf/common.sh@733 -- # python -
00:37:00.226    19:21:31 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.UifDQ44S7j
00:37:00.226    19:21:31 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.UifDQ44S7j
00:37:00.226  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:37:00.226   19:21:31 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.UifDQ44S7j
00:37:00.226   19:21:31 keyring_file -- keyring/file.sh@30 -- # tgtpid=132284
00:37:00.226   19:21:31 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:37:00.226   19:21:31 keyring_file -- keyring/file.sh@32 -- # waitforlisten 132284
00:37:00.226   19:21:31 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 132284 ']'
00:37:00.226   19:21:31 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:37:00.226   19:21:31 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100
00:37:00.226   19:21:31 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:37:00.226   19:21:31 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable
00:37:00.226   19:21:31 keyring_file -- common/autotest_common.sh@10 -- # set +x
00:37:00.226  [2024-12-13 19:21:31.963099] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:37:00.226  [2024-12-13 19:21:31.963469] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132284 ]
00:37:00.485  [2024-12-13 19:21:32.117542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:37:00.485  [2024-12-13 19:21:32.160908] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:37:00.744   19:21:32 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:37:00.744   19:21:32 keyring_file -- common/autotest_common.sh@868 -- # return 0
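The lines above start the SPDK target in the background and block in waitforlisten until pid 132284 is up and its default RPC socket answers; only then are RPCs issued. Reduced to the two essential steps, using the concrete pid from this run:

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &   # pid 132284 in this run
waitforlisten 132284                                # autotest_common.sh helper; retries until /var/tmp/spdk.sock accepts RPCs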
00:37:00.744   19:21:32 keyring_file -- keyring/file.sh@33 -- # rpc_cmd
00:37:00.744   19:21:32 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:00.744   19:21:32 keyring_file -- common/autotest_common.sh@10 -- # set +x
00:37:00.744  [2024-12-13 19:21:32.473905] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:37:00.744  null0
00:37:00.744  [2024-12-13 19:21:32.505862] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:37:00.744  [2024-12-13 19:21:32.506036] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:37:00.744   19:21:32 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:00.744   19:21:32 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0
00:37:00.744   19:21:32 keyring_file -- common/autotest_common.sh@652 -- # local es=0
00:37:00.744   19:21:32 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0
00:37:00.744   19:21:32 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:37:00.744   19:21:32 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:37:00.744    19:21:32 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:37:00.744   19:21:32 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:37:00.744   19:21:32 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0
00:37:00.744   19:21:32 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:00.744   19:21:32 keyring_file -- common/autotest_common.sh@10 -- # set +x
00:37:00.744  [2024-12-13 19:21:32.537848] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists
00:37:00.744  2024/12/13 19:21:32 error on JSON-RPC call, method: nvmf_subsystem_add_listener, params: map[listen_address:map[traddr:127.0.0.1 trsvcid:4420 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode0 secure_channel:%!s(bool=false)], err: error received for nvmf_subsystem_add_listener method, err: Code=-32602 Msg=Invalid parameters
00:37:00.744  request:
00:37:00.744  {
00:37:00.744  "method": "nvmf_subsystem_add_listener",
00:37:00.744  "params": {
00:37:00.744  "nqn": "nqn.2016-06.io.spdk:cnode0",
00:37:00.744  "secure_channel": false,
00:37:00.744  "listen_address": {
00:37:00.744  "trtype": "tcp",
00:37:00.744  "traddr": "127.0.0.1",
00:37:00.744  "trsvcid": "4420"
00:37:00.744  }
00:37:00.744  }
00:37:00.744  }
00:37:00.744  Got JSON-RPC error response
00:37:00.744  GoRPCClient: error on JSON-RPC call
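This is a deliberate negative test: the target already listens on 127.0.0.1:4420 (brought up a few lines earlier), so the NOT wrapper expects this second nvmf_subsystem_add_listener to fail with "Listener already exists". The equivalent direct RPC, expected to return the same -32602 error while that listener exists:

/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
    -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0   # rejected: listener already present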
00:37:00.744   19:21:32 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:37:00.744   19:21:32 keyring_file -- common/autotest_common.sh@655 -- # es=1
00:37:00.744   19:21:32 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:37:00.744   19:21:32 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:37:00.744   19:21:32 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:37:00.744   19:21:32 keyring_file -- keyring/file.sh@47 -- # bperfpid=132311
00:37:00.744   19:21:32 keyring_file -- keyring/file.sh@49 -- # waitforlisten 132311 /var/tmp/bperf.sock
00:37:00.744   19:21:32 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z
00:37:00.744   19:21:32 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 132311 ']'
00:37:00.744   19:21:32 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:37:00.744   19:21:32 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100
00:37:00.744   19:21:32 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:37:00.744  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:37:00.744   19:21:32 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable
00:37:00.744   19:21:32 keyring_file -- common/autotest_common.sh@10 -- # set +x
00:37:01.019  [2024-12-13 19:21:32.598509] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:37:01.019  [2024-12-13 19:21:32.598814] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132311 ]
00:37:01.019  [2024-12-13 19:21:32.746314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:37:01.019  [2024-12-13 19:21:32.784946] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
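bdevperf is started idle here: -z makes it wait for an explicit perform_tests RPC, and -r points it at a private socket so the keyring setup below can talk to it (as bperfpid 132311) without touching the main target. The launch traced at file.sh@46, as a stand-alone command:

# 128-deep randrw at 4 KiB, 50% reads, 1 s runtime, core mask 0x2,
# parked until perform_tests is sent over /var/tmp/bperf.sock.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z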
00:37:01.297   19:21:32 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:37:01.297   19:21:32 keyring_file -- common/autotest_common.sh@868 -- # return 0
00:37:01.297   19:21:32 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.6CLfX0lWlt
00:37:01.297   19:21:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.6CLfX0lWlt
00:37:01.556   19:21:33 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.UifDQ44S7j
00:37:01.556   19:21:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.UifDQ44S7j
00:37:01.814    19:21:33 keyring_file -- keyring/file.sh@52 -- # get_key key0
00:37:01.814    19:21:33 keyring_file -- keyring/file.sh@52 -- # jq -r .path
00:37:01.814    19:21:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:37:01.814    19:21:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:37:01.814    19:21:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:37:02.073   19:21:33 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.6CLfX0lWlt == \/\t\m\p\/\t\m\p\.\6\C\L\f\X\0\l\W\l\t ]]
00:37:02.073    19:21:33 keyring_file -- keyring/file.sh@53 -- # get_key key1
00:37:02.073    19:21:33 keyring_file -- keyring/file.sh@53 -- # jq -r .path
00:37:02.073    19:21:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:37:02.073    19:21:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:37:02.073    19:21:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:37:02.332   19:21:34 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.UifDQ44S7j == \/\t\m\p\/\t\m\p\.\U\i\f\D\Q\4\4\S\7\j ]]
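Each bperf_cmd above is just rpc.py pointed at the bdevperf socket, and get_key filters the keyring_get_keys output with jq. Registering a key file and reading its path back therefore reduces to (the rpc wrapper below is shorthand for the full rpc.py call traced above):

rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }   # same socket as bperf_cmd
rpc keyring_file_add_key key0 /tmp/tmp.6CLfX0lWlt
rpc keyring_get_keys | jq -r '.[] | select(.name == "key0") | .path'   # -> /tmp/tmp.6CLfX0lWlt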
00:37:02.332    19:21:34 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0
00:37:02.332    19:21:34 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:37:02.332    19:21:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:37:02.332    19:21:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:37:02.332    19:21:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:37:02.332    19:21:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:37:02.591   19:21:34 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 ))
00:37:02.591    19:21:34 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1
00:37:02.591    19:21:34 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:37:02.591    19:21:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:37:02.591    19:21:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:37:02.591    19:21:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:37:02.591    19:21:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:37:02.850   19:21:34 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 ))
00:37:02.850   19:21:34 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:37:02.850   19:21:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:37:03.109  [2024-12-13 19:21:34.839108] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:37:03.109  nvme0n1
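Attaching the controller with --psk key0 is what creates nvme0n1 above and also pins the key: the refcnt checks that follow expect key0 to read 2 while the controller is attached and drop back to 1 after bdev_nvme_detach_controller. The attach as a direct RPC, exactly as traced:

/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
    -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0   # TLS PSK looked up by keyring name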
00:37:03.369    19:21:34 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0
00:37:03.369    19:21:34 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:37:03.369    19:21:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:37:03.369    19:21:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:37:03.369    19:21:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:37:03.369    19:21:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:37:03.628   19:21:35 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 ))
00:37:03.628    19:21:35 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1
00:37:03.628    19:21:35 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:37:03.628    19:21:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:37:03.628    19:21:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:37:03.628    19:21:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:37:03.628    19:21:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:37:03.886   19:21:35 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 ))
00:37:03.886   19:21:35 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:37:03.886  Running I/O for 1 seconds...
00:37:05.262      13443.00 IOPS,    52.51 MiB/s
00:37:05.262                                                                                                  Latency(us)
00:37:05.262  
[2024-12-13T19:21:37.086Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:37:05.262  Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096)
00:37:05.262  	 nvme0n1             :       1.01   13489.57      52.69       0.00     0.00    9463.94    4408.79   21448.15
00:37:05.262  
[2024-12-13T19:21:37.086Z]  ===================================================================================================================
00:37:05.262  
[2024-12-13T19:21:37.086Z]  Total                       :              13489.57      52.69       0.00     0.00    9463.94    4408.79   21448.15
00:37:05.262  {
00:37:05.262    "results": [
00:37:05.262      {
00:37:05.262        "job": "nvme0n1",
00:37:05.262        "core_mask": "0x2",
00:37:05.262        "workload": "randrw",
00:37:05.262        "percentage": 50,
00:37:05.262        "status": "finished",
00:37:05.262        "queue_depth": 128,
00:37:05.262        "io_size": 4096,
00:37:05.262        "runtime": 1.006111,
00:37:05.262        "iops": 13489.565266655469,
00:37:05.262        "mibps": 52.693614322872925,
00:37:05.262        "io_failed": 0,
00:37:05.262        "io_timeout": 0,
00:37:05.262        "avg_latency_us": 9463.94391876323,
00:37:05.262        "min_latency_us": 4408.785454545455,
00:37:05.263        "max_latency_us": 21448.145454545454
00:37:05.263      }
00:37:05.263    ],
00:37:05.263    "core_count": 1
00:37:05.263  }
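With bdevperf already running in -z mode, the 1-second workload itself is triggered through its RPC helper script rather than by restarting the app; the latency table and JSON blob above are its output. The invocation traced at file.sh@63:

# Ask the idle bdevperf instance to run the configured job and report results.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests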
00:37:05.263   19:21:36 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:37:05.263   19:21:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:37:05.263    19:21:36 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0
00:37:05.263    19:21:36 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:37:05.263    19:21:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:37:05.263    19:21:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:37:05.263    19:21:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:37:05.263    19:21:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:37:05.521   19:21:37 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 ))
00:37:05.521    19:21:37 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1
00:37:05.521    19:21:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:37:05.521    19:21:37 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:37:05.521    19:21:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:37:05.521    19:21:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:37:05.521    19:21:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:37:05.780   19:21:37 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 ))
00:37:05.780   19:21:37 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:37:05.780   19:21:37 keyring_file -- common/autotest_common.sh@652 -- # local es=0
00:37:05.780   19:21:37 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:37:05.780   19:21:37 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd
00:37:05.780   19:21:37 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:37:05.780    19:21:37 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd
00:37:05.780   19:21:37 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:37:05.780   19:21:37 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:37:05.780   19:21:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:37:06.038  [2024-12-13 19:21:37.850054] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:37:06.038  [2024-12-13 19:21:37.850830] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x785450 (107): Transport endpoint is not connected
00:37:06.038  [2024-12-13 19:21:37.851818] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x785450 (9): Bad file descriptor
00:37:06.038  [2024-12-13 19:21:37.852814] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state
00:37:06.038  [2024-12-13 19:21:37.852851] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1
00:37:06.038  [2024-12-13 19:21:37.852876] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted
00:37:06.038  [2024-12-13 19:21:37.852886] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state.
00:37:06.038  2024/12/13 19:21:37 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error
00:37:06.038  request:
00:37:06.038  {
00:37:06.038    "method": "bdev_nvme_attach_controller",
00:37:06.038    "params": {
00:37:06.038      "name": "nvme0",
00:37:06.038      "trtype": "tcp",
00:37:06.038      "traddr": "127.0.0.1",
00:37:06.038      "adrfam": "ipv4",
00:37:06.038      "trsvcid": "4420",
00:37:06.038      "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:37:06.038      "hostnqn": "nqn.2016-06.io.spdk:host0",
00:37:06.038      "prchk_reftag": false,
00:37:06.038      "prchk_guard": false,
00:37:06.038      "hdgst": false,
00:37:06.039      "ddgst": false,
00:37:06.039      "psk": "key1",
00:37:06.039      "allow_unrecognized_csi": false
00:37:06.039    }
00:37:06.039  }
00:37:06.039  Got JSON-RPC error response
00:37:06.039  GoRPCClient: error on JSON-RPC call
00:37:06.297   19:21:37 keyring_file -- common/autotest_common.sh@655 -- # es=1
00:37:06.297   19:21:37 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:37:06.297   19:21:37 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:37:06.297   19:21:37 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:37:06.297    19:21:37 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0
00:37:06.297    19:21:37 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:37:06.297    19:21:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:37:06.297    19:21:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:37:06.297    19:21:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:37:06.297    19:21:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:37:06.556   19:21:38 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 ))
00:37:06.556    19:21:38 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1
00:37:06.556    19:21:38 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:37:06.556    19:21:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:37:06.556    19:21:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:37:06.556    19:21:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:37:06.556    19:21:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:37:06.814   19:21:38 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 ))
00:37:06.815   19:21:38 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0
00:37:06.815   19:21:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0
00:37:06.815   19:21:38 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1
00:37:06.815   19:21:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1
00:37:07.073    19:21:38 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys
00:37:07.073    19:21:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:37:07.073    19:21:38 keyring_file -- keyring/file.sh@78 -- # jq length
00:37:07.332   19:21:39 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 ))
00:37:07.332   19:21:39 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.6CLfX0lWlt
00:37:07.332   19:21:39 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.6CLfX0lWlt
00:37:07.332   19:21:39 keyring_file -- common/autotest_common.sh@652 -- # local es=0
00:37:07.332   19:21:39 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.6CLfX0lWlt
00:37:07.332   19:21:39 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd
00:37:07.332   19:21:39 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:37:07.332    19:21:39 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd
00:37:07.332   19:21:39 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:37:07.332   19:21:39 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.6CLfX0lWlt
00:37:07.332   19:21:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.6CLfX0lWlt
00:37:07.590  [2024-12-13 19:21:39.285975] keyring.c:  36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.6CLfX0lWlt': 0100660
00:37:07.590  [2024-12-13 19:21:39.286016] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring
00:37:07.590  2024/12/13 19:21:39 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.6CLfX0lWlt], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted
00:37:07.590  request:
00:37:07.590  {
00:37:07.590    "method": "keyring_file_add_key",
00:37:07.590    "params": {
00:37:07.590      "name": "key0",
00:37:07.590      "path": "/tmp/tmp.6CLfX0lWlt"
00:37:07.590    }
00:37:07.590  }
00:37:07.590  Got JSON-RPC error response
00:37:07.590  GoRPCClient: error on JSON-RPC call
00:37:07.590   19:21:39 keyring_file -- common/autotest_common.sh@655 -- # es=1
00:37:07.590   19:21:39 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:37:07.590   19:21:39 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:37:07.590   19:21:39 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:37:07.590   19:21:39 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.6CLfX0lWlt
00:37:07.590   19:21:39 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.6CLfX0lWlt
00:37:07.590   19:21:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.6CLfX0lWlt
00:37:07.849   19:21:39 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.6CLfX0lWlt
00:37:07.849    19:21:39 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0
00:37:07.849    19:21:39 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:37:07.849    19:21:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:37:07.849    19:21:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:37:07.849    19:21:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:37:07.849    19:21:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:37:08.107   19:21:39 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 ))
00:37:08.107   19:21:39 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:37:08.107   19:21:39 keyring_file -- common/autotest_common.sh@652 -- # local es=0
00:37:08.107   19:21:39 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:37:08.107   19:21:39 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd
00:37:08.107   19:21:39 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:37:08.107    19:21:39 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd
00:37:08.107   19:21:39 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:37:08.107   19:21:39 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:37:08.107   19:21:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:37:08.366  [2024-12-13 19:21:40.122191] keyring.c:  31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.6CLfX0lWlt': No such file or directory
00:37:08.366  [2024-12-13 19:21:40.122271] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory
00:37:08.366  [2024-12-13 19:21:40.122294] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1
00:37:08.366  [2024-12-13 19:21:40.122304] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device
00:37:08.366  [2024-12-13 19:21:40.122315] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:37:08.366  [2024-12-13 19:21:40.122324] bdev_nvme.c:6801:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1)
00:37:08.366  2024/12/13 19:21:40 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-19 Msg=No such device
00:37:08.366  request:
00:37:08.366  {
00:37:08.366    "method": "bdev_nvme_attach_controller",
00:37:08.366    "params": {
00:37:08.366      "name": "nvme0",
00:37:08.366      "trtype": "tcp",
00:37:08.366      "traddr": "127.0.0.1",
00:37:08.366      "adrfam": "ipv4",
00:37:08.366      "trsvcid": "4420",
00:37:08.366      "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:37:08.366      "hostnqn": "nqn.2016-06.io.spdk:host0",
00:37:08.366      "prchk_reftag": false,
00:37:08.366      "prchk_guard": false,
00:37:08.366      "hdgst": false,
00:37:08.366      "ddgst": false,
00:37:08.366      "psk": "key0",
00:37:08.366      "allow_unrecognized_csi": false
00:37:08.366    }
00:37:08.366  }
00:37:08.366  Got JSON-RPC error response
00:37:08.366  GoRPCClient: error on JSON-RPC call
00:37:08.366   19:21:40 keyring_file -- common/autotest_common.sh@655 -- # es=1
00:37:08.366   19:21:40 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:37:08.366   19:21:40 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:37:08.366   19:21:40 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:37:08.366   19:21:40 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0
00:37:08.366   19:21:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0
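This failure mode is the point of the earlier rm -f: the keyring entry stores only the path, so key0 is still registered (refcnt 1) but the TLS attach fails as soon as the backing file has to be read, and the stale entry is then dropped with keyring_file_remove_key. Reproduced outside the test harness, assuming the same bperf socket and key name:

rm -f /tmp/tmp.6CLfX0lWlt
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
    -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0   # fails: Could not stat key file
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0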
00:37:08.625    19:21:40 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0
00:37:08.625    19:21:40 keyring_file -- keyring/common.sh@15 -- # local name key digest path
00:37:08.625    19:21:40 keyring_file -- keyring/common.sh@17 -- # name=key0
00:37:08.625    19:21:40 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff
00:37:08.625    19:21:40 keyring_file -- keyring/common.sh@17 -- # digest=0
00:37:08.625     19:21:40 keyring_file -- keyring/common.sh@18 -- # mktemp
00:37:08.625    19:21:40 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.KpGcAwZr6Q
00:37:08.625    19:21:40 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0
00:37:08.625    19:21:40 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0
00:37:08.625    19:21:40 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest
00:37:08.625    19:21:40 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1
00:37:08.625    19:21:40 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff
00:37:08.625    19:21:40 keyring_file -- nvmf/common.sh@732 -- # digest=0
00:37:08.625    19:21:40 keyring_file -- nvmf/common.sh@733 -- # python -
00:37:08.625    19:21:40 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.KpGcAwZr6Q
00:37:08.625    19:21:40 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.KpGcAwZr6Q
00:37:08.625   19:21:40 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.KpGcAwZr6Q
00:37:08.625   19:21:40 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.KpGcAwZr6Q
00:37:08.625   19:21:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.KpGcAwZr6Q
00:37:08.884   19:21:40 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:37:08.884   19:21:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:37:09.142  nvme0n1
00:37:09.142    19:21:40 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0
00:37:09.142    19:21:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:37:09.142    19:21:40 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:37:09.142    19:21:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:37:09.142    19:21:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:37:09.142    19:21:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:37:09.709   19:21:41 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 ))
00:37:09.709   19:21:41 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0
00:37:09.709   19:21:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0
00:37:09.709    19:21:41 keyring_file -- keyring/file.sh@102 -- # get_key key0
00:37:09.709    19:21:41 keyring_file -- keyring/file.sh@102 -- # jq -r .removed
00:37:09.709    19:21:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:37:09.709    19:21:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:37:09.709    19:21:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:37:09.968   19:21:41 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]]
00:37:09.968    19:21:41 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0
00:37:09.968    19:21:41 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:37:09.968    19:21:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:37:09.968    19:21:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:37:09.968    19:21:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:37:09.968    19:21:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:37:10.227   19:21:41 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 ))
00:37:10.227   19:21:41 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:37:10.227   19:21:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:37:10.794    19:21:42 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys
00:37:10.794    19:21:42 keyring_file -- keyring/file.sh@105 -- # jq length
00:37:10.794    19:21:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:37:10.794   19:21:42 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 ))
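Note: the checks at file.sh@100-105 above walk the key's lifecycle: after bdev_nvme_attach_controller --psk key0 its refcnt in keyring_get_keys is 2, removing it while the controller still holds it only flags it (removed == true, refcnt drops to 1), and it disappears from the listing once nvme0 is detached. The same sequence as a stand-alone sketch against the bperf socket ("rpc" is just shorthand for the rpc.py call used throughout the trace):

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
    rpc keyring_get_keys | jq -r '.[] | select(.name == "key0") | .refcnt'    # 2 while nvme0 uses the key
    rpc keyring_file_remove_key key0
    rpc keyring_get_keys | jq -r '.[] | select(.name == "key0") | .removed'   # true: removal is deferred
    rpc keyring_get_keys | jq -r '.[] | select(.name == "key0") | .refcnt'    # 1: the controller's reference remains
    rpc bdev_nvme_detach_controller nvme0
    rpc keyring_get_keys | jq length                                          # 0 once the last reference is dropped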
00:37:10.794   19:21:42 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.KpGcAwZr6Q
00:37:10.794   19:21:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.KpGcAwZr6Q
00:37:11.053   19:21:42 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.UifDQ44S7j
00:37:11.053   19:21:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.UifDQ44S7j
00:37:11.311   19:21:43 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:37:11.311   19:21:43 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:37:11.570  nvme0n1
00:37:11.570    19:21:43 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config
00:37:11.570    19:21:43 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config
00:37:12.137   19:21:43 keyring_file -- keyring/file.sh@113 -- # config='{
00:37:12.137    "subsystems": [
00:37:12.137      {
00:37:12.137        "subsystem": "keyring",
00:37:12.137        "config": [
00:37:12.137          {
00:37:12.137            "method": "keyring_file_add_key",
00:37:12.137            "params": {
00:37:12.137              "name": "key0",
00:37:12.137              "path": "/tmp/tmp.KpGcAwZr6Q"
00:37:12.137            }
00:37:12.137          },
00:37:12.137          {
00:37:12.137            "method": "keyring_file_add_key",
00:37:12.137            "params": {
00:37:12.137              "name": "key1",
00:37:12.137              "path": "/tmp/tmp.UifDQ44S7j"
00:37:12.137            }
00:37:12.137          }
00:37:12.137        ]
00:37:12.137      },
00:37:12.137      {
00:37:12.137        "subsystem": "iobuf",
00:37:12.137        "config": [
00:37:12.137          {
00:37:12.137            "method": "iobuf_set_options",
00:37:12.137            "params": {
00:37:12.137              "enable_numa": false,
00:37:12.137              "large_bufsize": 135168,
00:37:12.137              "large_pool_count": 1024,
00:37:12.137              "small_bufsize": 8192,
00:37:12.137              "small_pool_count": 8192
00:37:12.137            }
00:37:12.137          }
00:37:12.137        ]
00:37:12.137      },
00:37:12.137      {
00:37:12.137        "subsystem": "sock",
00:37:12.137        "config": [
00:37:12.137          {
00:37:12.137            "method": "sock_set_default_impl",
00:37:12.137            "params": {
00:37:12.137              "impl_name": "posix"
00:37:12.137            }
00:37:12.137          },
00:37:12.137          {
00:37:12.137            "method": "sock_impl_set_options",
00:37:12.137            "params": {
00:37:12.137              "enable_ktls": false,
00:37:12.137              "enable_placement_id": 0,
00:37:12.137              "enable_quickack": false,
00:37:12.137              "enable_recv_pipe": true,
00:37:12.137              "enable_zerocopy_send_client": false,
00:37:12.137              "enable_zerocopy_send_server": true,
00:37:12.137              "impl_name": "ssl",
00:37:12.137              "recv_buf_size": 4096,
00:37:12.137              "send_buf_size": 4096,
00:37:12.137              "tls_version": 0,
00:37:12.137              "zerocopy_threshold": 0
00:37:12.137            }
00:37:12.137          },
00:37:12.137          {
00:37:12.137            "method": "sock_impl_set_options",
00:37:12.137            "params": {
00:37:12.137              "enable_ktls": false,
00:37:12.137              "enable_placement_id": 0,
00:37:12.137              "enable_quickack": false,
00:37:12.137              "enable_recv_pipe": true,
00:37:12.137              "enable_zerocopy_send_client": false,
00:37:12.137              "enable_zerocopy_send_server": true,
00:37:12.137              "impl_name": "posix",
00:37:12.137              "recv_buf_size": 2097152,
00:37:12.137              "send_buf_size": 2097152,
00:37:12.137              "tls_version": 0,
00:37:12.137              "zerocopy_threshold": 0
00:37:12.137            }
00:37:12.137          }
00:37:12.137        ]
00:37:12.137      },
00:37:12.137      {
00:37:12.137        "subsystem": "vmd",
00:37:12.137        "config": []
00:37:12.137      },
00:37:12.137      {
00:37:12.137        "subsystem": "accel",
00:37:12.137        "config": [
00:37:12.137          {
00:37:12.137            "method": "accel_set_options",
00:37:12.137            "params": {
00:37:12.137              "buf_count": 2048,
00:37:12.137              "large_cache_size": 16,
00:37:12.137              "sequence_count": 2048,
00:37:12.137              "small_cache_size": 128,
00:37:12.137              "task_count": 2048
00:37:12.137            }
00:37:12.137          }
00:37:12.137        ]
00:37:12.137      },
00:37:12.137      {
00:37:12.137        "subsystem": "bdev",
00:37:12.137        "config": [
00:37:12.137          {
00:37:12.137            "method": "bdev_set_options",
00:37:12.137            "params": {
00:37:12.137              "bdev_auto_examine": true,
00:37:12.137              "bdev_io_cache_size": 256,
00:37:12.137              "bdev_io_pool_size": 65535,
00:37:12.137              "iobuf_large_cache_size": 16,
00:37:12.137              "iobuf_small_cache_size": 128
00:37:12.138            }
00:37:12.138          },
00:37:12.138          {
00:37:12.138            "method": "bdev_raid_set_options",
00:37:12.138            "params": {
00:37:12.138              "process_max_bandwidth_mb_sec": 0,
00:37:12.138              "process_window_size_kb": 1024
00:37:12.138            }
00:37:12.138          },
00:37:12.138          {
00:37:12.138            "method": "bdev_iscsi_set_options",
00:37:12.138            "params": {
00:37:12.138              "timeout_sec": 30
00:37:12.138            }
00:37:12.138          },
00:37:12.138          {
00:37:12.138            "method": "bdev_nvme_set_options",
00:37:12.138            "params": {
00:37:12.138              "action_on_timeout": "none",
00:37:12.138              "allow_accel_sequence": false,
00:37:12.138              "arbitration_burst": 0,
00:37:12.138              "bdev_retry_count": 3,
00:37:12.138              "ctrlr_loss_timeout_sec": 0,
00:37:12.138              "delay_cmd_submit": true,
00:37:12.138              "dhchap_dhgroups": [
00:37:12.138                "null",
00:37:12.138                "ffdhe2048",
00:37:12.138                "ffdhe3072",
00:37:12.138                "ffdhe4096",
00:37:12.138                "ffdhe6144",
00:37:12.138                "ffdhe8192"
00:37:12.138              ],
00:37:12.138              "dhchap_digests": [
00:37:12.138                "sha256",
00:37:12.138                "sha384",
00:37:12.138                "sha512"
00:37:12.138              ],
00:37:12.138              "disable_auto_failback": false,
00:37:12.138              "fast_io_fail_timeout_sec": 0,
00:37:12.138              "generate_uuids": false,
00:37:12.138              "high_priority_weight": 0,
00:37:12.138              "io_path_stat": false,
00:37:12.138              "io_queue_requests": 512,
00:37:12.138              "keep_alive_timeout_ms": 10000,
00:37:12.138              "low_priority_weight": 0,
00:37:12.138              "medium_priority_weight": 0,
00:37:12.138              "nvme_adminq_poll_period_us": 10000,
00:37:12.138              "nvme_error_stat": false,
00:37:12.138              "nvme_ioq_poll_period_us": 0,
00:37:12.138              "rdma_cm_event_timeout_ms": 0,
00:37:12.138              "rdma_max_cq_size": 0,
00:37:12.138              "rdma_srq_size": 0,
00:37:12.138              "rdma_umr_per_io": false,
00:37:12.138              "reconnect_delay_sec": 0,
00:37:12.138              "timeout_admin_us": 0,
00:37:12.138              "timeout_us": 0,
00:37:12.138              "transport_ack_timeout": 0,
00:37:12.138              "transport_retry_count": 4,
00:37:12.138              "transport_tos": 0
00:37:12.138            }
00:37:12.138          },
00:37:12.138          {
00:37:12.138            "method": "bdev_nvme_attach_controller",
00:37:12.138            "params": {
00:37:12.138              "adrfam": "IPv4",
00:37:12.138              "ctrlr_loss_timeout_sec": 0,
00:37:12.138              "ddgst": false,
00:37:12.138              "fast_io_fail_timeout_sec": 0,
00:37:12.138              "hdgst": false,
00:37:12.138              "hostnqn": "nqn.2016-06.io.spdk:host0",
00:37:12.138              "multipath": "multipath",
00:37:12.138              "name": "nvme0",
00:37:12.138              "prchk_guard": false,
00:37:12.138              "prchk_reftag": false,
00:37:12.138              "psk": "key0",
00:37:12.138              "reconnect_delay_sec": 0,
00:37:12.138              "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:37:12.138              "traddr": "127.0.0.1",
00:37:12.138              "trsvcid": "4420",
00:37:12.138              "trtype": "TCP"
00:37:12.138            }
00:37:12.138          },
00:37:12.138          {
00:37:12.138            "method": "bdev_nvme_set_hotplug",
00:37:12.138            "params": {
00:37:12.138              "enable": false,
00:37:12.138              "period_us": 100000
00:37:12.138            }
00:37:12.138          },
00:37:12.138          {
00:37:12.138            "method": "bdev_wait_for_examine"
00:37:12.138          }
00:37:12.138        ]
00:37:12.138      },
00:37:12.138      {
00:37:12.138        "subsystem": "nbd",
00:37:12.138        "config": []
00:37:12.138      }
00:37:12.138    ]
00:37:12.138  }'
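Note: file.sh@113 captures the live bperf configuration with save_config; the JSON above carries the keyring, sock and bdev subsystems, including both key files and the attached nvme0 controller with "psk": "key0". The old bperf is then killed and the same JSON is fed into a fresh bdevperf through -c /dev/fd/63 (file.sh@116-118, further down). Roughly the same thing with a temp file instead of a process-substitution fd (bperf_config.json is an illustrative name):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config > bperf_config.json
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
        -r /var/tmp/bperf.sock -z -c bperf_config.json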
00:37:12.138   19:21:43 keyring_file -- keyring/file.sh@115 -- # killprocess 132311
00:37:12.138   19:21:43 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 132311 ']'
00:37:12.138   19:21:43 keyring_file -- common/autotest_common.sh@958 -- # kill -0 132311
00:37:12.138    19:21:43 keyring_file -- common/autotest_common.sh@959 -- # uname
00:37:12.138   19:21:43 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:37:12.138    19:21:43 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 132311
00:37:12.138  killing process with pid 132311
00:37:12.138  Received shutdown signal, test time was about 1.000000 seconds
00:37:12.138  
00:37:12.138                                                                                                  Latency(us)
00:37:12.138  
[2024-12-13T19:21:43.962Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:37:12.138  
[2024-12-13T19:21:43.962Z]  ===================================================================================================================
00:37:12.138  
[2024-12-13T19:21:43.962Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:37:12.138   19:21:43 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:37:12.138   19:21:43 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:37:12.138   19:21:43 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 132311'
00:37:12.138   19:21:43 keyring_file -- common/autotest_common.sh@973 -- # kill 132311
00:37:12.138   19:21:43 keyring_file -- common/autotest_common.sh@978 -- # wait 132311
00:37:12.138  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:37:12.138   19:21:43 keyring_file -- keyring/file.sh@118 -- # bperfpid=132765
00:37:12.138   19:21:43 keyring_file -- keyring/file.sh@120 -- # waitforlisten 132765 /var/tmp/bperf.sock
00:37:12.138   19:21:43 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 132765 ']'
00:37:12.138   19:21:43 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:37:12.138   19:21:43 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100
00:37:12.138   19:21:43 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63
00:37:12.138   19:21:43 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:37:12.138   19:21:43 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable
00:37:12.138    19:21:43 keyring_file -- keyring/file.sh@116 -- # echo '{
00:37:12.138    "subsystems": [
00:37:12.138      {
00:37:12.138        "subsystem": "keyring",
00:37:12.138        "config": [
00:37:12.138          {
00:37:12.138            "method": "keyring_file_add_key",
00:37:12.138            "params": {
00:37:12.138              "name": "key0",
00:37:12.138              "path": "/tmp/tmp.KpGcAwZr6Q"
00:37:12.138            }
00:37:12.138          },
00:37:12.138          {
00:37:12.138            "method": "keyring_file_add_key",
00:37:12.138            "params": {
00:37:12.138              "name": "key1",
00:37:12.138              "path": "/tmp/tmp.UifDQ44S7j"
00:37:12.138            }
00:37:12.138          }
00:37:12.138        ]
00:37:12.138      },
00:37:12.138      {
00:37:12.138        "subsystem": "iobuf",
00:37:12.138        "config": [
00:37:12.138          {
00:37:12.138            "method": "iobuf_set_options",
00:37:12.138            "params": {
00:37:12.138              "enable_numa": false,
00:37:12.138              "large_bufsize": 135168,
00:37:12.138              "large_pool_count": 1024,
00:37:12.138              "small_bufsize": 8192,
00:37:12.138              "small_pool_count": 8192
00:37:12.138            }
00:37:12.138          }
00:37:12.138        ]
00:37:12.138      },
00:37:12.138      {
00:37:12.138        "subsystem": "sock",
00:37:12.138        "config": [
00:37:12.138          {
00:37:12.138            "method": "sock_set_default_impl",
00:37:12.138            "params": {
00:37:12.138              "impl_name": "posix"
00:37:12.138            }
00:37:12.138          },
00:37:12.138          {
00:37:12.138            "method": "sock_impl_set_options",
00:37:12.138            "params": {
00:37:12.138              "enable_ktls": false,
00:37:12.138              "enable_placement_id": 0,
00:37:12.138              "enable_quickack": false,
00:37:12.138              "enable_recv_pipe": true,
00:37:12.138              "enable_zerocopy_send_client": false,
00:37:12.138              "enable_zerocopy_send_server": true,
00:37:12.138              "impl_name": "ssl",
00:37:12.138              "recv_buf_size": 4096,
00:37:12.138              "send_buf_size": 4096,
00:37:12.138              "tls_version": 0,
00:37:12.138              "zerocopy_threshold": 0
00:37:12.138            }
00:37:12.138          },
00:37:12.138          {
00:37:12.138            "method": "sock_impl_set_options",
00:37:12.138            "params": {
00:37:12.138              "enable_ktls": false,
00:37:12.139              "enable_placement_id": 0,
00:37:12.139              "enable_quickack": false,
00:37:12.139              "enable_recv_pipe": true,
00:37:12.139              "enable_zerocopy_send_client": false,
00:37:12.139              "enable_zerocopy_send_server": true,
00:37:12.139              "impl_name": "posix",
00:37:12.139              "recv_buf_size": 2097152,
00:37:12.139              "send_buf_size": 2097152,
00:37:12.139              "tls_version": 0,
00:37:12.139              "zerocopy_threshold": 0
00:37:12.139            }
00:37:12.139          }
00:37:12.139        ]
00:37:12.139      },
00:37:12.139      {
00:37:12.139        "subsystem": "vmd",
00:37:12.139        "config": []
00:37:12.139      },
00:37:12.139      {
00:37:12.139        "subsystem": "accel",
00:37:12.139        "config": [
00:37:12.139          {
00:37:12.139            "method": "accel_set_options",
00:37:12.139            "params": {
00:37:12.139              "buf_count": 2048,
00:37:12.139              "large_cache_size": 16,
00:37:12.139              "sequence_count": 2048,
00:37:12.139              "small_cache_size": 128,
00:37:12.139              "task_count": 2048
00:37:12.139            }
00:37:12.139          }
00:37:12.139        ]
00:37:12.139      },
00:37:12.139      {
00:37:12.139        "subsystem": "bdev",
00:37:12.139        "config": [
00:37:12.139          {
00:37:12.139            "method": "bdev_set_options",
00:37:12.139            "params": {
00:37:12.139              "bdev_auto_examine": true,
00:37:12.139              "bdev_io_cache_size": 256,
00:37:12.139              "bdev_io_pool_size": 65535,
00:37:12.139              "iobuf_large_cache_size": 16,
00:37:12.139              "iobuf_small_cache_size": 128
00:37:12.139            }
00:37:12.139          },
00:37:12.139          {
00:37:12.139            "method": "bdev_raid_set_options",
00:37:12.139            "params": {
00:37:12.139              "process_max_bandwidth_mb_sec": 0,
00:37:12.139              "process_window_size_kb": 1024
00:37:12.139            }
00:37:12.139          },
00:37:12.139          {
00:37:12.139            "method": "bdev_iscsi_set_options",
00:37:12.139            "params": {
00:37:12.139              "timeout_sec": 30
00:37:12.139            }
00:37:12.139          },
00:37:12.139          {
00:37:12.139            "method": "bdev_nvme_set_options",
00:37:12.139            "params": {
00:37:12.139              "action_on_timeout": "none",
00:37:12.139              "allow_accel_sequence": false,
00:37:12.139              "arbitration_burst": 0,
00:37:12.139              "bdev_retry_count": 3,
00:37:12.139              "ctrlr_loss_timeout_sec": 0,
00:37:12.139              "delay_cmd_submit": true,
00:37:12.139              "dhchap_dhgroups": [
00:37:12.139                "null",
00:37:12.139                "ffdhe2048",
00:37:12.139                "ffdhe3072",
00:37:12.139                "ffdhe4096",
00:37:12.139                "ffdhe6144",
00:37:12.139                "ffdhe8192"
00:37:12.139              ],
00:37:12.139              "dhchap_digests": [
00:37:12.139                "sha256",
00:37:12.139                "sha384",
00:37:12.139                "sha512"
00:37:12.139              ],
00:37:12.139              "disable_auto_failback": false,
00:37:12.139              "fast_io_fail_timeout_sec": 0,
00:37:12.139              "generate_uuids": false,
00:37:12.139              "high_priority_weight": 0,
00:37:12.139              "io_path_stat": false,
00:37:12.139              "io_queue_requests": 512,
00:37:12.139              "keep_alive_timeout_ms": 10000,
00:37:12.139              "low_priority_weight": 0,
00:37:12.139              "medium_priority_weight": 0,
00:37:12.139              "nvme_adminq_poll_period_us": 10000,
00:37:12.139              "nvme_error_stat": false,
00:37:12.139              "nvme_ioq_poll_period_us": 0,
00:37:12.139              "rdma_cm_event_timeout_ms": 0,
00:37:12.139              "rdma_max_cq_size": 0,
00:37:12.139              "rdma_srq_size": 0,
00:37:12.139              "rdma_umr_per_io": false,
00:37:12.139              "reconnect_delay_sec": 0,
00:37:12.139              "timeout_admin_us": 0,
00:37:12.139              "timeout_us": 0,
00:37:12.139              "transport_ack_timeout": 0,
00:37:12.139              "transport_retry_count": 4,
00:37:12.139              "transport_tos": 0
00:37:12.139            }
00:37:12.139          },
00:37:12.139          {
00:37:12.139            "method": "bdev_nvme_attach_controller",
00:37:12.139            "params": {
00:37:12.139              "adrfam": "IPv4",
00:37:12.139              "ctrlr_loss_timeout_sec": 0,
00:37:12.139              "ddgst": false,
00:37:12.139              "fast_io_fail_timeout_sec": 0,
00:37:12.139              "hdgst": false,
00:37:12.139              "hostnqn": "nqn.2016-06.io.spdk:host0",
00:37:12.139              "multipath": "multipath",
00:37:12.139              "name": "nvme0",
00:37:12.139              "prchk_guard": false,
00:37:12.139              "prchk_reftag": false,
00:37:12.139              "psk": "key0",
00:37:12.139              "reconnect_delay_sec": 0,
00:37:12.139              "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:37:12.139              "traddr": "127.0.0.1",
00:37:12.139              "trsvcid": "4420",
00:37:12.139              "trtype": "TCP"
00:37:12.139            }
00:37:12.139          },
00:37:12.139          {
00:37:12.139            "method": "bdev_nvme_set_hotplug",
00:37:12.139            "params": {
00:37:12.139              "enable": false,
00:37:12.139              "period_us": 100000
00:37:12.139            }
00:37:12.139          },
00:37:12.139          {
00:37:12.139            "method": "bdev_wait_for_examine"
00:37:12.139          }
00:37:12.139        ]
00:37:12.139      },
00:37:12.139      {
00:37:12.139        "subsystem": "nbd",
00:37:12.139        "config": []
00:37:12.139      }
00:37:12.139    ]
00:37:12.139  }'
00:37:12.139   19:21:43 keyring_file -- common/autotest_common.sh@10 -- # set +x
00:37:12.139  [2024-12-13 19:21:43.934553] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:37:12.139  [2024-12-13 19:21:43.935006] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132765 ]
00:37:12.397  [2024-12-13 19:21:44.067170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:37:12.397  [2024-12-13 19:21:44.107353] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:37:12.655  [2024-12-13 19:21:44.284855] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:37:13.222   19:21:44 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:37:13.222   19:21:44 keyring_file -- common/autotest_common.sh@868 -- # return 0
00:37:13.222    19:21:44 keyring_file -- keyring/file.sh@121 -- # jq length
00:37:13.222    19:21:44 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys
00:37:13.222    19:21:44 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:37:13.486   19:21:45 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 ))
00:37:13.486    19:21:45 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0
00:37:13.486    19:21:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:37:13.486    19:21:45 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:37:13.486    19:21:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:37:13.486    19:21:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:37:13.486    19:21:45 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:37:13.744   19:21:45 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 ))
00:37:13.744    19:21:45 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1
00:37:13.744    19:21:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:37:13.744    19:21:45 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:37:13.744    19:21:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:37:13.744    19:21:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:37:13.744    19:21:45 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:37:14.003   19:21:45 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 ))
00:37:14.003    19:21:45 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers
00:37:14.003    19:21:45 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name'
00:37:14.003    19:21:45 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers
00:37:14.262   19:21:46 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]]
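Note: after the config-driven restart, file.sh@121-124 confirms that both keys and the nvme0 controller came back purely from the JSON: keyring_get_keys reports length 2 (key0 at refcnt 2, key1 at 1) and bdev_nvme_get_controllers lists nvme0. Stand-alone, the verification is just:

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
    rpc keyring_get_keys | jq length                    # 2
    rpc bdev_nvme_get_controllers | jq -r '.[].name'    # nvme0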
00:37:14.262   19:21:46 keyring_file -- keyring/file.sh@1 -- # cleanup
00:37:14.262   19:21:46 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.KpGcAwZr6Q /tmp/tmp.UifDQ44S7j
00:37:14.262   19:21:46 keyring_file -- keyring/file.sh@20 -- # killprocess 132765
00:37:14.262   19:21:46 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 132765 ']'
00:37:14.262   19:21:46 keyring_file -- common/autotest_common.sh@958 -- # kill -0 132765
00:37:14.262    19:21:46 keyring_file -- common/autotest_common.sh@959 -- # uname
00:37:14.262   19:21:46 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:37:14.262    19:21:46 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 132765
00:37:14.262   19:21:46 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:37:14.262  killing process with pid 132765
00:37:14.262   19:21:46 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:37:14.262   19:21:46 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 132765'
00:37:14.262  Received shutdown signal, test time was about 1.000000 seconds
00:37:14.262  
00:37:14.262                                                                                                  Latency(us)
00:37:14.262  
[2024-12-13T19:21:46.086Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:37:14.262  
[2024-12-13T19:21:46.086Z]  ===================================================================================================================
00:37:14.262  
[2024-12-13T19:21:46.086Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00 18446744073709551616.00       0.00
00:37:14.262   19:21:46 keyring_file -- common/autotest_common.sh@973 -- # kill 132765
00:37:14.262   19:21:46 keyring_file -- common/autotest_common.sh@978 -- # wait 132765
00:37:14.521   19:21:46 keyring_file -- keyring/file.sh@21 -- # killprocess 132284
00:37:14.521   19:21:46 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 132284 ']'
00:37:14.521   19:21:46 keyring_file -- common/autotest_common.sh@958 -- # kill -0 132284
00:37:14.521    19:21:46 keyring_file -- common/autotest_common.sh@959 -- # uname
00:37:14.521   19:21:46 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:37:14.521    19:21:46 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 132284
00:37:14.521   19:21:46 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:37:14.521  killing process with pid 132284
00:37:14.521   19:21:46 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:37:14.521   19:21:46 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 132284'
00:37:14.521   19:21:46 keyring_file -- common/autotest_common.sh@973 -- # kill 132284
00:37:14.521   19:21:46 keyring_file -- common/autotest_common.sh@978 -- # wait 132284
00:37:15.089  
00:37:15.089  real	0m15.244s
00:37:15.089  user	0m38.214s
00:37:15.089  sys	0m3.254s
00:37:15.089   19:21:46 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable
00:37:15.089  ************************************
00:37:15.089  END TEST keyring_file
00:37:15.089  ************************************
00:37:15.089   19:21:46 keyring_file -- common/autotest_common.sh@10 -- # set +x
00:37:15.089   19:21:46  -- spdk/autotest.sh@293 -- # [[ y == y ]]
00:37:15.089   19:21:46  -- spdk/autotest.sh@294 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh
00:37:15.089   19:21:46  -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:37:15.089   19:21:46  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:37:15.089   19:21:46  -- common/autotest_common.sh@10 -- # set +x
00:37:15.089  ************************************
00:37:15.089  START TEST keyring_linux
00:37:15.089  ************************************
00:37:15.089   19:21:46 keyring_linux -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh
00:37:15.089  Joined session keyring: 65772057
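Note: keyring_linux runs under scripts/keyctl-session-wrapper, and the "Joined session keyring" line above is keyctl confirming the new session. A guess at the equivalent invocation, assuming the wrapper is a thin shim around keyctl session (the serial printed differs per run):

    # Run the test inside a fresh, anonymous session keyring so the keys it
    # adds with "keyctl add ... @s" do not leak into the caller's session.
    keyctl session - /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh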
00:37:15.348  * Looking for test storage...
00:37:15.348  * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring
00:37:15.348    19:21:46 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:37:15.348     19:21:46 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version
00:37:15.348     19:21:46 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:37:15.348    19:21:47 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:37:15.348    19:21:47 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:37:15.348    19:21:47 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l
00:37:15.348    19:21:47 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l
00:37:15.348    19:21:47 keyring_linux -- scripts/common.sh@336 -- # IFS=.-:
00:37:15.348    19:21:47 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1
00:37:15.348    19:21:47 keyring_linux -- scripts/common.sh@337 -- # IFS=.-:
00:37:15.348    19:21:47 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2
00:37:15.348    19:21:47 keyring_linux -- scripts/common.sh@338 -- # local 'op=<'
00:37:15.348    19:21:47 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2
00:37:15.348    19:21:47 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1
00:37:15.348    19:21:47 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:37:15.348    19:21:47 keyring_linux -- scripts/common.sh@344 -- # case "$op" in
00:37:15.348    19:21:47 keyring_linux -- scripts/common.sh@345 -- # : 1
00:37:15.348    19:21:47 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 ))
00:37:15.348    19:21:47 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:37:15.348     19:21:47 keyring_linux -- scripts/common.sh@365 -- # decimal 1
00:37:15.348     19:21:47 keyring_linux -- scripts/common.sh@353 -- # local d=1
00:37:15.348     19:21:47 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:37:15.348     19:21:47 keyring_linux -- scripts/common.sh@355 -- # echo 1
00:37:15.348    19:21:47 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1
00:37:15.348     19:21:47 keyring_linux -- scripts/common.sh@366 -- # decimal 2
00:37:15.348     19:21:47 keyring_linux -- scripts/common.sh@353 -- # local d=2
00:37:15.348     19:21:47 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:37:15.348     19:21:47 keyring_linux -- scripts/common.sh@355 -- # echo 2
00:37:15.348    19:21:47 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2
00:37:15.348    19:21:47 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:37:15.348    19:21:47 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:37:15.348    19:21:47 keyring_linux -- scripts/common.sh@368 -- # return 0
00:37:15.348    19:21:47 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:37:15.348    19:21:47 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:37:15.348  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:37:15.348  		--rc genhtml_branch_coverage=1
00:37:15.348  		--rc genhtml_function_coverage=1
00:37:15.348  		--rc genhtml_legend=1
00:37:15.348  		--rc geninfo_all_blocks=1
00:37:15.348  		--rc geninfo_unexecuted_blocks=1
00:37:15.348  		
00:37:15.348  		'
00:37:15.348    19:21:47 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:37:15.348  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:37:15.348  		--rc genhtml_branch_coverage=1
00:37:15.348  		--rc genhtml_function_coverage=1
00:37:15.348  		--rc genhtml_legend=1
00:37:15.348  		--rc geninfo_all_blocks=1
00:37:15.348  		--rc geninfo_unexecuted_blocks=1
00:37:15.348  		
00:37:15.348  		'
00:37:15.348    19:21:47 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:37:15.348  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:37:15.348  		--rc genhtml_branch_coverage=1
00:37:15.348  		--rc genhtml_function_coverage=1
00:37:15.348  		--rc genhtml_legend=1
00:37:15.348  		--rc geninfo_all_blocks=1
00:37:15.348  		--rc geninfo_unexecuted_blocks=1
00:37:15.348  		
00:37:15.348  		'
00:37:15.348    19:21:47 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:37:15.348  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:37:15.348  		--rc genhtml_branch_coverage=1
00:37:15.348  		--rc genhtml_function_coverage=1
00:37:15.348  		--rc genhtml_legend=1
00:37:15.348  		--rc geninfo_all_blocks=1
00:37:15.348  		--rc geninfo_unexecuted_blocks=1
00:37:15.348  		
00:37:15.348  		'
00:37:15.348   19:21:47 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh
00:37:15.348    19:21:47 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:37:15.348      19:21:47 keyring_linux -- nvmf/common.sh@7 -- # uname -s
00:37:15.348     19:21:47 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:37:15.348     19:21:47 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:37:15.348     19:21:47 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:37:15.348     19:21:47 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:37:15.348     19:21:47 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:37:15.348     19:21:47 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:37:15.348     19:21:47 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:37:15.348     19:21:47 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:37:15.349     19:21:47 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:37:15.349      19:21:47 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:37:15.349     19:21:47 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:37:15.349     19:21:47 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=8f716199-a3ae-4f70-9a3a-0556e5b7497a
00:37:15.349     19:21:47 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:37:15.349     19:21:47 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:37:15.349     19:21:47 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:37:15.349     19:21:47 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:37:15.349     19:21:47 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:37:15.349      19:21:47 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob
00:37:15.349      19:21:47 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:37:15.349      19:21:47 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:37:15.349      19:21:47 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:37:15.349       19:21:47 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:15.349       19:21:47 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:15.349       19:21:47 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:15.349       19:21:47 keyring_linux -- paths/export.sh@5 -- # export PATH
00:37:15.349       19:21:47 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:15.349     19:21:47 keyring_linux -- nvmf/common.sh@51 -- # : 0
00:37:15.349     19:21:47 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:37:15.349     19:21:47 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:37:15.349     19:21:47 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:37:15.349     19:21:47 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:37:15.349     19:21:47 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:37:15.349     19:21:47 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:37:15.349  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:37:15.349     19:21:47 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:37:15.349     19:21:47 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:37:15.349     19:21:47 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0
00:37:15.349    19:21:47 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock
00:37:15.349   19:21:47 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0
00:37:15.349   19:21:47 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0
00:37:15.349   19:21:47 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff
00:37:15.349   19:21:47 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00
00:37:15.349   19:21:47 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT
00:37:15.349   19:21:47 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0
00:37:15.349   19:21:47 keyring_linux -- keyring/common.sh@15 -- # local name key digest path
00:37:15.349   19:21:47 keyring_linux -- keyring/common.sh@17 -- # name=key0
00:37:15.349   19:21:47 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff
00:37:15.349   19:21:47 keyring_linux -- keyring/common.sh@17 -- # digest=0
00:37:15.349   19:21:47 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0
00:37:15.349   19:21:47 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0
00:37:15.349   19:21:47 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0
00:37:15.349   19:21:47 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest
00:37:15.349   19:21:47 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1
00:37:15.349   19:21:47 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff
00:37:15.349   19:21:47 keyring_linux -- nvmf/common.sh@732 -- # digest=0
00:37:15.349   19:21:47 keyring_linux -- nvmf/common.sh@733 -- # python -
00:37:15.349   19:21:47 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0
00:37:15.349  /tmp/:spdk-test:key0
00:37:15.349   19:21:47 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0
00:37:15.349   19:21:47 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1
00:37:15.349   19:21:47 keyring_linux -- keyring/common.sh@15 -- # local name key digest path
00:37:15.349   19:21:47 keyring_linux -- keyring/common.sh@17 -- # name=key1
00:37:15.349   19:21:47 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00
00:37:15.349   19:21:47 keyring_linux -- keyring/common.sh@17 -- # digest=0
00:37:15.349   19:21:47 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1
00:37:15.349   19:21:47 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0
00:37:15.349   19:21:47 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0
00:37:15.349   19:21:47 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest
00:37:15.349   19:21:47 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1
00:37:15.349   19:21:47 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00
00:37:15.349   19:21:47 keyring_linux -- nvmf/common.sh@732 -- # digest=0
00:37:15.349   19:21:47 keyring_linux -- nvmf/common.sh@733 -- # python -
00:37:15.349   19:21:47 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1
00:37:15.349  /tmp/:spdk-test:key1
00:37:15.349   19:21:47 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1
00:37:15.349   19:21:47 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=132923
00:37:15.349   19:21:47 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:37:15.349   19:21:47 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 132923
00:37:15.349   19:21:47 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 132923 ']'
00:37:15.349   19:21:47 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:37:15.349   19:21:47 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100
00:37:15.349   19:21:47 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:37:15.349  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:37:15.349   19:21:47 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable
00:37:15.349   19:21:47 keyring_linux -- common/autotest_common.sh@10 -- # set +x
00:37:15.608  [2024-12-13 19:21:47.219868] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:37:15.608  [2024-12-13 19:21:47.219986] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132923 ]
00:37:15.609  [2024-12-13 19:21:47.369512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:37:15.609  [2024-12-13 19:21:47.406569] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:37:16.176   19:21:47 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:37:16.176   19:21:47 keyring_linux -- common/autotest_common.sh@868 -- # return 0
00:37:16.176   19:21:47 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd
00:37:16.176   19:21:47 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:16.176   19:21:47 keyring_linux -- common/autotest_common.sh@10 -- # set +x
00:37:16.176  [2024-12-13 19:21:47.758249] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:37:16.176  null0
00:37:16.176  [2024-12-13 19:21:47.790171] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:37:16.176  [2024-12-13 19:21:47.790404] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:37:16.176   19:21:47 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:16.176   19:21:47 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s
00:37:16.176  906659757
00:37:16.176   19:21:47 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s
00:37:16.176  778143634
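Note: linux.sh@66-67 loads the two interchange-format PSKs into the session keyring with keyctl; 906659757 and 778143634 are the kernel key serials, which the test later resolves by name (linux.sh@16 and @27, further down). The same round trip by hand (serials will differ outside this run):

    keyctl add user :spdk-test:key0 \
        "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s
    sn=$(keyctl search @s user :spdk-test:key0)   # name -> serial lookup in the session keyring
    keyctl print "$sn"                            # dumps the stored PSK string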
00:37:16.176   19:21:47 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=132950
00:37:16.176   19:21:47 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc
00:37:16.176   19:21:47 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 132950 /var/tmp/bperf.sock
00:37:16.176   19:21:47 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 132950 ']'
00:37:16.176   19:21:47 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:37:16.176   19:21:47 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100
00:37:16.176  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:37:16.176   19:21:47 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:37:16.176   19:21:47 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable
00:37:16.176   19:21:47 keyring_linux -- common/autotest_common.sh@10 -- # set +x
00:37:16.176  [2024-12-13 19:21:47.864650] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:37:16.176  [2024-12-13 19:21:47.864751] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132950 ]
00:37:16.435  [2024-12-13 19:21:48.011676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:37:16.435  [2024-12-13 19:21:48.050186] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:37:16.435   19:21:48 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:37:16.435   19:21:48 keyring_linux -- common/autotest_common.sh@868 -- # return 0
00:37:16.435   19:21:48 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable
00:37:16.435   19:21:48 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable
00:37:16.694   19:21:48 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init
00:37:16.694   19:21:48 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:37:16.953   19:21:48 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
00:37:16.953   19:21:48 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
00:37:17.211  [2024-12-13 19:21:48.914486] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:37:17.211  nvme0n1
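Note: this bdevperf was launched with --wait-for-rpc (linux.sh@68), so it sits in a pre-init state until told otherwise; linux.sh@73-75 enables the Linux-keyring backend, finishes framework init, and only then attaches the controller using a kernel-keyring key name instead of a file-backed key. As a sketch:

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
    rpc keyring_linux_set_options --enable   # enabled before framework_start_init, matching the trace
    rpc framework_start_init
    rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0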
00:37:17.211   19:21:49 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0
00:37:17.211   19:21:49 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0
00:37:17.211   19:21:49 keyring_linux -- keyring/linux.sh@20 -- # local sn
00:37:17.211    19:21:49 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys
00:37:17.211    19:21:49 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:37:17.211    19:21:49 keyring_linux -- keyring/linux.sh@22 -- # jq length
00:37:17.779   19:21:49 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count ))
00:37:17.779   19:21:49 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 ))
00:37:17.779    19:21:49 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0
00:37:17.779    19:21:49 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn
00:37:17.779    19:21:49 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")'
00:37:17.779    19:21:49 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:37:17.779    19:21:49 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:37:18.038   19:21:49 keyring_linux -- keyring/linux.sh@25 -- # sn=906659757
00:37:18.038    19:21:49 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0
00:37:18.038    19:21:49 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0
00:37:18.038   19:21:49 keyring_linux -- keyring/linux.sh@26 -- # [[ 906659757 == \9\0\6\6\5\9\7\5\7 ]]
00:37:18.038    19:21:49 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 906659757
00:37:18.038   19:21:49 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]]
00:37:18.038   19:21:49 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:37:18.038  Running I/O for 1 seconds...
00:37:18.973      13461.00 IOPS,    52.58 MiB/s
00:37:18.973                                                                                                  Latency(us)
00:37:18.973  
[2024-12-13T19:21:50.797Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:37:18.973  Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:37:18.973  	 nvme0n1             :       1.01   13463.90      52.59       0.00     0.00    9457.09    2993.80   13345.51
00:37:18.973  
[2024-12-13T19:21:50.797Z]  ===================================================================================================================
00:37:18.973  
[2024-12-13T19:21:50.797Z]  Total                       :              13463.90      52.59       0.00     0.00    9457.09    2993.80   13345.51
00:37:18.973  {
00:37:18.973    "results": [
00:37:18.973      {
00:37:18.973        "job": "nvme0n1",
00:37:18.973        "core_mask": "0x2",
00:37:18.973        "workload": "randread",
00:37:18.973        "status": "finished",
00:37:18.973        "queue_depth": 128,
00:37:18.973        "io_size": 4096,
00:37:18.973        "runtime": 1.009366,
00:37:18.973        "iops": 13463.897139392451,
00:37:18.973        "mibps": 52.59334820075176,
00:37:18.973        "io_failed": 0,
00:37:18.973        "io_timeout": 0,
00:37:18.973        "avg_latency_us": 9457.08558967155,
00:37:18.973        "min_latency_us": 2993.8036363636365,
00:37:18.973        "max_latency_us": 13345.512727272728
00:37:18.973      }
00:37:18.973    ],
00:37:18.973    "core_count": 1
00:37:18.973  }
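Note: linux.sh@79 drives the I/O through bdevperf's RPC helper; the ASCII table and the JSON block above are the result of that single call for the one-second randread run. Stand-alone:

    # Run the already-configured job set and collect per-job results over RPC.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests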
00:37:18.973   19:21:50 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:37:18.973   19:21:50 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:37:19.541   19:21:51 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0
00:37:19.541   19:21:51 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name=
00:37:19.541   19:21:51 keyring_linux -- keyring/linux.sh@20 -- # local sn
00:37:19.541    19:21:51 keyring_linux -- keyring/linux.sh@22 -- # jq length
00:37:19.541    19:21:51 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys
00:37:19.541    19:21:51 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:37:19.541   19:21:51 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count ))
00:37:19.541   19:21:51 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 ))
00:37:19.541   19:21:51 keyring_linux -- keyring/linux.sh@23 -- # return
00:37:19.541   19:21:51 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:37:19.541   19:21:51 keyring_linux -- common/autotest_common.sh@652 -- # local es=0
00:37:19.541   19:21:51 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:37:19.541   19:21:51 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd
00:37:19.541   19:21:51 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:37:19.541    19:21:51 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd
00:37:19.541   19:21:51 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:37:19.541   19:21:51 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:37:19.541   19:21:51 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:37:20.108  [2024-12-13 19:21:51.623676] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:37:20.108  [2024-12-13 19:21:51.624343] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf883b0 (107): Transport endpoint is not connected
00:37:20.108  [2024-12-13 19:21:51.625330] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf883b0 (9): Bad file descriptor
00:37:20.108  [2024-12-13 19:21:51.626327] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state
00:37:20.108  [2024-12-13 19:21:51.626365] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1
00:37:20.108  [2024-12-13 19:21:51.626374] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted
00:37:20.108  [2024-12-13 19:21:51.626385] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state.
00:37:20.108  2024/12/13 19:21:51 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk::spdk-test:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error
00:37:20.108  request:
00:37:20.108  {
00:37:20.108    "method": "bdev_nvme_attach_controller",
00:37:20.108    "params": {
00:37:20.108      "name": "nvme0",
00:37:20.108      "trtype": "tcp",
00:37:20.108      "traddr": "127.0.0.1",
00:37:20.108      "adrfam": "ipv4",
00:37:20.108      "trsvcid": "4420",
00:37:20.108      "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:37:20.109      "hostnqn": "nqn.2016-06.io.spdk:host0",
00:37:20.109      "prchk_reftag": false,
00:37:20.109      "prchk_guard": false,
00:37:20.109      "hdgst": false,
00:37:20.109      "ddgst": false,
00:37:20.109      "psk": ":spdk-test:key1",
00:37:20.109      "allow_unrecognized_csi": false
00:37:20.109    }
00:37:20.109  }
00:37:20.109  Got JSON-RPC error response
00:37:20.109  GoRPCClient: error on JSON-RPC call
00:37:20.109   19:21:51 keyring_linux -- common/autotest_common.sh@655 -- # es=1
00:37:20.109   19:21:51 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:37:20.109   19:21:51 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:37:20.109   19:21:51 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 ))
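The attach attempt with :spdk-test:key1 above is wrapped in the NOT helper, so the Input/output error is the expected outcome and the test passes only because the RPC fails. A simplified sketch of such a negative-test wrapper (the real helper in common/autotest_common.sh also validates the argument type and treats exit codes above 128 as signals, as the es checks above show):

  NOT() {
      local es=0
      "$@" || es=$?     # run the wrapped command, capturing its exit status
      (( es != 0 ))     # succeed only when the command failed
  }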
00:37:20.109   19:21:51 keyring_linux -- keyring/linux.sh@1 -- # cleanup
00:37:20.109   19:21:51 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:37:20.109   19:21:51 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0
00:37:20.109   19:21:51 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn
00:37:20.109    19:21:51 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0
00:37:20.109    19:21:51 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0
00:37:20.109   19:21:51 keyring_linux -- keyring/linux.sh@33 -- # sn=906659757
00:37:20.109   19:21:51 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 906659757
00:37:20.109  1 links removed
00:37:20.109   19:21:51 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:37:20.109   19:21:51 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1
00:37:20.109   19:21:51 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn
00:37:20.109    19:21:51 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1
00:37:20.109    19:21:51 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1
00:37:20.109   19:21:51 keyring_linux -- keyring/linux.sh@33 -- # sn=778143634
00:37:20.109   19:21:51 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 778143634
00:37:20.109  1 links removed
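Cleanup above removes both test keys from the session keyring by serial number. A condensed sketch of the get_keysn/unlink_key pair traced above (the key names are from this run; serial numbers such as 906659757 differ between runs):

  for name in :spdk-test:key0 :spdk-test:key1; do
      sn=$(keyctl search @s user "$name")   # resolve the key's serial in the session keyring
      keyctl unlink "$sn"                   # drop the link; keyctl reports "1 links removed"
  done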
00:37:20.109   19:21:51 keyring_linux -- keyring/linux.sh@41 -- # killprocess 132950
00:37:20.109   19:21:51 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 132950 ']'
00:37:20.109   19:21:51 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 132950
00:37:20.109    19:21:51 keyring_linux -- common/autotest_common.sh@959 -- # uname
00:37:20.109   19:21:51 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:37:20.109    19:21:51 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 132950
00:37:20.109   19:21:51 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:37:20.109   19:21:51 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:37:20.109  killing process with pid 132950
00:37:20.109   19:21:51 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 132950'
00:37:20.109   19:21:51 keyring_linux -- common/autotest_common.sh@973 -- # kill 132950
00:37:20.109  Received shutdown signal, test time was about 1.000000 seconds
00:37:20.109  
00:37:20.109                                                                                                  Latency(us)
00:37:20.109  
[2024-12-13T19:21:51.933Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:37:20.109  
[2024-12-13T19:21:51.933Z]  ===================================================================================================================
00:37:20.109  
[2024-12-13T19:21:51.933Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:37:20.109   19:21:51 keyring_linux -- common/autotest_common.sh@978 -- # wait 132950
00:37:20.109   19:21:51 keyring_linux -- keyring/linux.sh@42 -- # killprocess 132923
00:37:20.109   19:21:51 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 132923 ']'
00:37:20.109   19:21:51 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 132923
00:37:20.109    19:21:51 keyring_linux -- common/autotest_common.sh@959 -- # uname
00:37:20.109   19:21:51 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:37:20.109    19:21:51 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 132923
00:37:20.109   19:21:51 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:37:20.109   19:21:51 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:37:20.109  killing process with pid 132923
00:37:20.109   19:21:51 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 132923'
00:37:20.109   19:21:51 keyring_linux -- common/autotest_common.sh@973 -- # kill 132923
00:37:20.109   19:21:51 keyring_linux -- common/autotest_common.sh@978 -- # wait 132923
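The two service processes started earlier in the test (pids 132950 and 132923) are torn down with the same killprocess helper. A simplified sketch of what the trace above does (the real helper in common/autotest_common.sh also refuses to kill a process whose command name is sudo, which is why it inspects the ps output first; wait only works here because the pid is a child of the test shell):

  killprocess() {
      local pid=$1
      kill -0 "$pid"                        # confirm the process is still alive
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                           # reap it and collect its exit status
  }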
00:37:20.700  ************************************
00:37:20.700  END TEST keyring_linux
00:37:20.700  ************************************
00:37:20.700  
00:37:20.700  real	0m5.603s
00:37:20.700  user	0m10.597s
00:37:20.700  sys	0m1.733s
00:37:20.700   19:21:52 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable
00:37:20.700   19:21:52 keyring_linux -- common/autotest_common.sh@10 -- # set +x
00:37:20.700   19:21:52  -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:37:20.700   19:21:52  -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:37:20.700   19:21:52  -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']'
00:37:20.700   19:21:52  -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']'
00:37:20.700   19:21:52  -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:37:20.700   19:21:52  -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:37:20.700   19:21:52  -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:37:20.700   19:21:52  -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:37:20.700   19:21:52  -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:37:20.700   19:21:52  -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:37:20.700   19:21:52  -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:37:20.700   19:21:52  -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:37:20.700   19:21:52  -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:37:20.700   19:21:52  -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:37:20.700   19:21:52  -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:37:20.700   19:21:52  -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:37:20.700   19:21:52  -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:37:20.700   19:21:52  -- common/autotest_common.sh@726 -- # xtrace_disable
00:37:20.700   19:21:52  -- common/autotest_common.sh@10 -- # set +x
00:37:20.700   19:21:52  -- spdk/autotest.sh@388 -- # autotest_cleanup
00:37:20.700   19:21:52  -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:37:20.700   19:21:52  -- common/autotest_common.sh@1397 -- # xtrace_disable
00:37:20.700   19:21:52  -- common/autotest_common.sh@10 -- # set +x
00:37:22.648  INFO: APP EXITING
00:37:22.648  INFO: killing all VMs
00:37:22.648  INFO: killing vhost app
00:37:22.648  INFO: EXIT DONE
00:37:23.583  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:37:23.583  0000:00:11.0 (1b36 0010): Already using the nvme driver
00:37:23.583  0000:00:10.0 (1b36 0010): Already using the nvme driver
00:37:24.149  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:37:24.149  Cleaning
00:37:24.149  Removing:    /var/run/dpdk/spdk0/config
00:37:24.149  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:37:24.149  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:37:24.149  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:37:24.149  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:37:24.149  Removing:    /var/run/dpdk/spdk0/fbarray_memzone
00:37:24.149  Removing:    /var/run/dpdk/spdk0/hugepage_info
00:37:24.149  Removing:    /var/run/dpdk/spdk1/config
00:37:24.149  Removing:    /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:37:24.408  Removing:    /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:37:24.408  Removing:    /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:37:24.408  Removing:    /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:37:24.408  Removing:    /var/run/dpdk/spdk1/fbarray_memzone
00:37:24.408  Removing:    /var/run/dpdk/spdk1/hugepage_info
00:37:24.408  Removing:    /var/run/dpdk/spdk2/config
00:37:24.408  Removing:    /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:37:24.408  Removing:    /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:37:24.408  Removing:    /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:37:24.408  Removing:    /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:37:24.408  Removing:    /var/run/dpdk/spdk2/fbarray_memzone
00:37:24.408  Removing:    /var/run/dpdk/spdk2/hugepage_info
00:37:24.408  Removing:    /var/run/dpdk/spdk3/config
00:37:24.408  Removing:    /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:37:24.408  Removing:    /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:37:24.408  Removing:    /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:37:24.408  Removing:    /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:37:24.408  Removing:    /var/run/dpdk/spdk3/fbarray_memzone
00:37:24.408  Removing:    /var/run/dpdk/spdk3/hugepage_info
00:37:24.408  Removing:    /var/run/dpdk/spdk4/config
00:37:24.408  Removing:    /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:37:24.408  Removing:    /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:37:24.408  Removing:    /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:37:24.408  Removing:    /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:37:24.408  Removing:    /var/run/dpdk/spdk4/fbarray_memzone
00:37:24.408  Removing:    /var/run/dpdk/spdk4/hugepage_info
00:37:24.408  Removing:    /dev/shm/nvmf_trace.0
00:37:24.408  Removing:    /dev/shm/spdk_tgt_trace.pid73355
00:37:24.408  Removing:    /var/run/dpdk/spdk0
00:37:24.408  Removing:    /var/run/dpdk/spdk1
00:37:24.408  Removing:    /var/run/dpdk/spdk2
00:37:24.408  Removing:    /var/run/dpdk/spdk3
00:37:24.408  Removing:    /var/run/dpdk/spdk4
00:37:24.408  Removing:    /var/run/dpdk/spdk_pid101289
00:37:24.408  Removing:    /var/run/dpdk/spdk_pid101778
00:37:24.408  Removing:    /var/run/dpdk/spdk_pid101886
00:37:24.408  Removing:    /var/run/dpdk/spdk_pid102039
00:37:24.408  Removing:    /var/run/dpdk/spdk_pid102075
00:37:24.408  Removing:    /var/run/dpdk/spdk_pid102132
00:37:24.408  Removing:    /var/run/dpdk/spdk_pid102171
00:37:24.408  Removing:    /var/run/dpdk/spdk_pid102349
00:37:24.408  Removing:    /var/run/dpdk/spdk_pid102490
00:37:24.408  Removing:    /var/run/dpdk/spdk_pid102753
00:37:24.408  Removing:    /var/run/dpdk/spdk_pid102865
00:37:24.408  Removing:    /var/run/dpdk/spdk_pid103122
00:37:24.408  Removing:    /var/run/dpdk/spdk_pid103220
00:37:24.408  Removing:    /var/run/dpdk/spdk_pid103337
00:37:24.408  Removing:    /var/run/dpdk/spdk_pid103716
00:37:24.408  Removing:    /var/run/dpdk/spdk_pid104152
00:37:24.408  Removing:    /var/run/dpdk/spdk_pid104153
00:37:24.408  Removing:    /var/run/dpdk/spdk_pid104154
00:37:24.408  Removing:    /var/run/dpdk/spdk_pid104425
00:37:24.408  Removing:    /var/run/dpdk/spdk_pid104683
00:37:24.408  Removing:    /var/run/dpdk/spdk_pid104685
00:37:24.408  Removing:    /var/run/dpdk/spdk_pid107002
00:37:24.408  Removing:    /var/run/dpdk/spdk_pid107434
00:37:24.408  Removing:    /var/run/dpdk/spdk_pid107793
00:37:24.408  Removing:    /var/run/dpdk/spdk_pid108375
00:37:24.408  Removing:    /var/run/dpdk/spdk_pid108377
00:37:24.408  Removing:    /var/run/dpdk/spdk_pid108759
00:37:24.408  Removing:    /var/run/dpdk/spdk_pid108773
00:37:24.408  Removing:    /var/run/dpdk/spdk_pid108787
00:37:24.408  Removing:    /var/run/dpdk/spdk_pid108818
00:37:24.408  Removing:    /var/run/dpdk/spdk_pid108823
00:37:24.408  Removing:    /var/run/dpdk/spdk_pid108968
00:37:24.408  Removing:    /var/run/dpdk/spdk_pid108975
00:37:24.408  Removing:    /var/run/dpdk/spdk_pid109078
00:37:24.408  Removing:    /var/run/dpdk/spdk_pid109080
00:37:24.408  Removing:    /var/run/dpdk/spdk_pid109183
00:37:24.408  Removing:    /var/run/dpdk/spdk_pid109191
00:37:24.408  Removing:    /var/run/dpdk/spdk_pid109694
00:37:24.408  Removing:    /var/run/dpdk/spdk_pid109738
00:37:24.408  Removing:    /var/run/dpdk/spdk_pid109895
00:37:24.408  Removing:    /var/run/dpdk/spdk_pid110013
00:37:24.408  Removing:    /var/run/dpdk/spdk_pid110461
00:37:24.667  Removing:    /var/run/dpdk/spdk_pid110706
00:37:24.667  Removing:    /var/run/dpdk/spdk_pid111226
00:37:24.667  Removing:    /var/run/dpdk/spdk_pid111823
00:37:24.667  Removing:    /var/run/dpdk/spdk_pid113222
00:37:24.667  Removing:    /var/run/dpdk/spdk_pid113854
00:37:24.667  Removing:    /var/run/dpdk/spdk_pid113862
00:37:24.667  Removing:    /var/run/dpdk/spdk_pid115884
00:37:24.667  Removing:    /var/run/dpdk/spdk_pid115962
00:37:24.667  Removing:    /var/run/dpdk/spdk_pid116047
00:37:24.667  Removing:    /var/run/dpdk/spdk_pid116125
00:37:24.667  Removing:    /var/run/dpdk/spdk_pid116281
00:37:24.667  Removing:    /var/run/dpdk/spdk_pid116354
00:37:24.667  Removing:    /var/run/dpdk/spdk_pid116444
00:37:24.667  Removing:    /var/run/dpdk/spdk_pid116530
00:37:24.667  Removing:    /var/run/dpdk/spdk_pid116932
00:37:24.667  Removing:    /var/run/dpdk/spdk_pid117674
00:37:24.667  Removing:    /var/run/dpdk/spdk_pid119070
00:37:24.667  Removing:    /var/run/dpdk/spdk_pid119260
00:37:24.667  Removing:    /var/run/dpdk/spdk_pid119541
00:37:24.667  Removing:    /var/run/dpdk/spdk_pid120062
00:37:24.667  Removing:    /var/run/dpdk/spdk_pid120421
00:37:24.667  Removing:    /var/run/dpdk/spdk_pid122843
00:37:24.667  Removing:    /var/run/dpdk/spdk_pid122888
00:37:24.667  Removing:    /var/run/dpdk/spdk_pid123250
00:37:24.667  Removing:    /var/run/dpdk/spdk_pid123295
00:37:24.667  Removing:    /var/run/dpdk/spdk_pid123687
00:37:24.667  Removing:    /var/run/dpdk/spdk_pid124254
00:37:24.667  Removing:    /var/run/dpdk/spdk_pid124687
00:37:24.667  Removing:    /var/run/dpdk/spdk_pid125677
00:37:24.667  Removing:    /var/run/dpdk/spdk_pid126689
00:37:24.667  Removing:    /var/run/dpdk/spdk_pid126796
00:37:24.667  Removing:    /var/run/dpdk/spdk_pid126863
00:37:24.667  Removing:    /var/run/dpdk/spdk_pid128445
00:37:24.667  Removing:    /var/run/dpdk/spdk_pid128757
00:37:24.667  Removing:    /var/run/dpdk/spdk_pid129088
00:37:24.667  Removing:    /var/run/dpdk/spdk_pid129642
00:37:24.667  Removing:    /var/run/dpdk/spdk_pid129647
00:37:24.667  Removing:    /var/run/dpdk/spdk_pid130053
00:37:24.667  Removing:    /var/run/dpdk/spdk_pid130208
00:37:24.667  Removing:    /var/run/dpdk/spdk_pid130364
00:37:24.667  Removing:    /var/run/dpdk/spdk_pid130457
00:37:24.667  Removing:    /var/run/dpdk/spdk_pid130607
00:37:24.667  Removing:    /var/run/dpdk/spdk_pid130716
00:37:24.667  Removing:    /var/run/dpdk/spdk_pid131444
00:37:24.667  Removing:    /var/run/dpdk/spdk_pid131475
00:37:24.667  Removing:    /var/run/dpdk/spdk_pid131505
00:37:24.667  Removing:    /var/run/dpdk/spdk_pid131759
00:37:24.667  Removing:    /var/run/dpdk/spdk_pid131790
00:37:24.667  Removing:    /var/run/dpdk/spdk_pid131826
00:37:24.667  Removing:    /var/run/dpdk/spdk_pid132284
00:37:24.667  Removing:    /var/run/dpdk/spdk_pid132311
00:37:24.667  Removing:    /var/run/dpdk/spdk_pid132765
00:37:24.667  Removing:    /var/run/dpdk/spdk_pid132923
00:37:24.667  Removing:    /var/run/dpdk/spdk_pid132950
00:37:24.667  Removing:    /var/run/dpdk/spdk_pid73201
00:37:24.667  Removing:    /var/run/dpdk/spdk_pid73355
00:37:24.667  Removing:    /var/run/dpdk/spdk_pid73611
00:37:24.667  Removing:    /var/run/dpdk/spdk_pid73703
00:37:24.667  Removing:    /var/run/dpdk/spdk_pid73729
00:37:24.667  Removing:    /var/run/dpdk/spdk_pid73833
00:37:24.667  Removing:    /var/run/dpdk/spdk_pid73850
00:37:24.667  Removing:    /var/run/dpdk/spdk_pid73989
00:37:24.667  Removing:    /var/run/dpdk/spdk_pid74269
00:37:24.667  Removing:    /var/run/dpdk/spdk_pid74453
00:37:24.667  Removing:    /var/run/dpdk/spdk_pid74537
00:37:24.667  Removing:    /var/run/dpdk/spdk_pid74624
00:37:24.667  Removing:    /var/run/dpdk/spdk_pid74708
00:37:24.667  Removing:    /var/run/dpdk/spdk_pid74746
00:37:24.667  Removing:    /var/run/dpdk/spdk_pid74782
00:37:24.667  Removing:    /var/run/dpdk/spdk_pid74846
00:37:24.667  Removing:    /var/run/dpdk/spdk_pid74951
00:37:24.667  Removing:    /var/run/dpdk/spdk_pid75572
00:37:24.667  Removing:    /var/run/dpdk/spdk_pid75636
00:37:24.667  Removing:    /var/run/dpdk/spdk_pid75687
00:37:24.667  Removing:    /var/run/dpdk/spdk_pid75707
00:37:24.667  Removing:    /var/run/dpdk/spdk_pid75786
00:37:24.668  Removing:    /var/run/dpdk/spdk_pid75795
00:37:24.668  Removing:    /var/run/dpdk/spdk_pid75876
00:37:24.668  Removing:    /var/run/dpdk/spdk_pid75895
00:37:24.668  Removing:    /var/run/dpdk/spdk_pid75942
00:37:24.668  Removing:    /var/run/dpdk/spdk_pid75964
00:37:24.668  Removing:    /var/run/dpdk/spdk_pid76010
00:37:24.926  Removing:    /var/run/dpdk/spdk_pid76021
00:37:24.926  Removing:    /var/run/dpdk/spdk_pid76180
00:37:24.926  Removing:    /var/run/dpdk/spdk_pid76210
00:37:24.926  Removing:    /var/run/dpdk/spdk_pid76298
00:37:24.926  Removing:    /var/run/dpdk/spdk_pid76758
00:37:24.926  Removing:    /var/run/dpdk/spdk_pid77127
00:37:24.926  Removing:    /var/run/dpdk/spdk_pid79638
00:37:24.926  Removing:    /var/run/dpdk/spdk_pid79689
00:37:24.926  Removing:    /var/run/dpdk/spdk_pid80032
00:37:24.926  Removing:    /var/run/dpdk/spdk_pid80078
00:37:24.926  Removing:    /var/run/dpdk/spdk_pid80478
00:37:24.926  Removing:    /var/run/dpdk/spdk_pid81057
00:37:24.926  Removing:    /var/run/dpdk/spdk_pid81484
00:37:24.926  Removing:    /var/run/dpdk/spdk_pid82517
00:37:24.926  Removing:    /var/run/dpdk/spdk_pid83574
00:37:24.926  Removing:    /var/run/dpdk/spdk_pid83691
00:37:24.926  Removing:    /var/run/dpdk/spdk_pid83759
00:37:24.926  Removing:    /var/run/dpdk/spdk_pid85355
00:37:24.926  Removing:    /var/run/dpdk/spdk_pid85703
00:37:24.926  Removing:    /var/run/dpdk/spdk_pid92886
00:37:24.926  Removing:    /var/run/dpdk/spdk_pid93321
00:37:24.926  Removing:    /var/run/dpdk/spdk_pid93933
00:37:24.926  Removing:    /var/run/dpdk/spdk_pid94408
00:37:24.926  Removing:    /var/run/dpdk/spdk_pid94410
00:37:24.926  Removing:    /var/run/dpdk/spdk_pid94468
00:37:24.926  Removing:    /var/run/dpdk/spdk_pid94527
00:37:24.926  Removing:    /var/run/dpdk/spdk_pid94588
00:37:24.926  Removing:    /var/run/dpdk/spdk_pid94626
00:37:24.926  Removing:    /var/run/dpdk/spdk_pid94633
00:37:24.926  Removing:    /var/run/dpdk/spdk_pid94659
00:37:24.926  Removing:    /var/run/dpdk/spdk_pid94698
00:37:24.926  Removing:    /var/run/dpdk/spdk_pid94700
00:37:24.926  Removing:    /var/run/dpdk/spdk_pid94764
00:37:24.926  Removing:    /var/run/dpdk/spdk_pid94823
00:37:24.926  Removing:    /var/run/dpdk/spdk_pid94880
00:37:24.926  Removing:    /var/run/dpdk/spdk_pid94923
00:37:24.926  Removing:    /var/run/dpdk/spdk_pid94931
00:37:24.926  Removing:    /var/run/dpdk/spdk_pid94956
00:37:24.926  Removing:    /var/run/dpdk/spdk_pid95261
00:37:24.926  Removing:    /var/run/dpdk/spdk_pid95403
00:37:24.926  Removing:    /var/run/dpdk/spdk_pid95634
00:37:24.926  Clean
00:37:24.926   19:21:56  -- common/autotest_common.sh@1453 -- # return 0
00:37:24.926   19:21:56  -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:37:24.926   19:21:56  -- common/autotest_common.sh@732 -- # xtrace_disable
00:37:24.926   19:21:56  -- common/autotest_common.sh@10 -- # set +x
00:37:24.926   19:21:56  -- spdk/autotest.sh@391 -- # timing_exit autotest
00:37:24.926   19:21:56  -- common/autotest_common.sh@732 -- # xtrace_disable
00:37:24.926   19:21:56  -- common/autotest_common.sh@10 -- # set +x
00:37:25.185   19:21:56  -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:37:25.185   19:21:56  -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:37:25.185   19:21:56  -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:37:25.185   19:21:56  -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:37:25.185    19:21:56  -- spdk/autotest.sh@398 -- # hostname
00:37:25.185   19:21:56  -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:37:25.185  geninfo: WARNING: invalid characters removed from testname!
00:37:51.726   19:22:19  -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:37:51.726   19:22:22  -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:37:54.256   19:22:25  -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:37:56.787   19:22:28  -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:37:59.316   19:22:30  -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:38:01.218   19:22:32  -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:38:03.749   19:22:35  -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
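The sequence above captures test coverage on this host, merges it with the baseline capture, and then strips paths that are not SPDK source (dpdk, /usr, the vmd example, and the spdk_lspci/spdk_top apps) before deleting the intermediates. A condensed sketch of the merge-and-filter core, assuming OUT points at the output directory used above (the run above applies each remove pattern in its own lcov invocation and also enables branch/function coverage via --rc flags):

  OUT=/home/vagrant/spdk_repo/spdk/../output
  lcov -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"   # merge base + test captures
  lcov -q -r "$OUT/cov_total.info" '*/dpdk/*' '/usr/*' -o "$OUT/cov_total.info"      # drop non-SPDK paths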
00:38:03.749   19:22:35  -- spdk/autorun.sh@1 -- $ timing_finish
00:38:03.749   19:22:35  -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:38:03.749   19:22:35  -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:38:03.749   19:22:35  -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:38:03.749   19:22:35  -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:38:03.749  + [[ -n 5988 ]]
00:38:03.749  + sudo kill 5988
00:38:03.758  [Pipeline] }
00:38:03.772  [Pipeline] // timeout
00:38:03.776  [Pipeline] }
00:38:03.790  [Pipeline] // stage
00:38:03.794  [Pipeline] }
00:38:03.807  [Pipeline] // catchError
00:38:03.815  [Pipeline] stage
00:38:03.817  [Pipeline] { (Stop VM)
00:38:03.828  [Pipeline] sh
00:38:04.106  + vagrant halt
00:38:06.671  ==> default: Halting domain...
00:38:13.253  [Pipeline] sh
00:38:13.533  + vagrant destroy -f
00:38:16.818  ==> default: Removing domain...
00:38:16.831  [Pipeline] sh
00:38:17.111  + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output
00:38:17.120  [Pipeline] }
00:38:17.135  [Pipeline] // stage
00:38:17.141  [Pipeline] }
00:38:17.155  [Pipeline] // dir
00:38:17.160  [Pipeline] }
00:38:17.176  [Pipeline] // wrap
00:38:17.182  [Pipeline] }
00:38:17.194  [Pipeline] // catchError
00:38:17.204  [Pipeline] stage
00:38:17.207  [Pipeline] { (Epilogue)
00:38:17.220  [Pipeline] sh
00:38:17.502  + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:38:22.781  [Pipeline] catchError
00:38:22.783  [Pipeline] {
00:38:22.797  [Pipeline] sh
00:38:23.079  + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:38:23.337  Artifacts sizes are good
00:38:23.346  [Pipeline] }
00:38:23.360  [Pipeline] // catchError
00:38:23.371  [Pipeline] archiveArtifacts
00:38:23.378  Archiving artifacts
00:38:23.529  [Pipeline] cleanWs
00:38:23.537  [WS-CLEANUP] Deleting project workspace...
00:38:23.537  [WS-CLEANUP] Deferred wipeout is used...
00:38:23.542  [WS-CLEANUP] done
00:38:23.543  [Pipeline] }
00:38:23.555  [Pipeline] // stage
00:38:23.557  [Pipeline] }
00:38:23.566  [Pipeline] // node
00:38:23.570  [Pipeline] End of Pipeline
00:38:23.593  Finished: SUCCESS